Summarizer

Tool Count Management

Discussion of whether 80+ tools in context is the real problem, suggesting sub-agents for areas of focus rather than compressing everything

← Back to MCP server that reduces Claude Code context consumption by 98%

The discussion centers on whether managing over 80 active tools is a systemic design flaw or a result of users "holding it wrong" by failing to curate their toolsets. While some argue that such high tool counts degrade performance and should be replaced by specialized sub-agents to preserve context, others highlight the practical reality that users often accumulate bloat automatically through default server installations. Ultimately, the debate pits the need for better automated "sandboxing" of large data outputs against the belief that users must take more responsibility for chunking information to avoid overwhelming the model’s focus.

5 comments tagged with this topic

Do you need 80+ tools in context? Even if you reduced the count, why not use sub-agents for areas of focus? Context is gold, and the more of it you spend on things unrelated to the problem at hand, the worse your outcome is, even if you never hit the window limit. It would be like compressing data to fit under a string limit rather than just chunking the data.
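
The "chunk it rather than compress it" analogy can be made concrete. A minimal Python sketch, where the function name and chunk size are illustrative and not anything from the project:

```python
def read_in_chunks(path, chunk_size=64 * 1024):
    """Yield a large file piece by piece instead of loading it whole.

    Rather than squeezing everything under one fixed cap (the
    "compress to fit a string limit" approach), each chunk is handed
    to the consumer separately, so no single read has to fit.
    """
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk
```

Each chunk can be processed and discarded before the next is read, so nothing ever has to fit under the cap all at once; compression, by contrast, still forces the whole payload through in one piece.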

That's a fair point, and honestly the ideal approach. But in practice, most people don't hand-curate their MCP server list per task: they install 5-6 servers and suddenly have 80 tools loaded by default. Context-mode doesn't solve tool-definition bloat; that's the input-side problem. It handles the output side, when those tools actually run and dump data back. Even with a focused set of tools, a single Playwright snapshot or git log can burn 50k tokens. That's what gets sandboxed.
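
The output-side sandboxing this comment describes can be sketched in outline. This is an illustrative Python sketch under stated assumptions, not Context-mode's actual implementation; `sandbox_output`, the sandbox directory, and the character budget are all hypothetical names and stand-ins:

```python
import hashlib
import pathlib
import tempfile

# Hypothetical spill location and size cap; a real system would
# budget in tokens, not characters.
SANDBOX_DIR = pathlib.Path(tempfile.gettempdir()) / "tool_output_sandbox"
CHAR_BUDGET = 2000


def sandbox_output(tool_name: str, output: str, budget: int = CHAR_BUDGET) -> str:
    """Return small tool outputs verbatim; spill large ones to disk.

    Instead of dumping a 50k-token snapshot into the model's context,
    the caller gets a short preview plus a file reference it can
    query selectively later.
    """
    if len(output) <= budget:
        return output
    SANDBOX_DIR.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(output.encode()).hexdigest()[:12]
    path = SANDBOX_DIR / f"{tool_name}-{digest}.txt"
    path.write_text(output)
    preview = output[: budget // 4]
    return (
        f"[output sandboxed: {len(output)} chars saved to {path}]\n"
        f"preview:\n{preview}"
    )
```

A small result passes through untouched, while an oversized dump comes back as a short preview plus a file path the agent can search or re-read selectively, keeping the context window clear.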

> With 81+ tools active, I see your problem.

“You’re holding it wrong”? OK, or we could make it better.

Sometimes people are actually holding it wrong, though.