Summarizer

LLM Input

llm/9db4e77f-8dd5-46da-972e-40d33f3399ef/batch-7-0ac7619b-42dd-40da-99d0-47651785868a-input.json

prompt

The following is content for you to classify. Do not respond to the comments—classify them.

<topics>
1. Feasibility of Parallel Agent Workflows
   Related: Skepticism regarding the human capacity to supervise multiple AI agents simultaneously, utilizing analogies like washing dishes vs. laundry, and debating the cognitive load required for context switching between 10 active coding streams.
2. Code Quantity versus Quality
   Related: Discussions on whether generating 50-100 Pull Requests a week represents true productivity or merely 'token-maxxing', with concerns about code churn, technical debt, and the inability of humans to properly review such high volumes of generated code.
3. The One-Person Unicorn Startup
   Related: Debates on whether AI enables solo founders to build billion-dollar companies, arguing that while coding is easier, business bottlenecks like sales, marketing, and product-market fit remain unsolved by LLMs, despite rumors of stealth successes.
4. Claude Code Product Feedback
   Related: User feedback on the Claude Code CLI tool, mentioning specific bugs like terminal flickering and context loss, comparisons to tools like Codex and Cursor, and complaints about reliability and lack of basic features.
5. Cost and Access Disparities
   Related: Analysis of the financial feasibility of running Opus 4.5 agents in parallel, noting that while Anthropic employees may have unlimited access, the cost for average users would be prohibitive due to token limits and API pricing.
6. Marketing Hype and Astroturfing
   Related: Accusations that the original post and similar recent content represent a coordinated marketing campaign by Anthropic, with users expressing distrust of 'influencer' style posts and potential conflicts of interest from the tool's creator.
7. Future of Software Engineering
   Related: Existential concerns about the devaluation of coding skills, the shift from creative building to managerial reviewing of AI output, and fears that junior developers will lose the opportunity to learn through doing.
8. Technical Workflow Configurations
   Related: Specific details on managing AI agents, including the use of git worktrees for isolation, planning modes, 'teleporting' sessions between local CLI and web interfaces, and using markdown files to define agent behaviors.
9. AI Code Review Strategies
   Related: Approaches for handling AI-generated code, such as using separate AI instances to review PRs, the necessity of rigorous CI/CD guardrails, and the danger of blindly trusting 'green' tests without human oversight.
10. The Light Mode Terminal Debate
   Related: A humorous yet contentious side discussion sparked by the creator's use of a light-themed terminal, leading to arguments about eye strain, readability, astigmatism, and developer cultural norms regarding dark mode.
11. SaaS Commoditization and Moats
   Related: Predictions that AI will drive the marginal cost of software to zero, eroding traditional SaaS business models, and that future business value will rely on proprietary data, domain expertise, and distribution rather than code.
12. Agentic Limitations and Reliability
   Related: Criticisms of current AI agents acting like 'slot machines' requiring constant steering, their struggle with complex concurrency bugs, and the observation that they often produce boilerplate rather than solving deep architectural problems.
13. Corporate Adoption and Budgeting
   Related: Anecdotes about colleagues burning through massive amounts of API credits with varying degrees of success, and the disconnect between management's desire for AI productivity and the reality of review bottlenecks.
14. Context Management Techniques
   Related: Discussions on how to optimize context for AI agents, including the use of CLAUDE.md or AGENTS.md to establish rules, and the technical challenges of context limits and pruning during long sessions.
15. Vibe Coding vs. Engineering
   Related: The distinction between 'vibe coding' (iterating until it feels right without deep understanding) and traditional engineering, with experienced developers using AI as a force multiplier rather than a replacement for understanding.
0. Does not fit well in any category
</topics>

<comments_to_classify>
[
  
{
  "id": "46524404",
  "text": "I have to say it sounds insane. 5 tabs of claude, back and forth from terminal to browser - and no actual workflow detailed. Are we to believe that claude is making changes in parallel to one codebase, and if so - why?"
}
,
  
{
  "id": "46522942",
  "text": "I actually use dozens of claude codes \"in parallel\" myself (most are sitting idle for a lot of the time though). I set up a web interface and then made it usable by others at clodhost.com if anybody wants to try it (free)!"
}
,
  
{
  "id": "46525135",
  "text": "The PostToolUse hook tip for formatting Claude's code is the only actual tip here. Everything else reads like marketing copy."
}
,
  
{
  "id": "46525314",
  "text": "That’s in the docs though"
}
,
  
{
  "id": "46522878",
  "text": "It'd be nice if he explained the cost to be running 10 agents all day."
}
,
  
{
  "id": "46526422",
  "text": "Yeah... I had a fairly in-depth conversation with Claude a couple of days ago about Claude Code and the way it works, and usage limits, and comparison to how other AI coding tools work, and the extremely blunt advice from Claude was that Claude Code was not suitable for serious software development due to usage limits! (props to Anthropic for not sugar coating it!)\n\nMaybe on the Max 20x plan it becomes viable, and no doubt on the Boris Cherny unlimited usage plan it does, but it seems that without very aggressive non-stop context pruning you will rapidly hit limits and the 5-hour timeout even working with a single session, let alone 5 Claude Code sessions and another 5-10 web ones!\n\nThe key to this is the way that Claude Code (the local part) works and interacts with Claude AI (the actual model, running in the cloud). Basically Claude Code maintains the context, comprising mostly of the session history, contents of source files it has accessed, and the read/write/edit tools (based on Node.js) it is providing for Claude AI. This entire context, including all files that have been read, and the tools definitions, are sent to Claude AI (eating into your token usage limit) with EVERY request, so once Claude Code has accessed a few source files then the content of those files will \"silently\" be sent as part of every subsequent request, regardless of what it is. Claude gave me an example of where with 3 smallish files open (a few thousand lines of code), then within 5 requests the token usage might be 80,000 or so, vs the 40,000 limit of the Pro plan or 200,000 limit of the Max 5x plan. Once you hit limit then you have to wait 5 hours for a usage reset, so without Cherny's infinite usage limit this becomes a game of hurry up and wait (make 5 requests, then wait 5 hours and make 5 more).\n\nYou can restrict what source files Claude Code has access to, to try to manage context size (e.g. 
in a C++ project, let it access all the .h module definition files, but block all the .cpp ones) as well as manually inspecting the context all the time to see what is being sent that can be removed. I believe there is some automatic context compaction happening periodically too, but apparently not enough to prevent many/most people hitting usage time outs when working on larger projects.\n\nNot relevant here, but Claude also explained how Cursor manages to provide fast/cheap autocomplete using it's own models by building a vector index of the code base to only pull relevant chunks of code into the context."
}
,
  
{
  "id": "46522905",
  "text": "he's probably on the max plan ;)"
}
,
  
{
  "id": "46522995",
  "text": "so... what's he actually doing with 10 terminals of claude code?"
}
,
  
{
  "id": "46523002",
  "text": "Working on Claude Code?"
}
,
  
{
  "id": "46523139",
  "text": "Yo dawg, we heard you like for Code to code Code for you."
}
,
  
{
  "id": "46524100",
  "text": "very interesting discussion\nI don't know how can you keep up with all the comments ^_^\nso I create a review with pros and cons\nhttps://gist.github.com/notjulian/3a623d7889e5971d4b9fd1aac9..."
}
,
  
{
  "id": "46525225",
  "text": "Absolute madness and no thank you.\n\nHave others not noticed the extremely obvious astroturfing campaign specifically promoting Claude code that is mostly happening on X in recent days/weeks?"
}
,
  
{
  "id": "46523413",
  "text": "He is singlehandedly responsible for using up an entire city's worth of power and water this way?"
}
,
  
{
  "id": "46524277",
  "text": "It’s his marketing budget."
}
,
  
{
  "id": "46523165",
  "text": "Does anyone know if it’s possible to have “ultrathink” be the default instead of saying it in every prompt?"
}
,
  
{
  "id": "46523433",
  "text": "https://x.com/bcherny/status/2007892431031988385?s=20\nSeems to be moved to the default now. PSA for anyone who didn't see"
}
,
  
{
  "id": "46523363",
  "text": "Put it in CLAUDE.md because that just gets added to the prompt"
}
,
  
{
  "id": "46523432",
  "text": "Ah, thank you! Now I feel like an idiot. I guess I was thinking “ultrathink” was a specially interpreted command within claude code (sort of like a slash command)."
}
,
  
{
  "id": "46529849",
  "text": "why he open multiple tab? is it means he gives multiple task to them?"
}
,
  
{
  "id": "46529908",
  "text": "it means he develope 5 to 10 features for same repo"
}
,
  
{
  "id": "46525095",
  "text": "A classic hacker news post that will surely interest coders from all walks of life! ~\n\nAfter regular use of an AI coding assistant for some time, I see something unusual: my biggest wins came from neither better prompts nor a smarter model. They originated from the way I operated.\n\nAt first, I thought of it as autocomplete. Afterwards, similar to a junior developer. In the end, a collaborator who requires constraints.\n\nHere is a framework I have landed on.\n\nFirst Step: Request for everything. Obtain acceleration, but lots of noise.\n\nStage two: Include regulations. Less Shock, More Trust.\n\nPhase 3: Allow time for acting but don’t hesitate to perform reviews aggressively.\n\nA few habits that made a big difference.\n\nSpecify what can be touched or come into contact with.\n\nAsking it to explain differences before applying them.\n\nConsider “wrong but confident” answers as signal to tighten scope.\n\nWondering what others see only after time.\n\nWhat transformations occurred after the second or fourth week?\n\nWhen was the trust increased or reduced?\n\nWhat regulations do you wish you had added earlier?"
}
,
  
{
  "id": "46526756",
  "text": "Bun is the new Yarn?"
}
,
  
{
  "id": "46526593",
  "text": "I'm sure this works for people and I'm happy for them but this sounds like absolute hell to me.\n\nJust take out everything that I like about SWE and then just leave me to do the stuff I hate."
}
,
  
{
  "id": "46524751",
  "text": "The amount of people holding strong opinions on LLMs who openly admit they have not tried the state of the art tools is so high on Hacker news right now, that it's refreshing to get actual updates from the tool's creators.\n\nI read a comment yesterday that said something like \"many people tried LLMs early on, it was kind of janky and so they gave up, thinking LLMs are bad\". They were probably right at the time, but the tech _has_ improved since then, while those opinions have not changed much.\n\nSo, yes claude code and sonnet/opus 4.5 is another step change that you should try out. For $20/month you can run claude code in the terminal and regular claude on the web app."
}
,
  
{
  "id": "46524480",
  "text": "Let's stop linking to a pedophile-enabling website like X please"
}
,
  
{
  "id": "46524469",
  "text": "\"My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much. \"\n\nWell, of course he doesn't need to customize it. It's already working the way he wants it, seeing as how he created it"
}
,
  
{
  "id": "46527984",
  "text": "Another Claude related article. It's starting to feel like spam now."
}
,
  
{
  "id": "46523262",
  "text": "> [I'm] the creator of Claude Code.\n\nbut also\n\n> Claude Code works great out of the box, so I personally don't customize it much.\n\nAm I the only one to notice the irony of this juxtaposition?"
}
,
  
{
  "id": "46523373",
  "text": "For lots of software unless you really know what you are doing it's best to just leave the default settings alone and not dig too deep into what's not immediately intended to do. For my application lots of bug reports come from people using our advanced settings without reading any of the instructions at all and screwing it up\n\nSo in the case of him being the creator obviously he built it for his needs"
}
,
  
{
  "id": "46525331",
  "text": "What’s ironic? He made a good product that works well without needing to configure it?"
}
,
  
{
  "id": "46529267",
  "text": "He doesn't need to configure it because he made his preferences the default."
}
,
  
{
  "id": "46525365",
  "text": "I'm a heavy claude code user but this is starting to smell like BS. There is nothing special in claude code, opus is a good model and with lots of requests it can give good results. There is nothing unique to it."
}
,
  
{
  "id": "46522859",
  "text": "Absolutely shocking... Boris uses a light themed terminal?! Kidding aside, these were great tips. I am quite intrigued by the handing off of local Claude sessions to the web version. I wonder if this feature exists for the other Coding CLI agents."
}
,
  
{
  "id": "46522785",
  "text": "I doubt he’d use Claude code as it is. I’m sure he’d upgrade to think harder and do more iterations and go deeper. Codex for example already does that but could go deeper a bit longer to figure out more."
}

]
</comments_to_classify>

Based on the comments above, assign each to up to 3 relevant topics.

Return ONLY a JSON array with this exact structure (no other text):
[
  
{
  "id": "comment_id_1",
  "topics": [
    1,
    3,
    5
  ]
}
,
  
{
  "id": "comment_id_2",
  "topics": [
    2
  ]
}
,
  
{
  "id": "comment_id_3",
  "topics": [
    0
  ]
}
,
  ...
]

Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment

Remember: Output ONLY the JSON array, no other text.

commentCount

34
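The prompt's Rules (0-3 topics per comment, indices 0-15, index 0 for "does not fit") can be checked mechanically before a model's response is accepted. A minimal validator sketch — a hypothetical helper, not part of the job pipeline shown above:

```python
import json

# Valid indices per the prompt: 0 = "does not fit", 1-15 from <topics>
VALID_TOPICS = set(range(0, 16))

def validate_response(raw: str) -> list[dict]:
    """Parse a model response and enforce the prompt's rules.

    Raises ValueError on any violation; returns the parsed list otherwise.
    """
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError("response must be a JSON array")
    for item in data:
        if not isinstance(item, dict) or set(item) != {"id", "topics"}:
            raise ValueError(f"each entry needs exactly 'id' and 'topics': {item!r}")
        topics = item["topics"]
        if not isinstance(topics, list) or len(topics) > 3:
            raise ValueError(f"{item['id']}: each comment gets 0 to 3 topics")
        if any(t not in VALID_TOPICS for t in topics):
            raise ValueError(f"{item['id']}: topic indices must be in 0-15")
    return data

# A well-formed entry parses without raising
validate_response('[{"id": "46524404", "topics": [1, 8]}]')
```

A check like this catches the common failure modes for a "return ONLY JSON" prompt: extra prose around the array, stray keys, or more than three topic assignments per comment.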
