Summarizer

LLM Input

llm/9db4e77f-8dd5-46da-972e-40d33f3399ef/batch-3-912b6e87-a49a-48a6-845f-057b3e96e642-input.json

prompt

The following is content for you to classify. Do not respond to the comments—classify them.

<topics>
1. Feasibility of Parallel Agent Workflows
   Related: Skepticism regarding the human capacity to supervise multiple AI agents simultaneously, utilizing analogies like washing dishes vs. laundry, and debating the cognitive load required for context switching between 10 active coding streams.
2. Code Quantity versus Quality
   Related: Discussions on whether generating 50-100 Pull Requests a week represents true productivity or merely 'token-maxxing', with concerns about code churn, technical debt, and the inability of humans to properly review such high volumes of generated code.
3. The One-Person Unicorn Startup
   Related: Debates on whether AI enables solo founders to build billion-dollar companies, arguing that while coding is easier, business bottlenecks like sales, marketing, and product-market fit remain unsolved by LLMs, despite rumors of stealth successes.
4. Claude Code Product Feedback
   Related: User feedback on the Claude Code CLI tool, mentioning specific bugs like terminal flickering and context loss, comparisons to tools like Codex and Cursor, and complaints about reliability and lack of basic features.
5. Cost and Access Disparities
   Related: Analysis of the financial feasibility of running Opus 4.5 agents in parallel, noting that while Anthropic employees may have unlimited access, the cost for average users would be prohibitive due to token limits and API pricing.
6. Marketing Hype and Astroturfing
   Related: Accusations that the original post and similar recent content represent a coordinated marketing campaign by Anthropic, with users expressing distrust of 'influencer' style posts and potential conflicts of interest from the tool's creator.
7. Future of Software Engineering
   Related: Existential concerns about the devaluation of coding skills, the shift from creative building to managerial reviewing of AI output, and fears that junior developers will lose the opportunity to learn through doing.
8. Technical Workflow Configurations
   Related: Specific details on managing AI agents, including the use of git worktrees for isolation, planning modes, 'teleporting' sessions between local CLI and web interfaces, and using markdown files to define agent behaviors.
9. AI Code Review Strategies
   Related: Approaches for handling AI-generated code, such as using separate AI instances to review PRs, the necessity of rigorous CI/CD guardrails, and the danger of blindly trusting 'green' tests without human oversight.
10. The Light Mode Terminal Debate
   Related: A humorous yet contentious side discussion sparked by the creator's use of a light-themed terminal, leading to arguments about eye strain, readability, astigmatism, and developer cultural norms regarding dark mode.
11. SaaS Commoditization and Moats
   Related: Predictions that AI will drive the marginal cost of software to zero, eroding traditional SaaS business models, and that future business value will rely on proprietary data, domain expertise, and distribution rather than code.
12. Agentic Limitations and Reliability
   Related: Criticisms of current AI agents acting like 'slot machines' requiring constant steering, their struggle with complex concurrency bugs, and the observation that they often produce boilerplate rather than solving deep architectural problems.
13. Corporate Adoption and Budgeting
   Related: Anecdotes about colleagues burning through massive amounts of API credits with varying degrees of success, and the disconnect between management's desire for AI productivity and the reality of review bottlenecks.
14. Context Management Techniques
   Related: Discussions on how to optimize context for AI agents, including the use of CLAUDE.md or AGENTS.md to establish rules, and the technical challenges of context limits and pruning during long sessions.
15. Vibe Coding vs. Engineering
   Related: The distinction between 'vibe coding' (iterating until it feels right without deep understanding) and traditional engineering, with experienced developers using AI as a force multiplier rather than a replacement for understanding.
0. Does not fit well in any category
</topics>

<comments_to_classify>
[
  
{
  "id": "46523605",
  "text": "Do you generally only have one problem? For me the use case is that I have numerous needs and Claude frees up time to work on some of the more complicated ones."
}
,
  
{
  "id": "46523440",
  "text": "I usually have 4-5, but it's because they are working on different parts of the codebase, or some I will use as read only to brainstorm"
}
,
  
{
  "id": "46526466",
  "text": "The problem isn't generating requirements, it's validating work. Spec driven development and voice chat with ticket/chat context is pretty fast, but the validation loop is still mostly manual. When I'm building, I can orchestrate multiple swarm no problem, however any time I have to drop in to validate stuff, my throughput drops and I can only drive 1-2 agents at a time."
}
,
  
{
  "id": "46525989",
  "text": "It depends on the specifics of the tasks; I routinely work on 3-5 projects at once (sometimes completely different stuff), and having a tool like cloud code fits great in my workflow.\n\nAlso, the feedback doesnt have to be immediate: sometimes I have sessions that run over a week, because of casual iterations; In my case its quite common to do this to test concepts, micro-benchmarking and library design."
}
,
  
{
  "id": "46533946",
  "text": "I see you haven’t tried BMAD-METHOD or spec-kit yet."
}
,
  
{
  "id": "46523389",
  "text": ">> I need 1 agent that successfully solves the most important problem\n\nIn most of these kinds of posts, that's still you. I don't believe i've come across a pro-faster-keyboard post yet that claims AGI. Despite the name, LLMs have no agency, it's still all on you.\n\nOnce you've defined the next most important problem, you have a smaller problem - translate those requirements into code which accurately meets them. That's the bit where these models can successfully take over. I think of them as a faster keyboard and i've not seen a reason to change my mind yet despite using them heavily."
}
,
  
{
  "id": "46523511",
  "text": "Why do you assume AGI needs to have agency?"
}
,
  
{
  "id": "46523726",
  "text": "Not OP, but I think that without some creative impetus like 'agency', how useful is an AGI going to be?"
}
,
  
{
  "id": "46525033",
  "text": "If cars do not have agency how useful are they going to be. If the Internet does not have agency how useful is going to be. if fire has no agency (debatable) how useful is going be."
}
,
  
{
  "id": "46527451",
  "text": "Call it what you want, but people are going to call the LLM with tools in a loop, and it will do something . There was the AI slop email to Rob Pike thing the other day, which was from someone giving an agent the instruction to \"do good\", or some vague high level thing like that."
}
,
  
{
  "id": "46524286",
  "text": "If you're trying to solve one very hard problem, parallelism is not the answer. Recursion is.\n\nRecursion can give you an exponential reduction in error as you descend into the call stack. It's not guaranteed in the context of an LLM but there are ways to strongly encourage some contraction in error at each step. As long as you are, on average, working with a slightly smaller version of the problem each time you recurse, you still get exponential scaling."
}
,
  
{
  "id": "46524716",
  "text": "> It's like someone is claiming they unlocked ultimate productivity by washing dishes, in parallel with doing laundry, and cleaning their house.\n\nBut we do this routinely with machines. Not saying I don't get your point re 100 PRs a week, just that it's a strange metaphor given the similarities."
}
,
  
{
  "id": "46524817",
  "text": "50-100 PRs a week but they still can't fix the 'flickering' bug"
}
,
  
{
  "id": "46524892",
  "text": "This is just the creator of Claude Code overselling Claude Code"
}
,
  
{
  "id": "46523460",
  "text": "> I need 1 agent that successfully solves the most important problem.\n\nIf you only have that one problem, that is a reasonable criticism, but you may have 10 different problems and want to focus on the important one while the smaller stuff is AIed away.\n\n> I don't understand how you can generate requirements quicky enough to have 10 parallel agents chewing away at meaningful work.\n\nI am generally happy with the assumptions it makes when given few requirements? In a lot of cases I just need a feature and the specifics are fairly open or very obvious given the context.\n\nFor example, I am adding MFA options to one project. As I already have MFA for another portal on it, I just told Claude to add MFA options for all users. Single sentence with no details. Result seems perfectly servicable, if in need of some CSS changes."
}
,
  
{
  "id": "46524096",
  "text": "Exactly. And if that problem is complex, your first step should be to plan how to sub-divide it anyway. So just ask Claude to map out interdependencies for tasks to look for opportunities to paralellise."
}
,
  
{
  "id": "46523351",
  "text": "The captive audience is not you, it's people salivating at the train of thought where they can 100x productivity of whatever and push those features that will get paying customers so they can get bought from private equity and ride out on the sunset. This whole thing is existential dread on a global scale, driven by sociopaths and everyone is just unable to not bend over."
}
,
  
{
  "id": "46523466",
  "text": "Painfully true. A lot of YouTube on LLM coding tools has become just that. Make quick bucks, look it generated a dashboard of some sort (why is it always dashboards?) and a high polished story of someone vibing a copy of a successful Saas and selling it off for a million.\n\nA shame really, for there are good resources for better making use of LLMs in coding."
}
,
  
{
  "id": "46527201",
  "text": "Prototyping."
}
,
  
{
  "id": "46523430",
  "text": "It's all smokes really. Claude Code is an unreliable piece of software and yet one of the better ones in LLM-Coding. ( https://github.com/anthropics/claude-code/issues ). That and I highly suspect it's mostly engineers who are working on it instead of LLMs. Google itself with all its resources and engineers can't come up with a half-decent CLI for coding.\n\nReminder: The guy works for Claude. Claude is over-hyping LLMs. That's like a Jeweler dealer assistant telling you how Gold chains helped his romantic life."
}
,
  
{
  "id": "46523646",
  "text": "Gemini CLI is decent."
}
,
  
{
  "id": "46524002",
  "text": "Is it?\n\nYesterday, gemini told me to run this:\n\necho 'export ANDROID_HOME=/opt/my-user/android-sdk' > ~/.bashrc\n\nWhich would have effectively overriden my whole bashrc config if I had blindly copy-pasted it.\n\nA few minutes later, asking it to create a .gitignore file for the current project - right after generating a private key, it failed to include the private key file to the .gitignore.\n\nI don't see yet how these tools can be labeled as 'major productivity boosters' if you loose basic security and privacy with them..."
}
,
  
{
  "id": "46524074",
  "text": "We were discussing the CLI, the output that's on the model."
}
,
  
{
  "id": "46527643",
  "text": "Let’s not forget the massive bias in the author: for all we know this post is a thinly veiled marketing pitch for “how to use the most tokens from your AI provider and ramp up your bill.”\n\nThis isn’t about being the most productive or having the best workflow, it’s about maximizing how much Claude is a part of your workflow."
}
,
  
{
  "id": "46526611",
  "text": "> This is interesting to hear, but I don't understand how this workflow actually works\n\nThe cynic in me is it's a marketing pitch to sell \"see this is way cheaper than 10 devs!\". The \"agent\" thing leans heavily into bean counter CTO/CIO marketing."
}
,
  
{
  "id": "46523747",
  "text": "Claude is absolutely plastering Facebook with this bullshit.\n\nEvery PR Claude makes needs to be reviewed. Every single one. So great! You have 10 instances of Claude doing things. Great! You're still going to need to do 10 reviews."
}
,
  
{
  "id": "46523936",
  "text": "Facebook, Reddit, and LinkedIn are all being heavily astroturfed by Anthropic people to oversell the usefulness of Claude Code. It's actually wild."
}
,
  
{
  "id": "46525393",
  "text": "I am surprised by how many people don't know that Claude Code is an excellent product. Nevertheless, PR / influencer astroturfing makes me not want to use a product, which is why I use Claude in the first place and not any OpenAi products."
}
,
  
{
  "id": "46534064",
  "text": "It is an excellent product but the narrative being pushed is that there's something unique about Claude Code, as if ChatGPT or Gemini don't have exactly the same thing."
}
,
  
{
  "id": "46524578",
  "text": "It's interesting to see this sentiment, given there are literal dozens of people I know in person who have no affiliations with Anthropic, living in Tokyo, and rave about Claude Code. It is good. Not perfect, but it does a lot of good stuff that we couldn't do before because of time restrictions."
}
,
  
{
  "id": "46524348",
  "text": "This site seems astroturfed too. But tbh it's pretty good marketing compared to just buying ads."
}
,
  
{
  "id": "46523761",
  "text": "That's why you have Codex review the code.\n\n(I'm only half joking. Having one LLM review the PRs of another is actually useful as a first line filter.)"
}
,
  
{
  "id": "46524125",
  "text": "Even having Opus review code written by Opus works very well as a first pass. I typically have it run a sub-agent to review its own code using a separate prompt. The sub-agents gets fresh context, so it won't get \"poisoned\" by the top level contexts justifications for the questionable choices it might have made. The prompts then direct the top level instance to repeat the verification step until the sub-agent gives the code a \"pass\", and fix any issues flagged.\n\nThe result is change sets that still need review - and fixes - but are vastly cleaner than if you review the first output.\n\nDoing runs with other models entirely is also good - they will often identify different issues - but you can get far with sub-agents and different persona ( and you can, if you like, have Claude Code use a sub agent to run codex to prompt it for a review, or vice versa - a number of the CLI tools seems to have \"standardized\" on \"-p <prompt>\" to ask a question on the command line)\n\nBasically, reviewing output from Claude (or Codex, or any model) that hasn't been through multiple automated review passes by a model first is a waste of time - it's like reviewing the first draft from a slightly sloppy and overly self-confident developer who hasn't bothered checking if their own work even compiles first."
}
,
  
{
  "id": "46524816",
  "text": "Thanks, that sounds all very reasonable!\n\n> Basically, reviewing output from Claude (or Codex, or any model) that hasn't been through multiple automated review passes by a model first is a waste of time - it's like reviewing the first draft from a slightly sloppy and overly self-confident developer who hasn't bothered checking if their own work even compiles first.\n\nWell, that's what the CI is for. :)\n\nIn any case, it seems like a good idea to also feed the output of compiler errors and warnings and the linter back to your coding agent."
}
,
  
{
  "id": "46525737",
  "text": "> Well, that's what the CI is for. :)\n\nSure, but I'd prefer to catch it before that, not least because it's a simpler feedback loop to ensure Claude fixes its own messes.\n\n> In any case, it seems like a good idea to also feed the output of compiler errors and warnings and the linter back to your coding agent.\n\nClaude seems to \"love\" to use linters and error messages if it's given the chance and/or the project structure hints at an ecosystem where certain tools are usually available. But just e.g. listing by name a set of commands it can use to check things in CLAUDE.md will often be enough to have it run it aggressively.\n\nIf not enough, you can use hooks to either force it, or sternly remind it after every file edit, or e.g. before it attempts to git commit."
}
,
  
{
  "id": "46524660",
  "text": "At the begining of the project, the runs are fast, but as the project gets bigger, the runs are slower:\n\n- there are bigger contexts\n\n- the test suite is much longer and slower\n\n- you need to split worktree, resources (like db, ports) and sometimes containers to work in isolation\n\nSo having 10 workers will run for a long time. Which give plenty of time to write good spec.\n\nYou need good spec, so the llm produce good tests, so it can write good code to match these tests.\n\nHaving a very strong spec + test suite + quality gates (linter, type checkers, etc) is the only way to get good results from an LLM as the project become more complex.\n\nUnlike a human, it's not very good at isolating complexity by itself, nor stopping and asking question in the face of ambiguity. So the guardrails are the only thing that keeps it on track.\n\nAnd running a lot of guardrail takes time.\n\nE.G: yesterday I had a big migration to do from HTMX to viewjs, I asked the LLM to produce screenshots of each state, and then do the migration in steps in a way that kept the screenshit 90% identical.\n\nThis way I knew it would not break the design.\n\nBut it's very long to run e2e tests + screenshot comparison every time you do a modification. Still faster than a human, but it gives plenty of time to talk to another llm.\n\nPlus you can assign them very different task:\n\n- One work on adding a new feature\n\n- One improves the design\n\n- One refactor part of the code (it's something you should do regularly, LLM produce tech debt quickly)\n\n- One add more test to your test suite\n\n- One is deploying on a new server\n\n- One is analyzing the logs of your dev/test/prod server and tell you what's up\n\n- One is cooking up a new logo for you and generating x versions at different resolutions.\n\nEtc.\n\nIt's basically a small team at your disposal."
}
,
  
{
  "id": "46523927",
  "text": "> I don't understand how you can generate requirements quicky enough to have 10 parallel agents chewing away at meaningful work.\n\nYou use agents to expand the requirements as well , either in plan mode (as OP does) or with a custom scaffold (rules in CLAUDE.md about how to handle requirements; personally I prefer giving Claude the latitude to start when Claude is ready rather than wait for my go-ahead)\n\n> I don't understand how you can have any meaningful supervising role over 10 things at once given the limits of human working memory.\n\n[this got long: TL;DR: This is what works for me: Stop worrying about individual steps; use sub-agents and slash-commands to encapsulate units of work to make Claude run longer; use permissions to allow as much as you dare (and/or run in a VM to allow Claude to run longer; give Claude tools to verify its work (linters, test suites, sub-agents double-checking the work against the spec) and make it use it; don't sit and wait and read invidiual parts of the conversation - it will only infuriate you to see Claude make stupid mistakes, but if well scaffolded it will fix them before it returns the code to you, so stop reading, breathe, and let it work; only verify when Claude has worked for a long time and checked its own work -- that way you review far less code and far more complete and coherent changes]\n\nYou don't. You wait until each agent is done , and you review the PR's. To make this kind of thing work well you need agents and slash-commands, like OP does - sub-agents in particular help prevent the top-level agents from \"context anxiety\": Claude Code appears to have knowledge of context use, and will be prone to stopping before context runs out; sub-agents use their own context and the top-level agent only uses context to manage the input to and output from them, so the more is farmed out to sub-agents, the longer Claude Code is willing to run. 
I when I got up this morning, Claude Code had run all night and produced about 110k words of output.\n\nThis also requires extensive permissions to use safe tools without asking (what OP does), or --dangerously-skip-permissions (I usually do this; you might want to put this in a container/VM as it will happily do things like \"killall -9 python\" or similar without \"thinking through\" consequences - I've had it kill the terminal it itself ran in before), or it'll stop far too quickly.\n\nYou'll also want to explicitly tell it to do things in parallel when possible. E.g. if you want to use it as a \"smarter linter\" (DO NOT rely on it as the only linter, use a regular one too, but using claude to apply more complex rules that requires some reasoning works great), you can ask it to \"run the linter agent in parallel on all typescript files\" for example, and it will tend to spawn multiple sub-agents running in parallel, and metaphorically twiddle its thumbs waiting for them to finish (it's fun seeing it get \"bored\" and decide to do other things in the meantime, or get impatient and check on progress obsessively).\n\nYou'll also want to make Claude use sub-agents to review, verify, test its work, with instructions to repeat until all the verification sub-agents give its changes a PASS (see 12/ and 13/ in the thread) - there is no reason for you to waste your time reviewing code that Claude itself can tell isn't ready.\n\n[E.g. concrete example: \"Vanilla\" Claude \"loves\" using instance_variable_get() in Ruby if facing a class that is missing an accessor for an instance variable. Whether you know Ruby or not, that should stand out like a sore thumb - it's a horrifically gross code smell, as it's basically bypassing encapsulation entirely. 
But you shouldn't worry about that - if you write Ruby with Claude, you'd want a rule in CLAUDE.md telling it how to address missing accessors, and sub-agent, and possibly a hook, making sure that Claude is told to fix it immediately if it ever uses it.]\n\nFarming it off to sub-agents both makes it willing to work longer, especially on \"boring\" tasks, and avoids the problem that it'll look at past work and decide it already \"knows\" this code is ready and start skipping steps.\n\nThe key thing is to stop obsessing over every step Claude takes, and treat that as a developer experimenting with something they're not clear on how to do yet. If you let it work, and its instructions are good, and it has ways of checking its work, it will figure out its first attempts are broken, fix them, and leave you with output that takes far less of your time to review.\n\nWhen Claude tells you its done with a change, if you stop egregious problems, fix your CLAUDE.md, fix your planning steps, fix your agents.\n\nNone of the above will absolve you of reviewing code, and you will need to kick things back and have it fix them, and sometimes that will be tedious, but Claude is good enough that the problems you have it fix should be complex, not simple code smells or logic errors, and 9 out 10 times they should signal that your scaffold is lacking im"
}
,
  
{
  "id": "46523783",
  "text": "skill issue\n\nsame way a lesser engineer might say they cannot do X or Y"
}
,
  
{
  "id": "46522824",
  "text": "He did a follow up with someone on Reddit and the answers were posted here: https://www.reddit.com/r/ClaudeAI/comments/1q2c0ne/comment/n...\n\nIt offers a lot more context and I found it more helpful than the original twitter thread"
}
,
  
{
  "id": "46523397",
  "text": "50-100 PRs a week to me is insane. I'm a little skeptical and wonder how large/impactful they are. I use AI a lot and have seen significant productivity gains but not at that level lol."
}
,
  
{
  "id": "46524572",
  "text": "Yeh, 100 PRs a week is a PR every 24 minutes at standard working hours (not including lunch break). That would be crazy to even review."
}
,
  
{
  "id": "46524063",
  "text": "I work for a FAANG and I'm the top reviewer in my team (in terms of number of PRs reviewed). I work on an internal greenfield project, so something really fast moving.\n\nFor ALL of 2025 I reviewed around 400 PRs. And that already took me an extreme amount of time.\n\nNobody is reviewing this many PRs.\n\nI've also raised around 350 PRs in the same year, which is also #1 for my team.\n\nAI or not, nobody is raising upwards of 3,500 CRs a year. In fact, my WHOLE TEAM of 15 people has barely raised this number of CRs for the year.\n\nI don't know why people keep believing those wild unproven claims from actors who have everything to gain from you believing them. Has common sense gone down the drain that much, even for educated professionals?"
}
,
  
{
  "id": "46525283",
  "text": "> I don't know why people keep believing those wild unproven claims from actors who have everything to gain from you believing them.\n\nIt's grifters all the way down. The majority of people pushing this narrative have vested interests, either because they own some AI shovelware company or are employed by one of the AI shovelware companies. Anthropic specifically is running guerilla marketing campaigns fucking everywhere at the moment, it's why every single one of these types of spammed posts reads the same way. They've also switched up a bit of late, they stopped going with the \"It makes me a 10x engineer!\" BS (though you still see plenty of that) and are instead going with this weird \"I can finally have fun developing again!\" narrative instead, I guess trying to cater to the ex-devs that are now managers or whatever.\n\nWhat happens is you get juniors and non-technical people seeing big numbers and being like \"Wow, that's so impressive!\" without stopping to think for 5 seconds what the kind of number they're trying to push even actually means. 100 PRs is absurd unless they're tiny oneliners, and even if they were tiny changes, there's 0 chance anyone is looking at the code being shat out here."
}
,
  
{
  "id": "46526918",
  "text": "Reviewing PRs should be for junior engineers, architectural changes, brand new code, or broken tests. You should not review every PR; if you do, you're only doing it out of habit, not because it's necessary.\n\nPRs come originally from the idea that there's an outsider trying to merge code into somebody's open source project, and the Benevolent Dictator wants to make sure it's done right. If you work on a corporate SWEng team, this is a completely different paradigm. You should trust your team members to write good-enough code, as long as conventions are followed, linters used, acceptance tests pass, etc."
}
,
  
{
  "id": "46527262",
  "text": "> You should trust your team members to write good-enough code...\n\nThat's the thing, I trust my teammate, I absolutely do not trust any LLM blindly. So if I were to receive 100 PRs a week and they were all AI-generated, I would have to check all 100 PRs unless I just didn't give a shit about the quality of the code being shit out I guess.\n\nAnd regardless, whether I trust my teammates or not, it's still good to have 2 eyes on code changes, even if they're simple ones. The majority of the PRs I review are indeed boring (boring is good, in this context) ones where I don't need to say anything, but everyone inevitably makes mistakes, and in my experience the biggest mistakes can be found in the simplest of PRs because people get complacent in those situations."
}
,
  
{
  "id": "46527395",
  "text": "It seems like LLMs really have made people insane."
}
,
  
{
  "id": "46524351",
  "text": "I am also skeptical about the need for such a large number of PRs. Do those open because of previous PRs not accomplishing their goals?\n\nIt's frustrating because being part of a small team, I absolutely fucking hate it when any LLM product writes or refractors thousands of lines of code. It's genuinely infuriating because now I am fully reliant on it to make any changes, even if it's really simple. Just seems like a new version of vendor lock-in to me."
}
,
  
{
  "id": "46523986",
  "text": "Because he is working on a product that is hot and has demand from the users for new features/bug fixes/whatnot and also gets visibility on getting such things delivered. Most of us don't work on products that have that on a daily basis."
}
,
  
{
  "id": "46524346",
  "text": "In other words, nobody cares that the generated code is shit, because there is no human who can review that much code. Not even on high level.\n\nAccording to the discussion here, they don’t even care whether the tests are real. They just care about that it’s green. If tests are useless in reality? Who cares, nobody has time to check them!\n\nAnd who will suffer because of this? Who cares, they pray that not them!"
}
,
  
{
  "id": "46527462",
  "text": ">nobody cares that the generated code is shit\n\nThat is the case, whether the code is AI generated or not. Go take a look at some of the source code for tools you use ever day, and you'll find a lot of shit code. I'd go so far as to say, after ~30 years of contributing to open source, that it's the rare jewel that has clean code."
}

]
</comments_to_classify>

Based on the comments above, assign each comment to up to 3 relevant topics.

Return ONLY a JSON array with this exact structure (no other text):
[
  
{
  "id": "comment_id_1",
  "topics": [
    1,
    3,
    5
  ]
}
,
  
{
  "id": "comment_id_2",
  "topics": [
    2
  ]
}
,
  
{
  "id": "comment_id_3",
  "topics": [
    0
  ]
}
,
  ...
]

Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment

Remember: Output ONLY the JSON array, no other text.

commentCount

50
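The output rules above form a mechanically checkable contract. A minimal validation sketch, under two assumptions the prompt does not state explicitly (that index 0 should appear alone, and that every comment id should receive exactly one entry); `validate_reply` and `NUM_TOPICS` are illustrative names, not part of this pipeline:

```python
import json

NUM_TOPICS = 15  # topics are indexed 1-15, plus 0 for "does not fit"


def validate_reply(reply_text, expected_ids):
    """Check a model reply against the prompt's output rules.

    reply_text: the raw reply, expected to be a bare JSON array.
    expected_ids: set of comment id strings that were sent for classification.
    Raises ValueError on the first violation found; returns the parsed entries.
    """
    entries = json.loads(reply_text)
    if not isinstance(entries, list):
        raise ValueError("reply must be a JSON array")
    seen = set()
    for entry in entries:
        cid, topics = entry["id"], entry["topics"]
        if cid not in expected_ids:
            raise ValueError(f"unknown comment id {cid!r}")
        if cid in seen:
            raise ValueError(f"duplicate comment id {cid!r}")
        seen.add(cid)
        # "Each comment can have 0 to 3 topics"
        if len(topics) > 3:
            raise ValueError(f"{cid}: expected at most 3 topics, got {len(topics)}")
        for t in topics:
            if not isinstance(t, int) or not 0 <= t <= NUM_TOPICS:
                raise ValueError(f"{cid}: topic index {t!r} out of range")
        # Assumption: the fallback index 0 is exclusive of real topics.
        if 0 in topics and len(topics) > 1:
            raise ValueError(f"{cid}: index 0 must appear alone")
    missing = expected_ids - seen
    if missing:
        raise ValueError(f"missing classifications for {sorted(missing)}")
    return entries
```

A check like this would catch the common failure modes for this prompt shape: prose wrapped around the array, out-of-range indices, and dropped or duplicated comment ids.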
