Summarizer

LLM Input

llm/8632d754-c7a3-4ec2-977a-2733719992fa/batch-1-c67baa70-05a3-45fa-a84c-5434ba4d257a-input.json

prompt

The following is content for you to classify. Do not respond to the comments—classify them.

<topics>
1. Determinism vs. Probabilistic Output
   Related: Comparisons between compilers (deterministic, reliable) and LLMs (probabilistic, 'fuzzy'). Users debate whether 100% correctness is required for tools, with some arguing that LLMs are fundamentally different from traditional automation because they lack a 'ground truth' logic, while others argue that error rates are acceptable if the utility is high enough.
2. The Code Review Bottleneck
   Related: Concerns that generating code faster merely shifts the bottleneck to reviewing code, which is often harder and more time-consuming than writing it. Users discuss the cognitive load of verifying 'vibe code' and the risks of blindly trusting output that looks correct but contains subtle bugs or security flaws.
3. Erosion of Programming Skills
   Related: Fears that relying on AI causes developers to lose fundamental skills ('use it or lose it'), such as forgetting syntax for frameworks like RSpec. Users discuss the value of the 'Stare'—deep mental simulation of problems—and whether outsourcing thinking to machines degrades human expertise and the ability to solve novel problems without assistance.
4. Financial Barriers and Costs
   Related: Discussions about the high cost of running continuous agents (potentially hundreds of dollars a month), with some noting that the author's wealth (as a billionaire/founder) biases his perspective on affordability. Users question whether the productivity gains justify the expense for average developers or if this creates a divide based on access to compute.
5. Agentic Workflows and Harnessing
   Related: Technical strategies for controlling AI behavior, such as 'harness engineering,' using AGENTS.md files to document rules and prevent regressions, and setting up feedback loops where agents run tests to verify their own work. This includes moving beyond simple chatbots to autonomous background processes that triage issues or perform research.
6. Safety and Sandboxing
   Related: Practical concerns about giving AI agents shell access or file system permissions. Users discuss the risks of agents accidentally 'nuking' systems, installing unwanted dependencies, or running dangerous commands, and recommend solutions like running agents in containers, VMs, or using specific sandboxing tools like Leash to limit blast radius.
7. Environmental Impact
   Related: Reactions to the author's suggestion to 'always have an agent running,' with users expressing alarm at the potential energy consumption and environmental cost of millions of developers running constant background inference tasks for marginal productivity gains, described by some as 'cooking the planet.'
8. Architects vs. Builders Analogy
   Related: Extensive debate using construction analogies to describe the shift in the developer's role. Comparisons are made between architects (who design and delegate) and builders, with arguments about whether AI users are 'vibe architects' who don't understand the materials, or professional engineers utilizing modern equivalents of CAD software and heavy machinery.
9. AI as Junior Developers
   Related: The characterization of AI agents as an infinite supply of 'slightly drunken new college grads' or interns who are fast and cheap but require constant supervision. Users discuss the ratio of senior engineer time needed to review AI output and the lack of a path for these 'AI juniors' to ever become seniors.
10. Trust and Hallucination Risks
   Related: Skepticism regarding the reliability of AI, highlighted by examples like 'wind-powered cars' or bad recipes. Users argue that because LLMs predict tokens rather than understanding physics or logic, they are 'confidently stupid' and require expert humans to filter out hallucinations, making them dangerous for those lacking deep domain knowledge.
11. Productivity vs. Inefficiency
   Related: Debates over whether AI actually saves time or just feels productive. Some cite studies suggesting productivity drops (e.g., 19%), while others argue that the efficiency comes from parallelizing tasks or handling boilerplate. Users critique the lack of hard metrics in the article and the reliance on 'feeling' more efficient.
12. Corporate Process vs. Individual Flow
   Related: The distinction between individual productivity gains (solopreneurs, solo projects) and organizational reality. Users note that while AI speeds up coding, it doesn't solve organizational bottlenecks like meetings, cross-team coordination, or gathering requirements, limiting its revolutionary impact on large enterprises compared to solo work.
13. Spec Writing as the New Coding
   Related: The idea that working with agents shifts the primary task from writing syntax to writing detailed specifications and prompts. Users note that AI forces developers to be more explicit about requirements, effectively turning English specs into the source code, though some argue this is just a verbose and nondeterministic programming language.
14. Hype Cycles and Model Churn
   Related: Frustration with the rapid pace of change in the AI landscape ('honeymoon phase'). Users complain about building workflows around a specific model only for it to change or degrade ('drift') in the next update, leading to a constant need to relearn prompt engineering and tooling idiosyncrasies.
15. Local Models vs. Cloud Privacy
   Related: Concerns about uploading proprietary source code to cloud providers like Anthropic or OpenAI. Users discuss the trade-offs between using superior cloud models (Claude Code) versus privacy-preserving local models (OpenCode) or self-hosted solutions, and the difficulty of trusting AI companies with sensitive intellectual property.
0. Does not fit well in any category
</topics>

<comments_to_classify>
[
  
{
  "id": "46908387",
  "text": "> Engineers are an entirely distinct set of roles that among other things validate the plan in its totality, not only the \"new\" 1/5th. Our job spans both of these.\n\nWhere this analogy breaks down is that the work you’re describing is done by Professional Engineers that have strict licensing and are (criminally) liable for the end result of the plans they approve.\n\nThat is an entirely different role from the army of civil, mechanical, and electrical engineers (some who are PEs and some who are not) who do most of the work for the principal engineer/designated engineer/engineer of record, that have to trust building codes and tools like FEA/FEM that then get final approval from the most senior PE. I don’t think the analogy works, as software engineers rarely report to that kind of hierarchy. Architects of Record on construction projects are usually licensed with their own licensing organization too, with layers of licensed and unlicensed people working for them."
}
,
  
{
  "id": "46908602",
  "text": "That diversity of roles is what \"among other things\" was meant to convey. My job at least isn't terribly different, except that licensing doesn't exist and I don't get an actual stamp. My company (and possibly me depending on the facts of the situation) is simply liable if I do something egregious that results in someone being hurt."
}
,
  
{
  "id": "46908646",
  "text": "> Where this analogy breaks down is that the work you’re describing is done by Professional Engineers that have strict licensing and are (criminally) liable for the end result of the plans they approve.\n\nthere are plenty of software engineers that work in regulated industries, with individual licensing, criminal liability, and the ability to be struck off and banned from the industry by the regulator\n\n... such as myself"
}
,
  
{
  "id": "46909663",
  "text": "Sure.\n\nBut no one stops you from writing software again.\n\nIt's not that PE's can't design or review buildings in whatever city the egregious failure happened.\n\nIt's that PE's can't design or review buildings at all in any city after an egregious failure.\n\nIt's not that PE's can't design or review hospital building designs because one of their hospital designs went so egregiously sideways.\n\nIt's that PE's can't design or review any building for any use because their design went so egregiously sideways.\n\nI work in an FDA regulated software area. I need 510k approval and the whole nine. But if I can't write regulated medical or dental software anymore, I just pay my fine and/or serve my punishment and go sling React/JS/web crap or become a TF/PyTorch monkey. No one stops me. Consequences for me messing up are far less severe than the consequences for a PE messing up. I can still write software because, in the end, I was never an \"engineer\" in that hard sense of the word.\n\nSame is true of any software developer. Or any unlicensed area of \"engineering\" for that matter. We're only playing at being \"engineers\" with the proverbial \"monopoly money\". We lose? Well, no real biggie.\n\nPE's agree to hang a sword of damocles over their own heads for the lifetime of the bridge or building they design. That's a whole different ball game."
}
,
  
{
  "id": "46912335",
  "text": "> Consequences for me messing up are far less severe than the consequences for a PE messing up.\n\nif I approve a bad release that leads to an egregious failure, for me it's a prison sentence and unlimited fines\n\nin addition to being struck off and banned from the industry\n\n> That's a whole different ball game.\n\nif you say so"
}
,
  
{
  "id": "46913589",
  "text": "> if I approve a bad release that leads to an egregious failure, for me it's a prison sentence and unlimited fines\n\nAgain, I'm in 510k land. The same applies to myself. No one's gonna allow me to irradiate a patient with a 10x dose because my bass ackwards software messed up scientific notation. To remove the wrong kidney because I can't convert orthonormal basis vectors correctly.\n\nBut the fact remains that no one would stop either of us from writing software in the future in some other domain.\n\nThey do stop PE's from designing buildings in the future in any other domain. By law. So it's very much a different ball game. After an egregious error, we can still practice our craft, because we aren't \"engineers\" at the end of the day. (Again, \"engineers\" in that hard sense of the word.) PE's can't practice their craft any longer after an egregious error. Because they are \"engineers\" in that hard sense of the word."
}
,
  
{
  "id": "46910801",
  "text": "It's not about the tooling it's about the reasoning. An architect copy pasting existing blueprints is still in charge and has to decide what the copy paste and where. Same as programmer slapping a bunch of code together, plumbing libraries or writing fresh code. They are the ones who drive the logical reasoning and the building process.\n\nThe ai tooling reverses this where the thinking is outsourced to the machine and the user is borderline nothing more than a spectator, an observer and a rubber stamp on top.\n\nAnyone who is in this position seriously need to think their value added. How do they plan to justify their position and salary to the capital class. If the machine is doing the work for you, why would anyone pay you as much as they do when they can just replace you with someone cheaper, ideally with no-one for maximum profit.\n\nEveryone is now in a competition not only against each other but also against the machine. And any specialized. Expert knowledge moat that you've built over decades of hard work is about to evaporate.\n\nThis is the real pressing issue.\n\nAnd the only way you can justify your value added, your position, your salary is to be able to undermine the AI, find flaws in it's output and reasoning. After all if/when it becomes flawless you have no purpose to the capital class!"
}
,
  
{
  "id": "46910933",
  "text": "> The ai tooling reverses this where the thinking is outsourced to the machine and the user is borderline nothing more than a spectator, an observer and a rubber stamp on top.\n\nI find it a bit rare that this is the case though. Usually I have to carefully review what it's doing and guide it. Either by specific suggestions, or by specific tests, etc. I treat it as a \"code writer\" that doesn't necessarily understand the big picture. So I expect it to fuck up, and correcting it feels far less frustrating if you consider it a tool you are driving rather than letting it drive you . It's great when it gets things right but even then it's you that is confirming this."
}
,
  
{
  "id": "46910993",
  "text": "This is exactly what I said in the end. Right now you rely on it fucking things up. What happens to you when the AI no longer fucks things up? Sorry to say, but your position is no longer needed."
}
,
  
{
  "id": "46909251",
  "text": "> When was the last time you reviewed the machine code produced by a compiler? ...\n\nAny time I’m doing serious optimization or knee-deep in debugging something where the bug emerged at -O2 but not at -O0.\n\nSometimes just for fun to see what the compiler is doing in its optimization passes.\n\nYou severely limit what you can do and what you can learn if you never peek underneath."
}
,
  
{
  "id": "46910572",
  "text": "> We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work!\n\nMaybe not, but we don't allow non-architects to vomit out thousands of diagrams that they cannot review, and that is never reviewed, which are subsequently used in the construction of the house.\n\nYour analogy to s/ware is fatally and irredeemably flawed, because you are comparing the regulated and certification-heavy production of content, which is subsequently double-checked by certified professionals, with an unregulated and non-certified production of content which is never checked by any human."
}
,
  
{
  "id": "46910678",
  "text": "I don't see a flaw, I think you're just gatekeeping software creation.\n\nAnyone can pick up some CAD software and design a house if they so desire. Is the town going to let you build it without a certified engineer/architect signing off? Fuck no. But we don't lock down CAD software.\n\nAnd presumably, mission critical software is still going to be stamped off on by a certified engineer of some sort."
}
,
  
{
  "id": "46910754",
  "text": "> Anyone can pick up some CAD software and design a house if they so desire. Is the town going to let you build it without a certified engineer/architect signing off? Fuck no. But we don't lock down CAD software.\n\nNo, we lock down using that output from the CAD software in the real world.\n\n> And presumably, mission critical software is still going to be stamped off on by a certified engineer of some sort.\n\nThe \"mission critical\" qualifier is new to your analogy, but is irrelevant anyway - the analogy breaks because, while you can do what you like with CAD software on your own PC, that output never gets used outside of your PC without careful and multiple levels of review, while in the s/ware case, there is no review."
}
,
  
{
  "id": "46911459",
  "text": "I am not really sure what you are getting at here. Are you suggesting that people should need to acquire some sort of credential to be allowed to code?"
}
,
  
{
  "id": "46911491",
  "text": "> Are you suggesting that people should need to acquire some sort of credential to be allowed to code?\n\nNo, I am saying that you are comparing professional $FOO practitioners to professional $BAR practitioners, but it's not a valid comparison because one of those has review and safety built into the process, and the other does not.\n\nYou can't use the assertion \"We currently allow $FOO practitioners to use every single bit of automation\" as evidence that \"We should also allow $BAR practitioners to use every bit of automation\", because $FOO output gets review by certified humans, and $BAR output does not."
}
,
  
{
  "id": "46911947",
  "text": "Thanks brother. I flew half way around the world yesterday and am jetlagged as fuck from a 12 hour time change. I'm sorry, my brain apparently shut off, but I follow now. Was out to lunch."
}
,
  
{
  "id": "46912433",
  "text": "> Thanks brother. I flew half way around the world yesterday and am jetlagged as fuck from a 12 hour time change. I'm sorry, my brain apparently shut off, but I follow now. Was out to lunch.\n\nYou know, this was a very civilised discussion; below I've got someone throwing snide remarks my way for some claims I made. You just factually reconfirmed and re-verified until I clarified my PoV.\n\nYou're a very pleasant person to argue with."
}
,
  
{
  "id": "46911812",
  "text": "> We don't call architects 'vibe architects' even though (…)\n\n> We don't call builders 'vibe builders' for (…)\n\n> When was the last time (…)\n\nNone of those are the same thing. At all. They are still all deterministic approaches. The architect’s library of things doesn’t change every time they use it or present different things depending on how they hold it. It’s useful because it’s predictable. Same for all your other examples.\n\nIf we want to have an honest discussion about the pros and cons of LLM-generated code, proponents need to stop being dishonest in their comparisons. They also need to stop plugging their ears and not ignore the other issues around the technology. It is possible to have something which is useful but whose advantages do not outweigh the disadvantages."
}
,
  
{
  "id": "46911840",
  "text": "I think the word predictable is doing a bit of heavy lifting there.\n\nLets say you shovel some dirt, you’ve got a lot of control over where you get it from and where you put it..\n\nNow get in your big digger’s cabin and try to have the same precision. At the level of a shovel-user, you are unpredictable even if you’re skilled. Some of your work might be out a decent fraction of the width of a shovel. That’d never happen if you did it the precise way!\n\nBut you have a ton more leverage. And that’s the game-changer."
}
,
  
{
  "id": "46911861",
  "text": "That’s another dishonest comparison. Predictability is not the same as precision. You don’t need to be millimetric when shovelling dirt at a construction site. But you do need to do it when conducting brain surgery. Context matters."
}
,
  
{
  "id": "46912090",
  "text": "Sure. If you’re racing your runway to go from 0 to 100 users you’d reach for a different set of tools than if you’re contributing to postgres.\n\nIn other words I agree completely with you but these new tools open up new possibilities. We have historically not had super-shovels so we’ve had to shovel all the things no matter how giant or important they are."
}
,
  
{
  "id": "46912734",
  "text": "> these new tools open up new possibilities.\n\nI’m not disputing that. What I’m criticising is the argument from my original parent post of comparing it to things which are fundamentally different, but making it look equivalent as a justification against criticism."
}
,
  
{
  "id": "46910239",
  "text": "Compilers are deterministic."
}
,
  
{
  "id": "46908023",
  "text": "I skimmed over it, and didn’t find any discussion of:\n\n- Pull requests\n- Merge requests\n- Code review\n\nI feel like I’m taking crazy pills. Are SWE supposed to move away from code review, one of the core activities for the profession? Code review is as fundamental for SWE as double entry is for accounting.\n\nYes, we know that functional code can get generated at incredible speeds. Yes, we know that apps and what not can be bootstrapped from nothing by “agentic coding”.\n\nWe need to read this code, right? How can I deliver code to my company without security and reliability guarantees that, at their core, come from me knowing what I’m delivering line-by-line?"
}
,
  
{
  "id": "46908663",
  "text": "Give it a read, he mentions briefly how he uses for PR triages and resolving GH issues.\n\nHe doesn't go in details, but there is a bit:\n\n> Issue and PR triage/review. Agents are good at using gh (GitHub CLI), so I manually scripted a quick way to spin up a bunch in parallel to triage issues. I would NOT allow agents to respond, I just wanted reports the next day to try to guide me towards high value or low effort tasks.\n\n> More specifically, I would start each day by taking the results of my prior night's triage agents, filter them manually to find the issues that an agent will almost certainly solve well, and then keep them going in the background (one at a time, not in parallel).\n\nThis is a short excerpt, this article is worth reading. Very grounded and balanced."
}
,
  
{
  "id": "46909313",
  "text": "Okay I think this somewhat answers my question. Is this individual a solo developer? “Triaging GitHub issues” sounds a bit like open source solo developer.\n\nGuess I’m just desperate for an article about how organizations are actually speeding up development using agentic AI. Like very practical articles about how existing development processes have been adjusted to facilitate agentic AI.\n\nI remain unconvinced that agentic AI scales beyond solo development, where the individual is liable for the output of the agents. More precisely, I can use agentic AI to write my code, but at the end of the day when I submit it to my org it’s my responsibility to understand it, and guarantee (according to my personal expertise) its security and reliability.\n\nConversely, I would fire (read: reprimand) someone so fast if I found out they submitted code that created a vulnerability that they would have reasonably caught if they weren’t being reckless with code submission speed, LLM or not.\n\nAI will not revolutionize SWE until it revolutionizes our processes. It will definitely speed us up (I have definitely become faster), but faster != revolution."
}
,
  
{
  "id": "46909543",
  "text": "> Guess I’m just desperate for an article about how organizations are actually speeding up development using agentic AI. Like very practical articles about how existing development processes have been adjusted to facilitate agentic AI.\n\nThey probably aren't really. At least in orgs I worked at, writing the code wasn't usually the bottleneck. It was in retrospect, 'context' engineering, waiting for the decision to get made, making some change and finding it breaks some assumption that was being made elsewhere but wasn't in the ticket, waiting for other stakeholders to insert their piece of the context, waiting for $VENDOR to reply about why their service is/isn't doing X anymore, discovering that $VENDOR_A's stage environment (that your stage environment is testing against for the integration) does $Z when $VENDOR_B_C_D don't do that, etc.\n\nThe ecosystem as a whole has to shift for this to work."
}
,
  
{
  "id": "46910152",
  "text": "The author of the blog made his name and fortune founding Hashicorp, makers of Vagrant and Terraform among other things. Having done all that in his twenties he retired as the CTO and reappeared after a short hiatus with a new open source terminal, Ghostty."
}
,
  
{
  "id": "46912980",
  "text": "I had a bit of an adjustment of my beliefs since writing these comments. My current take:\n\n- AI is revolutionizing how individuals work\n- It is not clear yet how AI can revolutionize how organizations work (even SWE)"
}
,
  
{
  "id": "46911413",
  "text": "Can't believe you don't know who the author is my man."
}
,
  
{
  "id": "46911934",
  "text": "Generally don’t pay attention to names unless it’s someone like Torvalds, Stroustrop, or Guido. Maybe this guy needs another decade of notoriety or something."
}
,
  
{
  "id": "46910730",
  "text": "If you had that article, would you read it fully before firing off questions?"
}
,
  
{
  "id": "46908304",
  "text": "Either really comprehensive tests (that you read) or read it. Usually i find you can skim most of it, but like in core sections like billing or something you gotta really review it. The models still make mistakes."
}
,
  
{
  "id": "46910795",
  "text": "You can't skim over AI code.\n\nFor even mid-level tasks it will make bad assumptions, like sorting orders or timezone conversions.\n\nBasic stuff really.\n\nYou've probably got a load of ticking time bomb bugs if you've just been skimming it."
}
,
  
{
  "id": "46909342",
  "text": "You read it. You now have an infinite army of overconfident slightly drunken new college grads to throw at any problem.\n\nSome times you’re gonna want to slowly back away from them and write things yourself. Sometimes you can farm out work to them.\n\nCode review their work as you would any one else’s, in fact more so.\n\nMy rule of thumb has been it takes a senior engineer per every 4 new grads to mentor them and code review their work. Or put another way bringing on a new grad gets you +1 output at the cost of -0.25 a senior.\n\nAlso, there are some tasks you just can’t give new college grads.\n\nSame dynamic seems to be shaping up here. Except the AI juniors are cheap and work 24*7 and (currently) have no hope of growing into seniors."
}
,
  
{
  "id": "46909487",
  "text": "> Same dynamic seems to be shaping up here. Except the AI juniors are cheap and work 24*7 and (currently) have no hope of growing into seniors.\n\nEach individual trained model... sure. But otoh you can look at it as a very wide junior with \"infinite (only limited by your budget)\" willpower. Sure, three years ago they were GPT-3.5, basically useless. And now they're Opus 4.6. I wonder what the next few years will bring."
}
,
  
{
  "id": "46908344",
  "text": "we're talking about _this_ post? He specifically said he only runs one agent, so sure he probably reviews the code or as he stated finds means of auto-verifying what the agent does (giving the agent a way to self-verify as part of its loop)."
}
,
  
{
  "id": "46908148",
  "text": "So read the code."
}
,
  
{
  "id": "46908257",
  "text": "Cool, code review continues to be one of the biggest bottlenecks in our org, with or without agentic AI pumping out 1k LOC per hour."
}
,
  
{
  "id": "46909625",
  "text": "For me, AI is the best for code research and review\n\nSince some team members started using AI without care, I did create bunch of agents/skills/commands and custom scripts for claude code. For each PR, it collects changes by git log/diff, read PR data and spin bunch of specialized agents to check code style, architecture, security, performance, and bugs. Each agent armed with necessary requirement documents, including security compliance files. False positives are rare, but it still misses some problems. No PR with ai generated code passes it. If AI did not find any problems, I do manual review."
}
,
  
{
  "id": "46908278",
  "text": "Ok? You still have to read the code."
}
,
  
{
  "id": "46911754",
  "text": "That's just not what has been happening in large enterprise projects, internal or external, since long before AI.\n\nFamous example - but by no means do I want to single out that company and product: https://news.ycombinator.com/item?id=18442941\n\nFrom my own experience, I kept this post bookmarked because I too worked on that project in the late 1990s, you cannot review those changes anyway. It is handled as described, you keep tweaking stuff until the tests pass. There is fundamentally no way to understand the code. Maybe its different in some very core parts, but most of it is just far too messy. I tried merely disentangling a few types ones, because there were a lot of duplicate types for the most simple things, such as 32 bit integers, and it is like trying to pick one noodle out of a huge bowl of spaghetti, and everything is glued and knotted together, so you always end up lifting out the entire bowl's contents. No AI necessary, that is just how such projects like after many generations of temporary programmers (because all sane people will leave as soon as they can, e.g. once they switched from an H1B to a Green Card) under ticket-closing pressure.\n\nI don't know why since the beginning of these discussions some commenters seem to work off wrong assumptions that thus far our actual methods lead to great code. Very often they don't, they lead to a huge mess over time that just gets bigger.\n\nAnd that is not because people are stupid, its because top management has rationally determined that the best balance for overall profits does not require perfect code. If the project gets too messy to do much the customers will already have been hooked and can't change easily, and when they do, some new product will have already replaced the two decades old mature one. Those customers still on the old one will pay premium for future bug fixes, and the rest will jumpt to the new trend. I don't think AI can make what's described above any, or much worse."
}
,
  
{
  "id": "46909495",
  "text": "You're missing the point. The point is that reading the code is more time consuming than writing it, and has always been thus. Having a machine that can generate code 100x faster, but which you have to read carefully to make sure it hasn't gone off the rails, is not an asset. It is a liability."
}
,
  
{
  "id": "46909502",
  "text": "Tell that to Mitchell Hashimoto."
}
,
  
{
  "id": "46908362",
  "text": "I didn't get into creating software so I could read plagiarism laundering machines output. Sorry, miss me with these takes. I love using my keyboard, and my brain."
}
,
  
{
  "id": "46908931",
  "text": "So you have a hobby.\n\nI have a profession. Therefore I evaluate new tools. Agents coding I've introduced into my auxiliary tool forgings (one-off bash scripts) and personal projects, and I'm just now comfortable to introduce into my professional work. But I still evaluate every line."
}
,
  
{
  "id": "46913033",
  "text": "\"auxiliary tool forgings\" You aren't a serious person."
}
,
  
{
  "id": "46913109",
  "text": "I may not be a serious person, but I am a serious professional."
}
,
  
{
  "id": "46908891",
  "text": "I love for companies to pay me money that I can in turn exchange for food, clothes and shelter."
}
,
  
{
  "id": "46908612",
  "text": "So then type the code as well and read it after. Why are you mad"
}

]
</comments_to_classify>

Based on the comments above, assign each to up to 3 relevant topics.

Return ONLY a JSON array with this exact structure (no other text):
[
  
{
  "id": "comment_id_1",
  "topics": [
    1,
    3,
    5
  ]
}
,
  
{
  "id": "comment_id_2",
  "topics": [
    2
  ]
}
,
  
{
  "id": "comment_id_3",
  "topics": [
    0
  ]
}
,
  ...
]

Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment

Remember: Output ONLY the JSON array, no other text.

commentCount

50
