You are a comment classifier. Given a list of topics and a batch of comments, assign each comment to up to 3 of the most relevant topics.
TOPICS (use these 1-based indices):
1. AI productivity claims skepticism
2. Joy of programming vs shipping products
3. Skill atrophy concerns with AI
4. Experienced vs inexperienced developer AI gains
5. Web development complexity is optional
6. Code review burden with AI
7. Vibe coding quality concerns
8. Learning while using LLMs
9. Time-constrained developers benefiting
10. AI for boilerplate and scaffolding
11. Frontend framework fatigue
12. Managing AI like junior developers
13. Return to simpler web stacks
14. AI as autocomplete evolution
15. Side project enablement
16. Technical debt from AI code
17. Cost and pricing of AI tools
18. Pattern recognition and code quality
19. Parenting and hobby coding time
20. AI hallucinations and reliability
COMMENTS TO CLASSIFY:
[
{
"id": "46496681",
"text": "I estimated that i was 1.2x when we only had tab completion models. 1.5x would be too modest. I've done plenty of ~6-8 hour tasks in ~1-2 hours using llms."
}
,
{
"id": "46496894",
"text": "Indeed. I just did a 4-6 month refactor + migration project in less than 3 weeks."
}
,
{
"id": "46491192",
"text": "I recently used AI to help build the majority of a small project (database-driven website with search and admin capabilities) and I'd confidently say I was able to build it 3 to 5 times faster with AI. For context, I'm an experienced developer and know how to tweak the AI code when it's wonky and the AI can't be coerced into fixing its mistakes."
}
,
{
"id": "46491328",
"text": "What's the link?"
}
,
{
"id": "46491398",
"text": "The site is password protected because it's intended for scholarly researchers, and ironically the client doesn't want LLMs scraping it."
}
,
{
"id": "46495177",
"text": "10x probably means “substantial gain”. There is no universal unit of gain.\n\nHowever if the difference is between doing a project vs not doing is, then the gain is much more than 10x."
}
,
{
"id": "46495447",
"text": "There is no x is because LLM performance is non deterministic. You get slop out at varying degrees of quality and so your job shifts from writing to debugging."
}
,
{
"id": "46494713",
"text": "I don't know what to tell you, it's just true. I have done what was previously days of BI/SQL dredging and visualizing in 20 minutes. You can be shocked and skeptical but it doesn't make it not true."
}
,
{
"id": "46493737",
"text": "From one personal project,\n\nLast month:\n\n128 files changed, 39663 insertions(+), 4439 deletions(-)\nRange: 8eb4f6a..HEAD\nNon-merge commits: 174\nDate range (non-merge): 2025-12-04 → 2026-01-04 (UTC)\nActive days (non-merge): 30\n\nLast 7 days:\n\n59 files changed, 19412 insertions(+), 857 deletions(-)\nRange: c8df64e..HEAD\nNon-merge commits: 67\nDate range (non-merge): 2025-12-28 → 2026-01-04 (UTC)\nActive days (non-merge): 8\n\nThis has a lot of non-trivial stuff in it. In fact, I'm just about done with all of the difficult features that had built up over the past couple years."
}
,
{
"id": "46492928",
"text": "Don't worry, it's an LLM that wrote it based on the patterns in the text, e.g. \"Starting a new project once felt insurmountable. Now, it feels realistic again.\""
}
,
{
"id": "46493080",
"text": "That is a normal, run of the mill sentence."
}
,
{
"id": "46496760",
"text": "Yes, for an LLM. The good thing about LLMs is that they can infer patterns. The bad thing about LLMs is that they infer patterns. The patterns change a bit over time, but the overuse of certain language patterns remains a constant.\n\nOne could argue that some humans write that way, but ultimately it does not matter if the text was generated by an LLM, reworded by a human in a semi-closed loop or organically produced by human. The patterns indicate that the text is just a regurgitation of buzzwords and it's even worse if an LLM-like text was produced organically."
}
,
{
"id": "46493114",
"text": "I can't prove it of course but I stand by it."
}
,
{
"id": "46494051",
"text": "Claiming that use of more complicated words and sentences is evidence of LLM use is just paranoia. Plenty of folk write like OP does, myself included."
}
,
{
"id": "46491161",
"text": "Numbers don't matter if it makes you \"feel\" more productive.\n\nI've started and finished way more small projects i was too lazy to start without AI. So infinitely more productive?\n\nThough I've definitely wasted some time not liking what AI generated and started a new chat."
}
,
{
"id": "46491242",
"text": "> Numbers don't matter\n\nYes that's already been well established."
}
,
{
"id": "46496046",
"text": "It does matter because you're still using up your life on this stuff."
}
,
{
"id": "46494232",
"text": "One of my favorite engineers calls AI a \"wish fulfillment slot machine.\""
}
,
{
"id": "46490973",
"text": "Just as a personal data point, are you a developer? Do you use AI?"
}
,
{
"id": "46491011",
"text": "Yes and yes."
}
,
{
"id": "46491054",
"text": "And you find yourself less productive?"
}
,
{
"id": "46491099",
"text": "No but I don't use it to generate code usually.\n\nI gave agents a solid go and I didn't feel more productive, just became more stupid."
}
,
{
"id": "46491376",
"text": "A year or so ago I was seriously thinking of making a series of videos showing how coding agents were just plain bad at producing code. This was based on my experience trying to get them to do very simple things (e.g. a five-pointed star, or text flowing around the edge of circle, in HTML/CSS). They still tend to fail at things like this, but I've come to realize that there are whole classes of adjacent problems they're good at, and I'm starting to leverage their strengths rather than get hung up on their weaknesses.\n\nPerhaps you're not playing to their strengths, or just haven't cracked the code for how to prompt them effectively? Prompt engineering is an art, and slight changes to prompts can make a big difference in the resulting code."
}
,
{
"id": "46491595",
"text": "Perhaps it is a skill issue. But I don't really see the point of trying when it seems like the gains are marginal. If agent workflows really do start offering 2x+ level improvements then perhaps I'll switch over, in the meantime I won't have to suffer mental degradation from constant LLM usage."
}
,
{
"id": "46492302",
"text": "and what are those strengths, if you don't mind me asking?"
}
,
{
"id": "46493393",
"text": "- Providing boilerplate/template code for common use cases\n- Explaining what code is doing and how it works\n- Refactoring/updating code when given specific requirements\n- Providing alternative ways of doing things that you might not have thought of yourself\n\nYMMV; every project is different so you might not have occasion to use all of these at the same time."
}
,
{
"id": "46496221",
"text": "I appreciate your reply. A lot of people just say how wonderful and revolutionary LLMs are, but when asked for more concrete stuff they give vague answers or even worse, berate you for being skeptical/accuse you of being a luddite.\n\nYour list gives me a starting point and I'm sure it can even be expanded. I do use LLMs the way you suggested and find them pretty useful most of the time - in chat mode. However, when using them in \"agent mode\" I find them far less useful."
}
,
{
"id": "46491277",
"text": "Username checks out"
}
,
{
"id": "46492434",
"text": "I think it depends what you are doing. I’ve had Claude right the front end of a rust/react app and it was 10x if not x (because I just wouldn’t have attempted it). I’ve also had it write the documentation for a low level crate - work that needs to be done for the crate to be used effectively - but which I would have half-arsed because who like writing documentation?\n\nRecently I’ve been using it to write some async rust and it just shits the bed. It regularly codes the select! drop issue or otherwise completely fails to handle waiting on multiple things. My prompts have gotten quite sweary lately. It is probably 1x or worse. However, I am going to try formulating a pattern with examples to stuff in its context and we’ll see. I view the situation as a problem to be overcome, not an insurmountable failure. There may be places where an AI just can’t get it right: I wouldn’t trust it to write the clever bit tricks I’m doing elsewhere. But even there, it writes (most of) the tests and the docs.\n\nOn the whole, I’m having far more fun with AI, and I am at least 2x as productive, on average.\n\nConsider that you might be stuck in a local (very bad) maximum. They certainly exist, as I’ve discovered. Try some side projects, something that has lots of existing examples in the training set. If you wanted to start a Formula 1 team, you’re going to need to know how to design a car, but there’s also a shit ton of logistics - like getting the car to the track - that an AI could just handle for you. Find boring but vital work the AI can do because, in my experience, that’s 90% of the work."
}
,
{
"id": "46493644",
"text": "Mmm, I do a lot of frontend work but I find writing the frontend code myself is faster. That seems to be mostly what everyone says it's good for. I find it useful for other stuff like writing mini scripts, figuring out arguments for command line tools, reviewing code, generating dumb boilerplate code, etc. Just not for actually writing code."
}
,
{
"id": "46494229",
"text": "I’m better at it in the spaces where I deliver value. For me that’s the backend, and I’m building complex backends with simple frontends. Sounds like your expertise is the front end, so you’re gonna be doing stuff that’s beyond me, and beyond what the AI was trained on. I found ways to make the AI solve backend pain points (documentation, tests, boiler plate like integrations). There’s probably spaces where the AI can make your work more productive, or, like my move into the front end, do work that you didn’t do before."
}
,
{
"id": "46491202",
"text": "> I'd be shocked if the developer wasn't actually less productive\n\nI agree 10x is a very large number and it's almost certainly smaller—maybe 1.5x would be reasonable. But really? You would be shocked if it was above 1.0x? This kind of comment always strikes me as so infantilizing and rude, to suggest that all these developers are actually slower with AI, but apparently completely oblivious to it and only you know better."
}
,
{
"id": "46491318",
"text": "I would never suggest that only I know better. Plenty of other people are observing the same thing, and there is also research backing it up.\n\nMaybe shocked is the wrong term. Surprised, perhaps."
}
,
{
"id": "46493081",
"text": "There are simply so many counterexamples out there of people who have developed projects in a small fraction of the time it would take manually. Whether or not AI is having a positive effect on productivity on average in the industry is a valid question, but it's a statistical one. It's ridiculous to argue that AI has a negative effect on productivity in every single individual case."
}
,
{
"id": "46493653",
"text": "It's all talk and no evidence."
}
,
{
"id": "46491587",
"text": "We’re seeing no external indicators of large productivity gains. Even assuming that productivity gains in large corporations are swallowed up by inefficiencies, you’d expect externally verifiable metrics to show a 2x or more increase in productivity among indie developers and small companies.\n\nSo far it’s just crickets."
}
,
{
"id": "46497746",
"text": "I feel like I can manage the entire stack again - with confidence.\n\nI have less confidence after a session, now I second guess everything and it slows me down because I know the foot-gun is in there somewhere.\n\nFor example, yesterday Gemini started added garbage Unicode and then diagnosed file corruption which it failed to fix.\n\nAnd before you reply, yes it's my fault for not adding \"URGENT CRITICAL REQUIREMENT: don't add rubbish Unicode\" to my GEMINI.md."
}
,
{
"id": "46492143",
"text": "My problem is that code review has always been the least enjoyable part of the job. It’s pure drudgery, and is mentally taxing. Unless you’re vibe coding, you’re now doing a lot of code review. It’s almost all you’re doing outside of the high-level planning and guidance (which is enjoyable).\n\nI’ve settled on reviewing the security boundaries and areas that could affect data leaks / invalid access. And pretty much scanning everything else.\n\nFrom time to time, I find it doing dumb things- n+1 queries, mutation, global mutable variables, etc, but for the most part, it does well enough that I don’t need to be too thorough.\n\nHowever, I wouldn’t want to inherit these codebases without an AI agent to do the work. There are too many broken windows for human maintenance to be considered."
}
,
{
"id": "46492997",
"text": "Worse, you’re doing code review of poorly written code with random failure modes no human would create, and an increasingly big ball of mud that is unmaintainable over time. It’s just the worst kind of reviewing imaginable. The AI makes an indecipherable mess, and you have to work out what the hell is going on."
}
,
{
"id": "46494259",
"text": "There's been so much pressure to use AI at work.\n\nMy codebase is a zen garden I've been raking for 6 years. I have concerns about what's going to happen after a few months of \"we're using AI cause they told us to.\""
}
,
{
"id": "46496143",
"text": "That must be so satisfying. I’ve heard the phrase “code farming” before, but I like the zen garden analogy.\n\nIf the future is indeed AI, and I’m certainly hearing a lot of people using it extensively, then I think there has to be a mindset shift. Our job will change from craft to damage limitation. Our goal will be to manage a manic junior developer who produces a mixture of good code and slop without architectural level reasoning. Code will rot fast and correctness will hinge on testing as much as you can.\n\nIt seems like a horrible future. However, it does seem to me that given decades we were unable to build good development practices. Our tooling is terrible. Most of our languages are terrible. Our solution was to let inexperienced devs create languages with all the same flaws, repeating the same mistakes. Web dev is a great example of inefficient software dev that has held the world to ransom. Maybe AI slop is payback for software developers."
}
,
{
"id": "46494210",
"text": "> The AI makes an indecipherable mess\n\nHumans are perfectly capable of this themselves and in fact often do it..."
}
,
{
"id": "46496085",
"text": "That’s true, but the AI can make it bigger, faster, and more messy."
}
,
{
"id": "46496380",
"text": "> My problem is that code review has always been the least enjoyable part of the job.\n\nThe article is about personal projects. The need to review the code is usually 10x less :-)"
}
,
{
"id": "46496269",
"text": "For most of my AI uses, I already have an implementation in mind. The prompt is small enough that most of the time, the agent would get it 90% there. In a way, it's basically an advanced autocomplete.\n\nI think this is quite nice cause it doesn't feel like code review. It's more of a: did it do it? Yes? Great. Somewhat? Good enough, i can work from there. And when it doesn't work, I just scrap that and re-prompt or implement it manually.\n\nBut I do agree with what you say. When someone uses AI without making the code their own, it's a nightmare. I've had to review some PRs where I feel like I'm prompting AI rather than an engineer. I did wonder if they simply put my reviews directly to some agent..."
}
,
{
"id": "46493338",
"text": "Agreed. I've settled on writing the code myself and having AI do the first pass review."
}
,
{
"id": "46493910",
"text": "I enjoy when:\nThings are simple.\nThings are a complicated, but I can learn something useful.\n\nI do not enjoy when:\nThings are arbitrarily complicated.\nThings are a complicated, but I'm just using AI to blindly get something done instead of learning.\nThings are arbitrarily complicated and not incentivized to improve because now \"everyone can just use AI\"\n\nIt feels like instead of all stepping back and saying \"we need to simplify things\" we've doubled down on abstraction _again_"
}
,
{
"id": "46500199",
"text": "Don't you enjoy the fact that some staff is actually _done_?"
}
,
{
"id": "46491131",
"text": "I've come to realise that not only do I hate reading stuff written by AI. I also hate reading stuff praising AI. They all say the same thing. It's just boring."
}
,
{
"id": "46497025",
"text": "Same here. I wrote this comment before I saw yours: https://news.ycombinator.com/item?id=46496990\n\nIt really brings no value. I'm not learning anything new here. And the discussion around it is always the same thing."
}
]
Return ONLY a JSON array with this exact structure (no other text):
[
{
"id": "comment_id_1",
"topics": [
1,
3,
5
]
}
,
{
"id": "comment_id_2",
"topics": [
2
]
}
,
...
]
Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices
- Only assign topics that are genuinely relevant to the comment
- If no topics match, use an empty array:
{
"id": "...",
"topics": []
}