Summarizer

LLM Input

llm/fa6df919-50f4-440a-804d-6a9d3e9721d8/batch-6-886c916f-e909-4867-a652-11c9f296c677-input.json

prompt

The following is content for you to classify. Do not respond to the comments—classify them.

<topics>
1. Returning Developers and Parents
   Related: People who moved into management or became parents finding AI enables them to code again in short time windows without needing hours to ramp up on forgotten details
2. Productivity Claims Skepticism
   Related: Debates over whether 10x productivity gains are real or exaggerated, with critics noting lack of controlled studies and potential for gambling-like dopamine hits from prompting
3. Learning vs Efficiency Tradeoff
   Related: Tension between using AI to get things done quickly versus the value of learning through struggle, friction, and hands-on experience with tools and concepts
4. Craft vs Results Orientation
   Related: Division between developers who enjoy the process of writing code as craft versus those who see code as means to an end and value outcomes over process
5. Code Review Burden
   Related: Concerns that AI shifts work from enjoyable coding to tedious reviewing of AI output, with questions about maintainability and technical debt accumulation
6. Vibe Coding Quality Concerns
   Related: Skepticism about code quality from AI assistance, fears of slop, hidden bugs, and unmaintainable codebases that require experienced developers to fix
7. Web Development Complexity
   Related: Discussion of whether modern web development is unnecessarily complex with frameworks, bundlers, and toolchains, or if complexity serves legitimate organizational needs
8. Personal Project Renaissance
   Related: Stories of developers completing long-postponed side projects, building tools for personal use, and feeling creative freedom with AI assistance
9. Skill Atrophy Fears
   Related: Worries that relying on AI will cause developers to lose skills, never develop expertise, and become unable to debug or understand their own systems
10. IKEA Furniture Analogy
   Related: Debate comparing AI-assisted coding to assembling IKEA furniture versus carpentry, questioning whether using AI constitutes real development
11. Historical Tech Parallels
   Related: Comparisons to printing press disrupting scribes, calculators replacing mental math, and compilers abstracting assembly, debating if AI is similar
12. LLM Usage Skill Requirements
   Related: Arguments that getting value from LLMs requires skill, experience to recognize good and bad output, and knowing what questions to ask
13. Simplicity vs Framework Culture
   Related: Advocacy for vanilla PHP, plain JavaScript, and avoiding unnecessary complexity, arguing tools exist by choice not necessity
14. Cost and Subscription Concerns
   Related: Practical questions about whether $20/month subscriptions are sufficient versus $200/month, and fears of future price increases or feature gating
15. Hallucinations and Reliability
   Related: Frustrations with LLMs producing non-existent functions, incorrect code, and requiring extensive verification and correction
16. Race to Bottom Economics
   Related: Fears that everyone having access to AI coding will flood markets with competitors, devalue software development, and reduce wages
17. Executive Dysfunction Aid
   Related: Theory that AI productivity gains come partly from helping developers overcome starting friction and maintain focus through context switching
18. Boilerplate Liberation
   Related: Appreciation for AI handling tedious setup, configuration, documentation, and scaffolding while humans focus on interesting problems
19. Fun Definition Debate
   Related: Fundamental disagreement about what makes programming enjoyable - the process of writing code versus seeing results and solving problems
20. Manager Coding Concerns
   Related: Criticism of managers using AI to write production code without proper skills, causing incidents and requiring real engineers to fix issues
0. Does not fit well in any category
</topics>

<comments_to_classify>
[
  
{
  "id": "46491281",
  "text": "That sounds reasonable to me. AI is best at generating super basic and common code, it will have plenty of training on game templates and simple games.\n\nObviously you cannot generalize that to all software development though."
}
,
  
{
  "id": "46491818",
  "text": "As you get deeper beyond the starter and bootstrap code it definitely takes a different approach to get value.\n\nThis is in part because context limits of large code bases and because the knowledge becomes more specialized and the LLM has no training on that kind of code.\n\nBut people are making it work, it just isn't as black and white."
}
,
  
{
  "id": "46496379",
  "text": "That’s the issue, though, isn’t it? Why isn’t it black and white? Clear massive productivity gains at Google or MS and their dev armies should be visible from orbit.\n\nJust today on HN I’ve seen claims of 25x and 10x and 2x productivity gains. But none of it starting with well calibrated estimations using quantifiable outcomes, consistent teams, whole lifecycle evaluation, and apples to apples work.\n\nIn my own extensive use of LLMs I’m reminded of mouse versus command line testing around file navigation. Objective facts and subjective reporting don’t always line up, people feel empowered and productive while ‘doing’ and don’t like ‘hunting’ while uncertain… but our sense of the activity and measurable output aren’t the same.\n\nI’m left wondering why a 2x Microsoft of OpenAI would ever sell their competitive advantage to others. There’s infinite money to be made exploiting such a tech, but instead we see highschool homework, script gen, and demo ware that is already just a few searches away and downloadable.\n\nLLMs are in essence copy and pasting existing work while hopping over uncomfortable copyright and attribution qualms so devs feel like ‘product managers’ and not charlatans. Is that fundamentally faster than a healthy stack overflow and non-enshittened Google? Over a product lifecycle? … ‘sometimes, kinda’ in the absence of clear obvious next-gen production feels like we’re expecting a horse with a wagon seat built in to win a Formula 1 race."
}
,
  
{
  "id": "46494034",
  "text": "> That sounds reasonable to me. AI is best at generating super basic and common code\n\nI'm currently using AI (Claude Code) to write a new Lojban parser in Haskell from scratch, which is hardly something \"super basic and common\". It works pretty well in practice, so I don't think that assertion is valid anymore. There are certainly differences between different tasks in terms of what works better with coding agents, but it's not as simple as \"super basic\"."
}
,
  
{
  "id": "46494192",
  "text": "I'm sure there is plenty of language parsers written in Haskell in the training data. Regardless, the question isn't if LLMs can generate code (they clearly can), it's if agentic workflows are superior to writing code by hand."
}
,
  
{
  "id": "46494248",
  "text": "There's no shortage of parsers in Haskell, but parsing a human language is very different from parsing a programming language. The grammar is much, much more complex, and this means that e.g. simple approaches that produce adequate error messages don't really work here because failures are non-actionable."
}
,
  
{
  "id": "46491396",
  "text": "One concern is those less experienced engineers might never become experienced if they’re using AI from the start. Not that everyone needs to be good at coding. But I wonder what new grads are like these days. I suspect few people can fight the temptation to make their lives a little easier and skip learning some lessons."
}
,
  
{
  "id": "46496681",
  "text": "I estimated that i was 1.2x when we only had tab completion models. 1.5x would be too modest. I've done plenty of ~6-8 hour tasks in ~1-2 hours using llms."
}
,
  
{
  "id": "46496894",
  "text": "Indeed. I just did a 4-6 month refactor + migration project in less than 3 weeks."
}
,
  
{
  "id": "46491192",
  "text": "I recently used AI to help build the majority of a small project (database-driven website with search and admin capabilities) and I'd confidently say I was able to build it 3 to 5 times faster with AI. For context, I'm an experienced developer and know how to tweak the AI code when it's wonky and the AI can't be coerced into fixing its mistakes."
}
,
  
{
  "id": "46491328",
  "text": "What's the link?"
}
,
  
{
  "id": "46491398",
  "text": "The site is password protected because it's intended for scholarly researchers, and ironically the client doesn't want LLMs scraping it."
}
,
  
{
  "id": "46495177",
  "text": "10x probably means “substantial gain”. There is no universal unit of gain.\n\nHowever if the difference is between doing a project vs not doing it, then the gain is much more than 10x."
}
,
  
{
  "id": "46495447",
  "text": "There is no x is because LLM performance is non deterministic. You get slop out at varying degrees of quality and so your job shifts from writing to debugging."
}
,
  
{
  "id": "46494713",
  "text": "I don't know what to tell you, it's just true. I have done what was previously days of BI/SQL dredging and visualizing in 20 minutes. You can be shocked and skeptical but it doesn't make it not true."
}
,
  
{
  "id": "46493737",
  "text": "From one personal project,\n\nLast month:\n\n128 files changed, 39663 insertions(+), 4439 deletions(-)\nRange: 8eb4f6a..HEAD\nNon-merge commits: 174\nDate range (non-merge): 2025-12-04 → 2026-01-04 (UTC)\nActive days (non-merge): 30\n\nLast 7 days:\n\n59 files changed, 19412 insertions(+), 857 deletions(-)\nRange: c8df64e..HEAD\nNon-merge commits: 67\nDate range (non-merge): 2025-12-28 → 2026-01-04 (UTC)\nActive days (non-merge): 8\n\nThis has a lot of non-trivial stuff in it. In fact, I'm just about done with all of the difficult features that had built up over the past couple years."
}
,
  
{
  "id": "46492928",
  "text": "Don't worry, it's an LLM that wrote it based on the patterns in the text, e.g. \"Starting a new project once felt insurmountable. Now, it feels realistic again.\""
}
,
  
{
  "id": "46493080",
  "text": "That is a normal, run of the mill sentence."
}
,
  
{
  "id": "46496760",
  "text": "Yes, for an LLM. The good thing about LLMs is that they can infer patterns. The bad thing about LLMs is that they infer patterns. The patterns change a bit over time, but the overuse of certain language patterns remains a constant.\n\nOne could argue that some humans write that way, but ultimately it does not matter if the text was generated by an LLM, reworded by a human in a semi-closed loop or organically produced by human. The patterns indicate that the text is just a regurgitation of buzzwords and it's even worse if an LLM-like text was produced organically."
}
,
  
{
  "id": "46493114",
  "text": "I can't prove it of course but I stand by it."
}
,
  
{
  "id": "46494051",
  "text": "Claiming that use of more complicated words and sentences is evidence of LLM use is just paranoia. Plenty of folk write like OP does, myself included."
}
,
  
{
  "id": "46491161",
  "text": "Numbers don't matter if it makes you \"feel\" more productive.\n\nI've started and finished way more small projects i was too lazy to start without AI. So infinitely more productive?\n\nThough I've definitely wasted some time not liking what AI generated and started a new chat."
}
,
  
{
  "id": "46491242",
  "text": "> Numbers don't matter\n\nYes that's already been well established."
}
,
  
{
  "id": "46496046",
  "text": "It does matter because you're still using up your life on this stuff."
}
,
  
{
  "id": "46490973",
  "text": "Just as a personal data point, are you a developer? Do you use AI?"
}
,
  
{
  "id": "46491011",
  "text": "Yes and yes."
}
,
  
{
  "id": "46491054",
  "text": "And you find yourself less productive?"
}
,
  
{
  "id": "46491099",
  "text": "No but I don't use it to generate code usually.\n\nI gave agents a solid go and I didn't feel more productive, just became more stupid."
}
,
  
{
  "id": "46491376",
  "text": "A year or so ago I was seriously thinking of making a series of videos showing how coding agents were just plain bad at producing code. This was based on my experience trying to get them to do very simple things (e.g. a five-pointed star, or text flowing around the edge of circle, in HTML/CSS). They still tend to fail at things like this, but I've come to realize that there are whole classes of adjacent problems they're good at, and I'm starting to leverage their strengths rather than get hung up on their weaknesses.\n\nPerhaps you're not playing to their strengths, or just haven't cracked the code for how to prompt them effectively? Prompt engineering is an art, and slight changes to prompts can make a big difference in the resulting code."
}
,
  
{
  "id": "46491595",
  "text": "Perhaps it is a skill issue. But I don't really see the point of trying when it seems like the gains are marginal. If agent workflows really do start offering 2x+ level improvements then perhaps I'll switch over, in the meantime I won't have to suffer mental degradation from constant LLM usage."
}
,
  
{
  "id": "46492302",
  "text": "and what are those strengths, if you don't mind me asking?"
}
,
  
{
  "id": "46493393",
  "text": "- Providing boilerplate/template code for common use cases\n- Explaining what code is doing and how it works\n- Refactoring/updating code when given specific requirements\n- Providing alternative ways of doing things that you might not have thought of yourself\n\nYMMV; every project is different so you might not have occasion to use all of these at the same time."
}
,
  
{
  "id": "46496221",
  "text": "I appreciate your reply. A lot of people just say how wonderful and revolutionary LLMs are, but when asked for more concrete stuff they give vague answers or even worse, berate you for being skeptical/accuse you of being a luddite.\n\nYour list gives me a starting point and I'm sure it can even be expanded. I do use LLMs the way you suggested and find them pretty useful most of the time - in chat mode. However, when using them in \"agent mode\" I find them far less useful."
}
,
  
{
  "id": "46494232",
  "text": "One of my favorite engineers calls AI a \"wish fulfillment slot machine.\""
}
,
  
{
  "id": "46491277",
  "text": "Username checks out"
}
,
  
{
  "id": "46492434",
  "text": "I think it depends what you are doing. I’ve had Claude write the front end of a rust/react app and it was 10x if not ∞x (because I just wouldn’t have attempted it). I’ve also had it write the documentation for a low level crate - work that needs to be done for the crate to be used effectively - but which I would have half-arsed because who likes writing documentation?\n\nRecently I’ve been using it to write some async rust and it just shits the bed. It regularly codes the select! drop issue or otherwise completely fails to handle waiting on multiple things. My prompts have gotten quite sweary lately. It is probably 1x or worse. However, I am going to try formulating a pattern with examples to stuff in its context and we’ll see. I view the situation as a problem to be overcome, not an insurmountable failure. There may be places where an AI just can’t get it right: I wouldn’t trust it to write the clever bit tricks I’m doing elsewhere. But even there, it writes (most of) the tests and the docs.\n\nOn the whole, I’m having far more fun with AI, and I am at least 2x as productive, on average.\n\nConsider that you might be stuck in a local (very bad) maximum. They certainly exist, as I’ve discovered. Try some side projects, something that has lots of existing examples in the training set. If you wanted to start a Formula 1 team, you’re going to need to know how to design a car, but there’s also a shit ton of logistics - like getting the car to the track - that an AI could just handle for you. Find boring but vital work the AI can do because, in my experience, that’s 90% of the work."
}
,
  
{
  "id": "46493644",
  "text": "Mmm, I do a lot of frontend work but I find writing the frontend code myself is faster. That seems to be mostly what everyone says it's good for. I find it useful for other stuff like writing mini scripts, figuring out arguments for command line tools, reviewing code, generating dumb boilerplate code, etc. Just not for actually writing code."
}
,
  
{
  "id": "46494229",
  "text": "I’m better at it in the spaces where I deliver value. For me that’s the backend, and I’m building complex backends with simple frontends. Sounds like your expertise is the front end, so you’re gonna be doing stuff that’s beyond me, and beyond what the AI was trained on. I found ways to make the AI solve backend pain points (documentation, tests, boiler plate like integrations). There’s probably spaces where the AI can make your work more productive, or, like my move into the front end, do work that you didn’t do before."
}
,
  
{
  "id": "46491202",
  "text": "> I'd be shocked if the developer wasn't actually less productive\n\nI agree 10x is a very large number and it's almost certainly smaller—maybe 1.5x would be reasonable. But really? You would be shocked if it was above 1.0x? This kind of comment always strikes me as so infantilizing and rude, to suggest that all these developers are actually slower with AI, but apparently completely oblivious to it and only you know better."
}
,
  
{
  "id": "46491318",
  "text": "I would never suggest that only I know better. Plenty of other people are observing the same thing, and there is also research backing it up.\n\nMaybe shocked is the wrong term. Surprised, perhaps."
}
,
  
{
  "id": "46493081",
  "text": "There are simply so many counterexamples out there of people who have developed projects in a small fraction of the time it would take manually. Whether or not AI is having a positive effect on productivity on average in the industry is a valid question, but it's a statistical one. It's ridiculous to argue that AI has a negative effect on productivity in every single individual case."
}
,
  
{
  "id": "46493653",
  "text": "It's all talk and no evidence."
}
,
  
{
  "id": "46491587",
  "text": "We’re seeing no external indicators of large productivity gains. Even assuming that productivity gains in large corporations are swallowed up by inefficiencies, you’d expect externally verifiable metrics to show a 2x or more increase in productivity among indie developers and small companies.\n\nSo far it’s just crickets."
}
,
  
{
  "id": "46497746",
  "text": "I feel like I can manage the entire stack again - with confidence.\n\nI have less confidence after a session, now I second guess everything and it slows me down because I know the foot-gun is in there somewhere.\n\nFor example, yesterday Gemini started adding garbage Unicode and then diagnosed file corruption which it failed to fix.\n\nAnd before you reply, yes it's my fault for not adding \"URGENT CRITICAL REQUIREMENT: don't add rubbish Unicode\" to my GEMINI.md."
}
,
  
{
  "id": "46492143",
  "text": "My problem is that code review has always been the least enjoyable part of the job. It’s pure drudgery, and is mentally taxing. Unless you’re vibe coding, you’re now doing a lot of code review. It’s almost all you’re doing outside of the high-level planning and guidance (which is enjoyable).\n\nI’ve settled on reviewing the security boundaries and areas that could affect data leaks / invalid access. And pretty much scanning everything else.\n\nFrom time to time, I find it doing dumb things- n+1 queries, mutation, global mutable variables, etc, but for the most part, it does well enough that I don’t need to be too thorough.\n\nHowever, I wouldn’t want to inherit these codebases without an AI agent to do the work. There are too many broken windows for human maintenance to be considered."
}
,
  
{
  "id": "46492997",
  "text": "Worse, you’re doing code review of poorly written code with random failure modes no human would create, and an increasingly big ball of mud that is unmaintainable over time. It’s just the worst kind of reviewing imaginable. The AI makes an indecipherable mess, and you have to work out what the hell is going on."
}
,
  
{
  "id": "46494259",
  "text": "There's been so much pressure to use AI at work.\n\nMy codebase is a zen garden I've been raking for 6 years. I have concerns about what's going to happen after a few months of \"we're using AI cause they told us to.\""
}
,
  
{
  "id": "46496143",
  "text": "That must be so satisfying. I’ve heard the phrase “code farming” before, but I like the zen garden analogy.\n\nIf the future is indeed AI, and I’m certainly hearing a lot of people using it extensively, then I think there has to be a mindset shift. Our job will change from craft to damage limitation. Our goal will be to manage a manic junior developer who produces a mixture of good code and slop without architectural level reasoning. Code will rot fast and correctness will hinge on testing as much as you can.\n\nIt seems like a horrible future. However, it does seem to me that given decades we were unable to build good development practices. Our tooling is terrible. Most of our languages are terrible. Our solution was to let inexperienced devs create languages with all the same flaws, repeating the same mistakes. Web dev is a great example of inefficient software dev that has held the world to ransom. Maybe AI slop is payback for software developers."
}
,
  
{
  "id": "46494210",
  "text": "> The AI makes an indecipherable mess\n\nHumans are perfectly capable of this themselves and in fact often do it..."
}
,
  
{
  "id": "46496085",
  "text": "That’s true, but the AI can make it bigger, faster, and more messy."
}

]
</comments_to_classify>

Based on the comments above, assign each to up to 3 relevant topics.

Return ONLY a JSON array with this exact structure (no other text):
[
  
{
  "id": "comment_id_1",
  "topics": [
    1,
    3,
    5
  ]
}
,
  
{
  "id": "comment_id_2",
  "topics": [
    2
  ]
}
,
  
{
  "id": "comment_id_3",
  "topics": [
    0
  ]
}
,
  ...
]

Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment

Remember: Output ONLY the JSON array, no other text.

commentCount

50
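The output contract above (a JSON array of objects with a string `id` and 0-3 one-based topic indices, with 0 meaning "no fit") can be checked mechanically before the batch result is accepted. A minimal sketch in Python — `validate_output`, `NUM_TOPICS`, and the rule that index 0 must stand alone are illustrative assumptions, not part of this job's actual pipeline:

```python
import json

# Hypothetical validator for the classifier output described above.
# Assumptions: 20 real topics plus index 0 ("no fit"), and index 0
# is never combined with a real topic. All names are illustrative.
NUM_TOPICS = 20

def validate_output(raw: str, expected_ids: set) -> list:
    """Return a list of human-readable problems; empty means valid."""
    problems = []
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(entries, list):
        return ["top-level value is not a JSON array"]
    seen = set()
    for entry in entries:
        if not isinstance(entry, dict):
            problems.append(f"entry is not an object: {entry!r}")
            continue
        cid = entry.get("id")
        topics = entry.get("topics", [])
        if cid not in expected_ids:
            problems.append(f"unknown comment id {cid!r}")
        if cid in seen:
            problems.append(f"duplicate id {cid!r}")
        seen.add(cid)
        if len(topics) > 3:
            problems.append(f"{cid}: {len(topics)} topics (max 3)")
        if any(not isinstance(t, int) or not 0 <= t <= NUM_TOPICS for t in topics):
            problems.append(f"{cid}: topic index out of range 0-{NUM_TOPICS}")
        if 0 in topics and len(topics) > 1:
            problems.append(f"{cid}: index 0 must stand alone")
    missing = expected_ids - seen
    if missing:
        problems.append(f"missing ids: {sorted(missing)}")
    return problems

# Example: a well-formed single-entry response passes cleanly.
good = json.dumps([{"id": "46491281", "topics": [6]}])
assert validate_output(good, {"46491281"}) == []
```

Checking that every input id appears exactly once is the most useful part in practice: models drop or duplicate entries far more often than they emit out-of-range indices.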
