The following is content for you to classify. Do not respond to the comments—classify them.
<topics>
1. Returning Developers and Parents
Related: People who moved into management or became parents finding AI enables them to code again in short time windows without needing hours to ramp up on forgotten details
2. Productivity Claims Skepticism
Related: Debates over whether 10x productivity gains are real or exaggerated, with critics noting lack of controlled studies and potential for gambling-like dopamine hits from prompting
3. Learning vs Efficiency Tradeoff
Related: Tension between using AI to get things done quickly versus the value of learning through struggle, friction, and hands-on experience with tools and concepts
4. Craft vs Results Orientation
Related: Division between developers who enjoy the process of writing code as craft versus those who see code as means to an end and value outcomes over process
5. Code Review Burden
Related: Concerns that AI shifts work from enjoyable coding to tedious reviewing of AI output, with questions about maintainability and technical debt accumulation
6. Vibe Coding Quality Concerns
Related: Skepticism about code quality from AI assistance, fears of slop, hidden bugs, and unmaintainable codebases that require experienced developers to fix
7. Web Development Complexity
Related: Discussion of whether modern web development is unnecessarily complex with frameworks, bundlers, and toolchains, or if complexity serves legitimate organizational needs
8. Personal Project Renaissance
Related: Stories of developers completing long-postponed side projects, building tools for personal use, and feeling creative freedom with AI assistance
9. Skill Atrophy Fears
Related: Worries that relying on AI will cause developers to lose skills, never develop expertise, and become unable to debug or understand their own systems
10. IKEA Furniture Analogy
Related: Debate comparing AI-assisted coding to assembling IKEA furniture versus carpentry, questioning whether using AI constitutes real development
11. Historical Tech Parallels
Related: Comparisons to printing press disrupting scribes, calculators replacing mental math, and compilers abstracting assembly, debating if AI is similar
12. LLM Usage Skill Requirements
Related: Arguments that getting value from LLMs requires skill, experience to recognize good and bad output, and knowing what questions to ask
13. Simplicity vs Framework Culture
Related: Advocacy for vanilla PHP, plain JavaScript, and avoiding unnecessary complexity, arguing tools exist by choice not necessity
14. Cost and Subscription Concerns
Related: Practical questions about whether $20/month subscriptions are sufficient versus $200/month, and fears of future price increases or feature gating
15. Hallucinations and Reliability
Related: Frustrations with LLMs producing non-existent functions, incorrect code, and requiring extensive verification and correction
16. Race to Bottom Economics
Related: Fears that everyone having access to AI coding will flood markets with competitors, devalue software development, and reduce wages
17. Executive Dysfunction Aid
Related: Theory that AI productivity gains come partly from helping developers overcome starting friction and maintain focus through context switching
18. Boilerplate Liberation
Related: Appreciation for AI handling tedious setup, configuration, documentation, and scaffolding while humans focus on interesting problems
19. Fun Definition Debate
Related: Fundamental disagreement about what makes programming enjoyable - the process of writing code versus seeing results and solving problems
20. Manager Coding Concerns
Related: Criticism of managers using AI to write production code without proper skills, causing incidents and requiring real engineers to fix issues
0. Does not fit well in any category
</topics>
<comments_to_classify>
[
{
"id": "46488894",
"text": "Something I like about our weird new LLM-assisted world is the number of people I know who are coding again, having mostly stopped as they moved into management roles or lost their personal side project time to becoming parents.\n\nAI assistance means you can get something useful done in half an hour, or even while you are doing other stuff. You don't need to carve out 2-4 hours to ramp up any more.\n\nIf you have significant previous coding experience - even if it's a few years stale - you can drive these things extremely effectively. Especially if you have management experience, quite a lot of which transfers to \"managing\" coding agents (communicate clearly, set achievable goals, provide all relevant context.)"
}
,
{
"id": "46491236",
"text": "I don't know but to me this all sounds like the antithesis of what makes programming fun. You don't have productivity goals for hobby coding where you'd have to make the most of your half an hour -- that sounds too much like paid work to be fun. If you have a half an hour, you tinker for a half an hour and enjoy it. Then you continue when you have another half an hour again. (Or push into night because you can't make yourself stop.)"
}
,
{
"id": "46491935",
"text": "What you consider fun isn't universal. Some folks don't want to just tinker for half an hour, some folks enjoy getting a particular result that meets specific goals. Some folks don't find the mechanics of putting lines of code together as fun as what the code does when it runs. That might sound like paid work to you, but it can be gratifying for not-you."
}
,
{
"id": "46493495",
"text": "For me it's all the build stuff and scaffolding I have to get in place before I can even start tinkering on a project. I never formally learned all the systems and tools and AI makes all of that 10x easier. When I hit something I cannot figure out instead of googling for 1/2 hour it is 10 minutes in AI."
}
,
{
"id": "46494978",
"text": "The difference is that after you’ve googled it for ½ hour, you’ve learned something. If you ask an LLM to do it for you, you’re none the wiser."
}
,
{
"id": "46495543",
"text": "Wrong. I will spend 30 minutes having the LLM explain every line of code and why it's important, with context-specific follow-up questions. An LLM is one of the best ways to learn ..."
}
,
{
"id": "46498299",
"text": "So far, each and every time I used an LLM to help me with something it hallucinated non-existent functions or was incorrect in an important but non-obvious way.\n\nThough, I guess I do treat LLMs as a last resort longshot for when other documentation is failing me."
}
,
{
"id": "46499247",
"text": "Knowing how to use LLMs is a skill. Just winging it without any practice or exploration of how the tool fails can produce poor results."
}
,
{
"id": "46499739",
"text": "\"You're holding it wrong\"\n\n99% of an LLM's usefulness vanishes, if it behaves like an addled old man.\n\n\"What's that sonny? But you said you wanted that!\"\n\n\"Wait, we did that last week? Sorry let me look at this again\"\n\n\"What? What do you mean, we already did this part?!\""
}
,
{
"id": "46500470",
"text": "Wrong mental model. Addled old men can't write code 1000x faster than any human."
}
,
{
"id": "46502791",
"text": "I'd prefer 1x \"wrong stuff\" than wrong stuff blasted 1000x. How is that helpful?\n\nFurther, they can't write code that fast, because you have to spend 1000x explaining it to them."
}
,
{
"id": "46498509",
"text": "Which LLMs have you tried? Claude Code seems to be decent at not hallucinating, Gemini CLI is more eager.\n\nI don't think current LLMs take you all the way but a powerful code generator is a useful thing, just assemble guardrails and keep an eye on it."
}
,
{
"id": "46498533",
"text": "Mostly chatgpt because I see 0 value in paying for any llm, nor do I wish to give up my data to any llm provider"
}
,
{
"id": "46500774",
"text": "They work better with project context and access to tools, so yeah, the web interface is not their best foot forward.\n\nThat doesn't mean the agents are amazing, but they can be useful."
}
,
{
"id": "46501151",
"text": "A simple \"how do I access x in y framework in the intended way\" shouldn't require any more context.\n\nInstead of telling me about z option it keeps hallucinating something that doesn't exist and even says it's in the docs when it isn't.\n\nLiterally just wasting my time"
}
,
{
"id": "46502065",
"text": "I was in the same camp until a few months ago. I now think they're valid tools, like compilers. Not in the sense that everyone compares them (compilers made asm development a minuscule niche of development).\n\nBut in the sense that even today many people don't use compilers or static analysis tools. But that world is slowly shrinking.\n\nSame for LLMs, the non LLM world will probably shrink.\n\nYou might be able to have a long and successful career without touching them for code development. Personally I'd rather check them out since tools are just tools."
}
,
{
"id": "46495900",
"text": "As long as what it says is reliable and not made up."
}
,
{
"id": "46496343",
"text": "I feel like we are just covering whataboutism tropes now.\n\nYou can absolutely learn from an LLM. Sometimes documentation sucks and the LLM has learned how to put stuff together from examples found in unusual places, and it works, and shows what the documentation failed to demonstrate.\n\nAnd with the people above, I agree - sometimes the fun is in the end process, and sometimes it is just filling in the complexity we do not have time or capacity to grab. I for one just cannot keep up with front end development. It's an insurmountable nightmare of epic proportions. I'm pretty skilled at my back end deep dive data and connecting APIs, however. So - AI to help put together a coherent interface over my connectors, and off we go for my side project. It doesn't need to be SOC2 compliant and OWASP proof, nor does it need ISO27001 compliance testing, because after all this is just for fun, for me."
}
,
{
"id": "46499710",
"text": "This is just not true. I have wasted many hours looking for answers to hard-to-phrase questions and learned very little from the process. If an LLM can get me the same result in 30 seconds, it's very hard for me to see that as a bad thing. It just means I can spend more time thinking about the thing I want to be thinking about. I think to some extent people are valorizing suffering itself."
}
,
{
"id": "46500078",
"text": "Learning means friction, it's not going to happen any other way."
}
,
{
"id": "46500405",
"text": "Some of it is friction, some of it is play. With AI you can get faster to the play part where you do learn a fair bit. But in a sense I agree that less is retained. I think that is not because of lack of friction, instead it's the fast pace of getting what you want now. You no longer need to make a conscious effort to remember any of it because it's effortless to get it again with AI if you ever need it. If that's what you mean by friction then I agree."
}
,
{
"id": "46503542",
"text": "I agree! I just don’t think the friction has to come from tediously trawling through a bunch of web pages that don’t contain the answer to your question."
}
,
{
"id": "46501138",
"text": "\"What an LLM is to me is the most remarkable tool that we've ever come up with, and it's the equivalent of an e-bike for our minds\""
}
,
{
"id": "46495058",
"text": "You can study the LLM output. In the “before times” I’d just clone a random git repo, use a template, or copy and paste stuff together to get the initial version working."
}
,
{
"id": "46496222",
"text": "Studying gibberish doesn't teach you anything. If you were cargo culting shit before AI you weren't learning anything then either."
}
,
{
"id": "46497363",
"text": "Necessarily, LLM output that works isn't gibberish.\n\nThe code that LLM outputs, has worked well enough to learn from since the initial launch of ChatGPT. This even though back then you might have to repeatedly say \"continue\" because it would stop in the middle of writing a function."
}
,
{
"id": "46498585",
"text": "Necessarily, LLM output that works isn't gibberish.\n\nHardly. Poorly conjured up code can still work."
}
,
{
"id": "46498677",
"text": "\"Gibberish\" code is necessarily code which doesn't work. Even in the broader use of the term: https://en.wikipedia.org/wiki/Gibberish\n\nEspecially in this context, if a mystery box solves a problem for me, I can look at the solution and learn something from that solution, cf. how paper was inspired by watching wasps at work.\n\nEven the abject failures can be interesting, though I find them more helpful for forcing my writing to be easier to understand."
}
,
{
"id": "46498490",
"text": "It's not gibberish. More than that, LLMs frequently write comments (some are fluff but some explain the reasoning quite well), variables are frequently named better than cdx, hgv, ti, stuff like that, plus looking at the reasoning while it's happening provides more clues.\n\nAlso, it's actually fun watching LLMs debug. Since they're reasonably similar to devs while investigating, but they have a data bank the size of the internet so they can pull hints that sometimes surprise even experienced devs.\n\nI think hard earned knowledge coming from actual coding is still useful to stay sharp but it might turn out the balance is something like 25% handmade - 75% LLM made."
}
,
{
"id": "46498555",
"text": "they have a data bank the size of the internet so they can\npull hints that sometimes surprise even experienced devs.\n\nThat's a polite way of phrasing \"they've stolen a mountain of information and overwhelmed resources that humans would use to otherwise find answers.\" I just discovered another victim: the Renesas forums. Cloudflare is blocking me from accessing the site completely, the only site I've ever had this happen to. But I'm glad you're able to have your fun.\n\nit might turn out the balance is something like 25% handmade - 75% LLM made.\n\nDoubtful. As the arms race continues AI DDoS bots will have less and less recent \"training\" material. Not a day goes by that I don't discover another site employing anti-AI bot software."
}
,
{
"id": "46498840",
"text": "> they've stolen a mountain of information\n\nIn law, training is not itself theft. Pirating books for any reason including training is still a copyright violation, but the judges ruled specifically that the training on data lawfully obtained was not itself an offence.\n\nCloudflare has to block so many more bots now precisely because crawling the public, free-to-everyone, internet is legally not theft. (And indeed would struggle to be, given all search engines have for a long time been doing just that).\n\n> As the arms race continues AI DDoS bots will have less and less recent \"training\" material\n\nMy experience as a human is that humans keep re-inventing the wheel, and if they instead re-read the solutions from even just 5 years earlier (or 10, or 15, or 20…) we'd have simpler code and tools that did all we wanted already.\n\nFor example, \"making a UI\" peaked sometime between the late 90s and mid 2010s with WYSIWYG tools like Visual Basic (and the mac equivalent now known as Xojo) and Dreamweaver, and then in the final part of that a few good years where Interface Builder finally wasn't sucking on Xcode. And then everyone on the web went for React and Apple made SwiftUI with a preview mode that kept crashing.\n\nIf LLMs had come before reactive UI, we'd have non-reactive alternatives that would probably suck less than all the weird things I keep seeing from reactive UIs."
}
,
{
"id": "46500756",
"text": "> That's a polite way of phrasing \"they've stolen a mountain of information and overwhelmed resources that humans would use to otherwise find answers.\"\n\nYes, but I can't stop them, can you?\n\n> But I'm glad you're able to have your fun.\n\nUnfortunately I have to be practical.\n\n> Doubtful. As the arms race continues AI DDoS bots will have less and less recent \"training\" material. Not a day goes by that I don't discover another site employing anti-AI bot software.\n\nAlmost all these BigCos are using their internal code bases as material for their own LLMs. They're also increasingly instructing their devs to code primarily using LLMs.\n\nThe hope that they'll run out of relevant material is slim.\n\nOh, and at this point it's less about the core/kernel/LLMs than it is about building ol' fashioned procedural tooling aka code around the LLM, so that it can just REPL like a human. Turns out a lot of regular coding and debugging is what a machine would do, READ-EVAL-PRINT.\n\nI have no idea how far they're going to go, but the current iteration of Claude Code can generate average or better code, which is an improvement in many places."
}
,
{
"id": "46500489",
"text": "Not necessarily. The end result of googling a problem might be copying a working piece of code off of stack exchange etc. without putting any work into understanding it.\n\nSome people will try to vibe out everything with LLMs, but other people will use them to help engage with their coding more directly and better understand what's happening, not do worse."
}
,
{
"id": "46495302",
"text": "I don't think \"learning\" is a goal here..."
}
,
{
"id": "46495563",
"text": "I don't want to waste time learning how to install and configure ephemeral tools that will be obsolete before I ever need to use them again."
}
,
{
"id": "46496101",
"text": "Exactly, the whole point is it wouldn’t take 30 minutes (more like 3 hours) if the tooling didn’t change all the fucking time. And if the ecosystem wasn’t a house of cards 8 layers of json configuration tall.\n\nInstead you’d learn it, remember it, and it would be useful next time. But it’s not."
}
,
{
"id": "46498338",
"text": "And I don't want to use tools I don't understand at least to some degree. I always get nervous when I do something but don't know why I do that something"
}
,
{
"id": "46500346",
"text": "Depends on what level of abstraction you're comfortable with. I have no problem driving a car I didn't build."
}
,
{
"id": "46500397",
"text": "I didn't build my car either. But I understand a bit of most of the main mechanics, like how the ABS works, how powered steering does, how an ICE works and so on."
}
,
{
"id": "46498232",
"text": "I don't think I'll learn anything by yet again implementing authentication, password reset, forgotten password, etc."
}
,
{
"id": "46496766",
"text": ">> The difference is that after you’ve googled it for ½ hour, you’ve learned something.\n\nI've been programming for 15+ years, and I think I've forgotten the overwhelming majority of the things I've googled. Hell, I can barely remember the things I've googled yesterday."
}
,
{
"id": "46497029",
"text": "Additionally, in the good/bad old days of using StackOverflow, maybe 10% of the answers actually explained how that thing you wanted to do actually worked, the rest just dumped some code on you and left you to figure it out by yourself, or more likely just copy & paste it and be happy when it worked (if you were lucky)..."
}
,
{
"id": "46496401",
"text": "Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs. There will never be a time when we need to solve this manually like it's 2019. Even in 2019 we would probably have used Google, solving was already based on extensive web resources. While in 1995 you would really have needed to do it manually.\n\nInstead of manual coding training your time is better invested in learning to channel coding agents, how to test code to our satisfaction, how to know if what AI did was any good. That is what we need to train to do. Testing without manual review, because manual review is just vibes, while tests are hard. If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.\n\nHow do we automate our human in the loop vibe reactions?"
}
,
{
"id": "46498534",
"text": "> Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs.\n\nThis is funny in the sense that in a properly built urban environment bicycles are one of the best ways to add some physical activity in a time constrained schedule, as we're discovering."
}
,
{
"id": "46498770",
"text": "> Instead of manual coding training your time is better invested in learning to channel coding agents\n\nAll channelling is broken when the model is updated. Being knowledgeable about the foibles of a particular model release is a waste of time.\n\n> how to test code to our satisfaction\n\nSure testing has value.\n\n> how to know if what AI did was any good\n\nThis is what code review is for.\n\n> Testing without manual review, because manual review is just vibes\n\nCalling manual review vibes is utterly ridiculous. It's not vibes to point out an O(n!) structure. It's not vibes to point out missing cases.\n\nIf your code reviews are 'vibes', you're bad at code review\n\n> If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.\n\nTo fix the analogy you're not reviewing the motorcycle, you're reviewing the motorcycle's behaviour during the lap."
}
,
{
"id": "46506753",
"text": "> This is what code review is for.\n\nMy point is that visual inspection of code is just \"vibe testing\", and you can't reproduce it. Even you yourself, 6 months later, can't fully repeat the vibe check \"LGTM\" signal. That is why the proper form is a code test."
}
,
{
"id": "46497585",
"text": "Yes and no.\n\nYes, I reckon coding is dead.\n\nNo, that doesn't mean there's nothing to learn.\n\nPeople like to make comparisons to calculators rendering mental arithmetic obsolete, so here's an anecdote: First year of university, I went to a local store and picked up three items each costing less than £1, the cashier rang up a total of more than £3 (I'd calculated the exact total and pre-prepared the change before reaching the head of the queue, but the exact price of 3 items isn't important enough to remember 20+ years later). The till itself was undoubtedly perfectly executing whatever maths it had been given, I assume the cashier mistyped or double-scanned. As I said, I had the exact total, the fact that I had to explain \"three items costing less than £1 each cannot add up to more than £3\" to the cashier shows that even this trivial level of mental arithmetic is not universal.\n\nI now code with LLMs. They are so much faster than doing it by hand. But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking). I've experimented with that to see what the result is, and the result is technical debt building up. I know what to do about that because of my experience with it in the past, and I can guide the LLM through that process, but if I didn't have that experience, the LLM would pile up more and more technical debt and grind the metaphorical motorbike's metaphorical wheels into the metaphorical mud."
}
,
{
"id": "46506783",
"text": "> But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking).\n\nCode review done visually is \"just vibe testing\" in my book. It is not something you can reproduce, it depends on the context in your head this moment. So we need actual code tests. Relying on \"Looks Good To Me\" is hand waving, code smell level testing.\n\nWe are discussing vibe coding but the problem is actually vibe testing. You don't even need to be in the AI age to vibe test, it's how we always did it when manually reviewing code. And in this age it means \"walking your motorcycle\" speed, we need to automate this by more extensive code tests."
}
,
{
"id": "46494273",
"text": "The difference is whether or not you find computers interesting and enjoy understanding how they work.\n\nFor the people who just want to solve some problem unrelated to computers but require a computer for some part of the task, yes AI would be more “fun”."
}
,
{
"id": "46494358",
"text": "I don’t find this to be true. I enjoy computers quite a bit. I enjoy the hardware, scaling problems, theory behind things, operating systems, networking, etc.\n\nMost of all I find what computers allow humanity to achieve extremely interesting and motivating. I call them the world’s most complicated robot.\n\nI don’t find coding overly fun in itself. What I find fun is the results I get when I program something that has the result I desire. Maybe that’s creating a service for friends to use, maybe it’s a personal IT project, maybe it’s having commercial quality WiFi at home everyone is amazed at when they visit, etc. Sometimes - even often - it’s the understanding that leads to pride in craftsmanship.\n\nBut programming itself is just a chore for me to get done in service of whatever final outcome I’m attempting to achieve. Could be delivering bits on the internet for work, or automating OS installs to look at the 50 racks of servers humming away with cable porn level work done in the cabinets.\n\nI never enjoyed messing around with HTML all that much in the 90s. But I was motivated to learn it just enough to achieve the cool ideas I could come up with as a teenager and share them with my friends.\n\nI can appreciate clean maintainable code, which is the only real reason LLMs don’t scratch the itch as much as you’d expect for someone like me."
}
]
</comments_to_classify>
Based on the comments above, assign each to up to 3 relevant topics.
Return ONLY a JSON array with this exact structure (no other text):
[
{
"id": "comment_id_1",
"topics": [
1,
3,
5
]
}
,
{
"id": "comment_id_2",
"topics": [
2
]
}
,
{
"id": "comment_id_3",
"topics": [
0
]
}
,
...
]
Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment
Remember: Output ONLY the JSON array, no other text.