The following is content for you to classify. Do not respond to the comments—classify them.
<topics>
1. AI Performance on Greenfield vs. Legacy
Related: Users debate whether agents excel primarily at starting new projects from scratch while struggling to maintain large, complex, or legacy codebases without breaking existing conventions.
2. Context Window Limitations and Management
Related: Discussions focus on token limits (200k), performance degradation as context fills, and strategies like compacting history, using sub-agents, or maintaining summary files to preserve long-term memory.
3. Vibe Coding and Code Quality
Related: The polarization around building apps without reading the code; critics warn of unmaintainable "slop" and technical debt, while proponents value the speed and ability to bypass syntax.
4. Claude Code and Tooling
Related: Specific praise and critique for the Claude Code CLI, its integration with VS Code and Cursor, the use of slash commands, and comparisons to GitHub Copilot's agent mode.
5. Economic Impact on Software Jobs
Related: Existential anxiety regarding the obsolescence of mid-level engineers, the potential "hollowing out" of the middle class, and the shift toward one-person unicorn teams.
6. Prompt Engineering and Configuration
Related: Strategies involving `CLAUDE.md`, `AGENTS.md`, and custom system prompts to teach the AI coding conventions, architecture, and specific skills for better output.
7. Specific Language Capabilities
Related: Anecdotal evidence regarding proficiency in React, Python, and Go versus struggles in C++, Rust, and mobile development (Swift/Kotlin), often tied to training data availability.
8. Engineering vs. Coding
Related: A recurring distinction between "coding" (boilerplate, standard patterns) which AI conquers, and "engineering" (novel logic, complex systems, 3D graphics) where AI supposedly still fails.
9. Security and Trust
Related: Concerns about deploying unaudited AI code, the introduction of vulnerabilities, the risks of giving agents shell access, and the difficulty of verifying AI output.
10. The Skill Issue Argument
Related: Proponents dismiss failures as "skill issues," suggesting frustration stems from poor prompting or adaptability, while skeptics argue the tools are genuinely inconsistent.
11. Cost of AI Development
Related: Analysis of the financial viability of AI coding, including hitting API rate limits, the high cost of Opus 4.5 tokens, and the potential unsustainability of VC-subsidized pricing.
12. Future of Software Products
Related: Predictions that software creation costs will drop to zero, leading to a flood of bespoke personal apps replacing commercial SaaS, but potentially creating a maintenance nightmare.
13. Human-in-the-Loop Workflows
Related: The consensus that AI requires constant human oversight, "tools in a loop," and code review to prevent hallucination loops and ensure functional software.
14. Opus 4.5 vs. Previous Models
Related: Users describe the specific model as a "step change" or "inflection point" compared to Sonnet 3.5 or GPT-4, citing better reasoning and autonomous behavior.
15. Documentation and Specification
Related: The shift from writing code to writing specs; users find that detailed markdown documentation or "plan mode" yields significantly better AI results than vague prompts.
16. AI Hallucinations and Errors
Related: Reports of AI inventing non-existent CLI tools, getting stuck in logical loops, failing at visual UI tasks, and making simple indexing errors.
17. Shift in Developer Role
Related: The idea that developers are evolving into "product managers" or "architects" who direct agents, requiring less syntax proficiency and more systems thinking.
18. Testing and Verification
Related: The reliance on test-driven development (TDD), linters, and compilers to constrain non-deterministic AI output, ensuring generated code actually runs and meets requirements.
19. Local Models vs. Cloud APIs
Related: Discussions on the viability of local models for privacy and cost savings versus the necessity of massive cloud models like Opus for complex reasoning tasks.
20. Societal Implications
Related: Broader philosophical concerns about wealth concentration, the "class war" of automation, environmental impact, and the future of work in a post-code world.
0. Does not fit well in any category
</topics>
<comments_to_classify>
[
{
"id": "46530076",
"text": "I strongly concur with your second statement. Anything other than agent mode in GH copilot feels useless to me. If I want to engage Opus through GH copilot for planning work, I still use agent mode and just indicate the desired output is whatever.md. I obviously only do this in environments lacking a better tool (Claude Code)."
}
,
{
"id": "46522210",
"text": "Check out Antigravity+Google AI Pro $20 plan+Opus 4.5. apparently the Opus limits are insanely generous (of course that could change on a dime)."
}
,
{
"id": "46521534",
"text": "I'd used both CC and Copilot Agent Mode in VSCode, but not the combination of CC + Opus 4.5, and I agree, I was happy enough with Copilot.\n\nThe gap didn't seem big, but in November (which admittedly was when Opus 4.5 was in preview on Copilot) Opus 4.5 with Copilot was awful."
}
,
{
"id": "46521191",
"text": "I suspect that's the other thing at play here; many people have only tried Copilot because it's cheap with all the other Microsoft subscriptions many companies have. Copilot frankly is garbage compared to Cursor/Claude, even with the same exact models."
}
,
{
"id": "46521109",
"text": "This was me. I have done a full 180 over the last 12 months or so, from \"they're an interesting idea, and technically impressive, but not practically useful\" to \"holy shit I can have entire days/weeks where I don't write a single line of code\"."
}
,
{
"id": "46516600",
"text": "my issue hasn't been for a long time now that the code they write works or doesn't work. My issues all stem from that it works, but does the wrong thing"
}
,
{
"id": "46519305",
"text": "> My issues all stem from that it works, but does the wrong thing\n\nIt's an opportunity, not a problem. Because it means there's a gap in your specifications and then your tests.\n\nI use Aider not Claude but I run it with Anthropic models. And what I found is that comprehensively writing up the documentation for a feature spec style before starting eliminates a huge amount of what you're referring to. It serves a triple purpose (a) you get the documentation, (b) you guide the AI and (c) it's surprising how often this helps to refine the feature itself. Sometimes I invoke the AI to help me write the spec as well, asking it to prompt for areas where clarification is needed etc."
}
,
{
"id": "46519434",
"text": "This is how Beads works, especially with Claude Code. What I do is I tell Claude to always create a Bead when I tell it to add something, or about something that needs to be added, then I start brainstorming, and even ask it to do market research what are top apps doing for x, y or z. Then ask it to update the bead (I call them tasks) and then finally when its got enough detail, I tell it, do all of these in parallel."
}
,
{
"id": "46519847",
"text": "Beads is amazing. It’s such a simple concept but elevates agentic coding to another levels"
}
,
{
"id": "46519113",
"text": "If it does the wrong thing you tell it what the right thing is and have it try again.\n\nWith the latest models if you're clear enough with your requirements you'll usually find it does the right thing on the first try."
}
,
{
"id": "46519633",
"text": "There are several rubs with that operating protocol extending beyond the \"you're holding it wrong\" claim.\n\n1) There exists a threshold, only identifiable in retrospect, past which it would have been faster to locate or write the code yourself than to navigate the LLM's correction loop or otherwise ensure one-shot success.\n\n2) The intuition and motivations of LLMs derive from a latent space that the LLM cannot actually access. I cannot get a reliable answer on why the LLM chose the approaches it did; it can only retroactively confabulate. Unlike human developers who can recall off-hand, or at least review associated tickets and meeting notes to jog their memory. The LLM prompter always documenting sufficiently to bridge this LLM provenance gap hits rub #1.\n\n3) Gradually building prompt dependency where one's ability to take over from the LLM declines and one can no longer answer questions or develop at the same velocity themselves.\n\n4) My development costs increasingly being determined by the AI labs and hardware vendors they partner with. Particularly when the former will need to increase prices dramatically over the coming years to break even with even 2025 economics."
}
,
{
"id": "46519717",
"text": "The value I'm getting from this stuff is so large that I'll take those risks, personally."
}
,
{
"id": "46521487",
"text": "Glad you found a way to be unfalsifiable! Lol"
}
,
{
"id": "46523001",
"text": "Many people - simonw is the most visible of them, but there are countless others - have given up trying to convinced folks who are determined to not be convinced, and are simply enjoying their increased productivity. This is not a competition or an argument."
}
,
{
"id": "46523344",
"text": "Maybe they are struggling to convince others because they are unable to produce evidence that is able to convince people?\n\nMy experience scrolling X and HN is a bunch of people going \"omg opus omg Claude Code I'm 10x more productive\" and that's it. Just hand wavy anecdotes based on their own perceived productivity. I'm open to being convinced but just saying stuff is not convincing. It's the opposite, it feels like people have been put under a spell.\n\nI'm following The Primeagen, he's doing a series where he is trying these tools on stream and following peoples advice on how to use them the best. He's actually quite a good programmer so I'm eager to see how it goes. So far he isn't impressed and thus neither am I. If he cracks it and unlocks significant productivity then I will be convinced."
}
,
{
"id": "46523644",
"text": ">> Maybe they are struggling to convince others because they are unable to produce evidence that is able to convince people?\n\nSimon has produced plenty of evidence over the past year. You can check their submission history and their blog: https://simonwillison.net/\n\nThe problem with people asking for evidence is that there's no level of evidence that will convince them. They will say things like \"that's great but this is not a novel problem so obviously the AI did well\" or \"the AI worked only because this is a greenfield project, it fails miserably in large codebases\"."
}
,
{
"id": "46523836",
"text": "It's true that some people will just continually move the goalposts because they are invested in their beliefs. But that doesn't mean that the skepticism around certain claims aren't relevant.\n\nNobody serious is disputing that LLM's can generate working code. They dispute claims like \"Agentic workflows will replace software developers in the short to medium term\", or \"Agentic workflows lead to 2-100x improvements in productivity across the board\". This is what people are looking for in terms of evidence and there just isn't any.\n\nThus far, we do have evidence that AI (at least in OSS) produces a 19% decrease in productivity [0]. We also have evidence that it harms our cognitive abilities [1]. Anecdotally, I have found myself lazily reaching for LLM assistance when encountering a difficult problem instead of thinking deeply about the problem. Anecdotally I also struggle to be more productive using AI-centric agents workflows in areas of expertise .\n\nWe want evidence that \"vibe engineering\" is actually more productive across the entire lifespan of a software project. We want evidence that it produces better outcomes. Nobody has yet shown that. It's just people claiming that because they vibe coded some trivial project, all of software development can benefit from this approach. Recently a principle engineer at Google claimed that Claude Code wrote their team's entire year's worth of work in a single afternoon. They later walked that claim back, but most do not.\n\nI'm more than happy to be convinced but it's becoming extremely tiring to hear the same claims being parroted without evidence and then you get called a luddite when you question it. It's also tiring when you push them on it and they blame it on the model you use, and then the agent, and then the way you handle context, and then the prompts, and then \"skill issue\". Meanwhile all they have to show is some slop that could be hand coded in a couple hours by someone familiar with the domain. I use AI, I was pretty bullish on it for the last two years, and the combination of it simply not living up to expectations + the constant barrage of what feels like a stealth marketing campaign parroting the same thing over and over (the new model is way better, unlike the other times we said that) + the amount of absolute slop code that seems to continue to increase + companies like Microsoft producing worse and worse software as they shoehorn AI into every single product (Office was renamed to Copilot 365). I've become very sensitive to it, much in the same way I was very sensitive to the claims being made by certain VC backed webdev companies regarding their product + framework in the last few years.\n\nI'm not even going to bring up the economic, social, and environmental issues because I don't think they're relevant, but they do contribute to my annoyance with this stuff.\n\n[0] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...\n[1] https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling..."
}
,
{
"id": "46523917",
"text": "> Thus far, we do have evidence that AI (at least in OSS) produces a 19% decrease in productivity\n\nI generally agree with you, but I'd be remiss if I didn't point out that it's plausible that the slow down observed in the METR study was at least partially due to the subjects lack of experience with LLMs. Someone with more experience performed the same experiment on themselves, and couldn't find a significant difference between using LLMs and not [0]. I think the more important point here is that programmers subjective assessment of how much LLMs help them is not reliable, and biased towards the LLMs.\n\n[0] https://mikelovesrobots.substack.com/p/wheres-the-shovelware..."
}
,
{
"id": "46524056",
"text": "I think we're on the same page re. that study. Actually your link made me think about the ongoing debate around IDE's vs stuff like Vim. Some people swear by IDE's and insist they drastically improve their productivity, others dismiss them or even claim they make them less productive. Sound familiar? I think it's possible these AI tools are simply another way to type code, and the differences averaged out end up being a wash."
}
,
{
"id": "46527586",
"text": "IDEs vs vim makes a lot of sense. AI really does feel like using an IDE in a certain way\n\nUsing AI for me absolutely makes it feel like I'm more productive. When I look back on my work at the end of the day and look at what I got done, it would be ludicrous to say it was multiple times the amount as my output pre-AI\n\nDespite all the people replying to me saying \"you're holding it wrong\" I know the fix to it doing the wrong thing. Specify in more detail what I want. The problem with that is twofold:\n\n1. How much to specify? As little as possible is the ideal, if we want to maximize how much it can help us. A balance here is key. If I need to detail every minute thing I may as well write the code myself\n\n2. If I get this step wrong, I still have to review everything, rethink it, go back and re-prompt, costing time\n\nWhen I'm working on production code, I have to understand it all to confidently commit. It costs time for me to go over everything, sometimes multiple iterations. Sometimes the AI uses things I don't know about and I need to dig into it to understand it\n\nAI is currently writing 90% of my code. Quality is fine . It's fun! It's magical when it nails something one-shot. I'm just not confident it's faster overall"
}
,
{
"id": "46532590",
"text": "I think this is an extremely honest perspective. It's actually kind of cool that it's gotten to the point it can write most code - albeit with a lot of handholding."
}
,
{
"id": "46521192",
"text": "I've said this multiple times:\n\nThis is why you use this AI bubble (it IS a bubble) to use the VC-funded AI models for dirt cheap prices and CREATE tools for yourself.\n\nNeed a very specific linter? AI can do it. Need a complex Roslyn analyser? AI. Any kind of scripting or automation that you run on your own machine. AI.\n\nNone of that will go away or suddenly stop working when the bubble bursts.\n\nWithin just the last 6 months I've built so many little utilities to speed up my work (and personal life) it's completely bonkers. Most went from \"hmm, might be cool to...\" to a good-enough script/program in an evening while doing chores.\n\nEven better, start getting the feel for local models. Current gen home hardware is getting good enough and the local models smart enough so you can, with the correct tooling, use them for suprisingly many things."
}
,
{
"id": "46528769",
"text": "> Even better, start getting the feel for local models. Current gen home hardware is getting good enough and the local models smart enough so you can, with the correct tooling, use them for suprisingly many things.\n\nAre there any local models that are at least somewhat comparable to the latest-and-greatest (e.g. Opus 4.5, Gemini 3), especially in terms of coding?"
}
,
{
"id": "46522505",
"text": "A risk I see with this approach is that when the bubble pops, you'll be left dependent on a bunch of tools which you don't know how to maintain or replace on your own, and won't have/be able to afford access to LLMs to do it for you."
}
,
{
"id": "46524399",
"text": "The \"tools\" in this context are literally a few hundred lines of Python or Github CI build pipeline, we're not talking about 500kLOC massive applications.\n\nI'm building tools, not complete factories :) The AI builds me a better hammer specifically for the nails I'm nailing 90% of the time. Even if the AI goes away, I still know how the custom hammer works."
}
,
{
"id": "46522989",
"text": "I thought that initially, but I don't think the skills AI weakens in me are particularly valuable\n\nLet's say AI becomes too expensive - I more or less only have to sharpen up being able to write the language. My active recall of the syntax, common methods and libraries. That's not hard or much of a setback\n\nMaybe this would be a problem if you're purely vibe coding, but I haven't seen that work long term"
}
,
{
"id": "46523408",
"text": "Open source models hosted by independent providers (or even yourself, which if the bubble pops will be affordable if you manage to pick up hardware on fire sales) are already good enough to explain most code."
}
,
{
"id": "46529888",
"text": "> 1) There exists a threshold, only identifiable in retrospect, past which it would have been faster to locate or write the code yourself than to navigate the LLM's correction loop or otherwise ensure one-shot success.\n\nI can run multiple agents at once, across multiple code bases (or the same codebase but multiple different branches), doing the same or different things. You absolutely can't keep up with that. Maybe the one singular task you were working on, sure, but the fact that I can work on multiple different things without the same cognitive load will blow you out of the water.\n\n> 2) The intuition and motivations of LLMs derive from a latent space that the LLM cannot actually access. I cannot get a reliable answer on why the LLM chose the approaches it did; it can only retroactively confabulate. Unlike human developers who can recall off-hand, or at least review associated tickets and meeting notes to jog their memory. The LLM prompter always documenting sufficiently to bridge this LLM provenance gap hits rub #1.\n\nTell the LLM to document in comments why it did things. Human developers often leave and then people with no knowledge of their codebase or their \"whys\" are even around to give details. Devs are notoriously terrible about documentation.\n\n> 3) Gradually building prompt dependency where one's ability to take over from the LLM declines and one can no longer answer questions or develop at the same velocity themselves.\n\nYou can't develop at the same velocity, so drop that assumption now. There's all kinds of lower abstractions that you build on top of that you probably can't explain currently.\n\n> 4) My development costs increasingly being determined by the AI labs and hardware vendors they partner with. Particularly when the former will need to increase prices dramatically over the coming years to break even with even 2025 economics.\n\nYou aren't keeping up with the actual economics. This shit is technically profitable, the unprofitable part is the ongoing battle between LLM providers to have the best model. They know software in the past has often been winner takes all so they're all trying to win."
}
,
{
"id": "46519459",
"text": "In a circuitous way, you can rather successfully have one agent write a specification and another one execute the code changes. Claude code has a planning mode that lets you work with the model to create a robust specification that can then be executed, asking the sort of leading questions for which it already seems to know it could make an incorrect assumption. I say 'agent' but I'm really just talking about separate model contexts, nothing fancy."
}
,
{
"id": "46519861",
"text": "Cursor's planning functionality is very similar and I have found that I can even use \"cheap\" models like their Composer-1 and get great results in the planning phase, and then turn on Sonnet or Opus to actually produce the plan. 90% of the stuff I need to argue about is during the planning phase, so I save a ton of tokens and rework just making a really good spec.\n\nIt turns out that Waterfall was always the correct method, it's just really slow ;)"
}
,
{
"id": "46529489",
"text": "Did you know that software specifications used to be almost entirely flow charts? There is something to be said for that and waterfall."
}
,
{
"id": "46519310",
"text": "Even better, have it write code to describe the right thing then run its code against that, taking yourself out of that loop."
}
,
{
"id": "46520942",
"text": "> With the latest models if you're clear enough with your requirements you'll usually find it does the right thing on the first try\n\nThat's great that this is your experience, but it's not a lot of people's. There are projects where it's just not going to know what to do.\n\nI'm working in a web framework that is a Frankenstein-ing of Laravel and October CMS. It's so easy for the agent to get confused because, even when I tell it this is a different framework, it sees things that look like Laravel or October CMS and suggests solutions that are only for those frameworks. So there's constant made up methods and getting stuck in loops.\n\nThe documentation is terrible, you just have to read the code. Which, despite what people say, Cursor is terrible at, because embeddings are not a real way to read a codebase."
}
,
{
"id": "46523525",
"text": "I'm working mostly in a web framework that's used by me and almost nobody else (the weird little ASGI wrapper buried in Datasette) and I find the coding agents pick it up pretty fast.\n\nOne trick I use that might work for you as well:\n\nClone GitHub.com/simonw/datasette to /tmp\nthen look at /tmp/docs/datasette for\ndocumentation and search the code\nif you need to\n\nTry that with your own custom framework and it might unblock things.\n\nIf your framework is missing documentation tell Claude Code to write itself some documentation based on what it learns from reading the code!"
}
,
{
"id": "46533500",
"text": "> I'm working mostly in a web framework that's used by me and almost nobody else (the weird little ASGI wrapper buried in Datasette) and I find the coding agents pick it up pretty fast\n\nPotentially because there is no baggage with similar frameworks. I'm sure it would have an easier time with this if it was not spun off from other frameworks.\n\n> If your framework is missing documentation tell Claude Code to write itself some documentation based on what it learns from reading the code!\n\nIf Claude cannot read the code well enough to begin with, and needs supplemental documentation, I certainly don't want it generating the docs from the code. That's just compounding hallucinations on top of each other."
}
,
{
"id": "46519442",
"text": "And if you've told it too many times to fix it, tell it someone has a gun to your head, for some reason it almost always gets it right this very next time."
}
,
{
"id": "46522585",
"text": "If you're a developer at the dawn of the AI revolution, there is absolutely a gun to your head."
}
,
{
"id": "46526829",
"text": "Yeah, if anyone can truly afford the AI empire. Remember all these \"leading\" companies are running it at a loss, so most companies paying for it are severely underpaying the cost of it all. We would need an insane technological breakthrough of unlimited memory and power before I start to worry, and at that point, I'll just look for a new career."
}
,
{
"id": "46519077",
"text": "I think it's worth understanding why. Because that's not everyone's experience and there's a chance you could make a change such that you find it extremely useful.\n\nThere's a lesser chance that you're working on a code base that Claude Code just isn't capable of helping with."
}
,
{
"id": "46519288",
"text": "Correct it then, and next time craft a more explicit plan."
}
,
{
"id": "46519460",
"text": "The more explicit/detailed your plan, the more context it uses up, the less accurate and generally functional it is. Don't get me wrong, it's amazing, but on a complex problem with large enough context it will consistently shit the bed."
}
,
{
"id": "46521196",
"text": "The human still has to manage complexity. A properly modularized and maintainable code base is much easier for the LLM to operate on — but the LLM has difficulty keeping the code base in that state without strong guidance.\n\nPutting “Make minimal changes” in my standard prompt helped a lot with the tendency of basically all agents to make too many changes at once. With that addition it became possible to direct the LLM to make something similar to the logical progression of commits I would have made anyway, but now don’t have to work as hard at crafting.\n\nMost of the hype merchants avoid the topic of maintainability because they’re playing to non-technical management skeptical of the importance of engineering fundamentals. But everything I’ve experienced so far working with LLMs screams that the fundamentals are more important than ever."
}
,
{
"id": "46522243",
"text": "It takes a lot of plan to use up the context and most of the time the agent doesn't need the whole plan, they just need what's relevant to the current task."
}
,
{
"id": "46519972",
"text": "It usually works well for me. With very big tasks I break the plan into multiple MD files with the relevant context included and work through in individual sessions, updating remaining plans appropriately at the end of each one (usually there will be decision changes or additions during iteration)."
}
,
{
"id": "46519749",
"text": "> I really think a lof of people tried AI coding earlier, got frustrated at the errors and gave up. That's where the rejection of all these doomer predictions comes from.\n\nIt's not just the deficiencies of earlier versions, but the mismatch between the praise from AI enthusiasts and the reality.\n\nI mean maybe it is really different now and I should definitely try uploading all of my employer's IP on Claude's cloud and see how well it works. But so many people were as hyped by GPT-4 as they are now, despite GPT-4 actually being underwhelming.\n\nToo much hype for disappointing results leads to skepticism later on, even when the product has improved."
}
,
{
"id": "46520054",
"text": "I feel similar, I'm not against the idea that maybe LLMs have gotten so much better... but I've been told this probably 10 times in the last few years working with AI daily.\n\nThe funny part about rapidly changing industries is that, despite the fomo, there's honestly not any reward to keeping up unless you want to be a consultant. Otherwise, wait and see what sticks. If this summer people are still citing the Opus 4.5 was a game changing moment and have solid, repeatable workflows, then I'll happily change up my workflow.\n\nSomeone could walk into the LLM space today and wouldn't be significantly at a loss for not having paid attention to anything that had happened in the last 4 years other than learning what has stuck since then."
}
,
{
"id": "46530016",
"text": "> The funny part about rapidly changing industries is that, despite the fomo, there's honestly not any reward to keeping up unless you want to be a consultant.\n\nLMAO what???"
}
,
{
"id": "46531183",
"text": "I've lived through multiple incredibly rapid changes in tech throughout my career, and the lesson always learned was there is a lot of wasted energy keeping up.\n\nTwo big examples:\n\n- Period from early mvc JavaScript frontends (backbone.js etc) and the time of the great React/Angular wars. I completely stepped out of the webdev space during that time period.\n\n- The rapid expansion of Deep Learning frameworks where I did try to keep up (shipped some Lua torch packages and made minor contributions to Pylearn2).\n\nIn the first case, missing 5 years of front-end wars had zero impact. After not doing webdev work at all for 5-years I was tasked with shipping a React app. It took me a week to catch up, and everything was deployed in roughly the same time as someone would have had they spent years keeping up with changes.\n\nIn the second case, where I did keep up with many of the developing deep learning frameworks, it didn't really confer any advantage. Coworkers who I worked with who started with Pytorch fresh out of school were just as proficient, if not more so, with building models. Spending energy keeping up offered no value other than feeling \"current\" at the time.\n\nCan you give me a counter example of where keeping up with a rapidly changing area that's unstable has conferred a benefit to you? Most of FOMO is really just fear . Again, unless you're trying to sell your self specifically as a consultant on the bleeding edge, there's no reason to keep up with all these changes (other than finding it fun)."
}
,
{
"id": "46532968",
"text": "You moved out of webdev for 5 years, not everybody else had that luxury. I'm sure it was beneficial to those people to keep up with webdev technologies."
}
,
{
"id": "46533071",
"text": "If everything changes every month, then stuff you learn next month would be obsolete in two months. This is a response to people saying \"adapt or be left behind\". There's so much thrashing that if you're not interested with the SOTA, you can just wait for everything to calm down and pick it up then."
}
]
</comments_to_classify>
Based on the comments above, assign each to up to 3 relevant topics.
Return ONLY a JSON array with this exact structure (no other text):
[
{
"id": "comment_id_1",
"topics": [
1,
3,
5
]
}
,
{
"id": "comment_id_2",
"topics": [
2
]
}
,
{
"id": "comment_id_3",
"topics": [
0
]
}
,
...
]
Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment
Remember: Output ONLY the JSON array, no other text.