llm/0c6097e3-bc76-4fbe-ab4f-ceafa2484e5f/batch-8-e89cdbc0-8ebb-43d1-9fbe-874882a3dd8c-input.json
The following is content for you to classify. Do not respond to the comments—classify them.
<topics>
1. AI Performance on Greenfield vs. Legacy
Related: Users debate whether agents excel primarily at starting new projects from scratch while struggling to maintain large, complex, or legacy codebases without breaking existing conventions.
2. Context Window Limitations and Management
Related: Discussions focus on token limits (200k), performance degradation as context fills, and strategies like compacting history, using sub-agents, or maintaining summary files to preserve long-term memory.
3. Vibe Coding and Code Quality
Related: The polarization around building apps without reading the code; critics warn of unmaintainable "slop" and technical debt, while proponents value the speed and ability to bypass syntax.
4. Claude Code and Tooling
Related: Specific praise and critique for the Claude Code CLI, its integration with VS Code and Cursor, the use of slash commands, and comparisons to GitHub Copilot's agent mode.
5. Economic Impact on Software Jobs
Related: Existential anxiety regarding the obsolescence of mid-level engineers, the potential "hollowing out" of the middle class, and the shift toward one-person unicorn teams.
6. Prompt Engineering and Configuration
Related: Strategies involving `CLAUDE.md`, `AGENTS.md`, and custom system prompts to teach the AI coding conventions, architecture, and specific skills for better output.
7. Specific Language Capabilities
Related: Anecdotal evidence regarding proficiency in React, Python, and Go versus struggles in C++, Rust, and mobile development (Swift/Kotlin), often tied to training data availability.
8. Engineering vs. Coding
Related: A recurring distinction between "coding" (boilerplate, standard patterns) which AI conquers, and "engineering" (novel logic, complex systems, 3D graphics) where AI supposedly still fails.
9. Security and Trust
Related: Concerns about deploying unaudited AI code, the introduction of vulnerabilities, the risks of giving agents shell access, and the difficulty of verifying AI output.
10. The Skill Issue Argument
Related: Proponents dismiss failures as "skill issues," suggesting frustration stems from poor prompting or adaptability, while skeptics argue the tools are genuinely inconsistent.
11. Cost of AI Development
Related: Analysis of the financial viability of AI coding, including hitting API rate limits, the high cost of Opus 4.5 tokens, and the potential unsustainability of VC-subsidized pricing.
12. Future of Software Products
Related: Predictions that software creation costs will drop to zero, leading to a flood of bespoke personal apps replacing commercial SaaS, but potentially creating a maintenance nightmare.
13. Human-in-the-Loop Workflows
Related: The consensus that AI requires constant human oversight, "tools in a loop," and code review to prevent hallucination loops and ensure functional software.
14. Opus 4.5 vs. Previous Models
Related: Users describe the specific model as a "step change" or "inflection point" compared to Sonnet 3.5 or GPT-4, citing better reasoning and autonomous behavior.
15. Documentation and Specification
Related: The shift from writing code to writing specs; users find that detailed markdown documentation or "plan mode" yields significantly better AI results than vague prompts.
16. AI Hallucinations and Errors
Related: Reports of AI inventing non-existent CLI tools, getting stuck in logical loops, failing at visual UI tasks, and making simple indexing errors.
17. Shift in Developer Role
Related: The idea that developers are evolving into "product managers" or "architects" who direct agents, requiring less syntax proficiency and more systems thinking.
18. Testing and Verification
Related: The reliance on test-driven development (TDD), linters, and compilers to constrain non-deterministic AI output, ensuring generated code actually runs and meets requirements.
19. Local Models vs. Cloud APIs
Related: Discussions on the viability of local models for privacy and cost savings versus the necessity of massive cloud models like Opus for complex reasoning tasks.
20. Societal Implications
Related: Broader philosophical concerns about wealth concentration, the "class war" of automation, environmental impact, and the future of work in a post-code world.
0. Does not fit well in any category
</topics>
<comments_to_classify>
[
{
"id": "46535314",
"text": "I get your point, but I'll just say that I did not intend my comment to be interpreted so literally.\n\nAlso, just because SOMEONE planted a flag in 2023 saying that an LLM could build an app certainly does NOT mean that \"people were not claiming that LLMs \"can't write a full feature\"\". People in this very thread are still claiming LLMs can't write features. Opinions vary."
}
,
{
"id": "46525798",
"text": "I use it on a 10 years codebase, needs to explain where to get context but successfully works 90% of time"
}
,
{
"id": "46521079",
"text": "There are two types of right/wrong ways to build: the context specific right/wrong way to build something and an overly generalized engineer specific right/wrong way to build things.\n\nI've worked on teams where multiple engineers argued about the \"right\" way to build something. I remember thinking that they had biases based on past experiences and assumptions about what mattered. It usually took an outsider to proactively remind them what actually mattered to the business case.\n\nI remember cases where a team of engineers built something the \"right\" way but it turned out to be the wrong thing. (Well engineered thing no one ever used)\n\nSometimes hacking something together messily to confirm it's the right thing to be building is the right way. Then making sure it's secure, then finally paying down some technical debt to make it more maintainable and extensible.\n\nWhere I see real silly problems is when engineers over-engineer from the start before it's clear they are building the right thing, or when management never lets them clean up the code base to make it maintainable or extensible when it's clear it is the right thing.\n\nThere's always a balance/tension, but it's when things go too far one way or another that I see avoidable failures."
}
,
{
"id": "46524494",
"text": "*I've worked on teams where multiple engineers argued about the \"right\" way to build something. I remember thinking that they had biases based on past experiences and assumptions about what mattered. It usually took an outsider to proactively remind them what actually mattered to the business case.*\n\nGosh I am so tired with that one - someone had a case that burned them in some previous project and now his life mission is to prevent that from happening ever again, and there would be no argument they will take.\n\nThen you get like up to 10 engineers on typical team and team rotation and you end up with all kinds of \"we have to do it right because we had to pull all nighter once, 5 years ago\" baked in the system.\n\nNot fun part is a lot of business/management people \"expect\" having perfect solution right away - there are some reasonable ones that understand you need some iteration."
}
,
{
"id": "46525611",
"text": ">someone had a case that burned them in some previous project and now his life mission is to prevent that from happening ever again\n\nIsn't that what makes them senior ? If you dont want that behaviour, just hire a bunch of fresh grad."
}
,
{
"id": "46525729",
"text": "No, extrapolating from one bad experience to universal approach does not make anyone senior.\n\nThere are situations where it applies and situation where it doesn't. Having the experience to see what applies in this new context is what senior (usually) means."
}
,
{
"id": "46526001",
"text": "The people I admire most talk a lot more about \"risk\" than about \"right vs. wrong\". You can do that thing that caused that all-nighter 5 years ago, it isn't \"wrong\", but it is risky, and the person who pulled that all-nighter has useful information about that risk. It often makes sense to accept risks, but it's always good to be aware that you're doing so."
}
,
{
"id": "46526980",
"text": "It's also important to consider the developers risk tolerance as well. It's all fine and dandy that the project manager is okay with the risk but what if none of the developers are? Or one senior dev is okay with it but the 3 who actually work the on-call queue are not?\n\nI don't get paid extra for after hours incidents (usually we just trade time), so it's well within my purview on when to take on extra risk. Obviously, this is not ideal, but I don't make the on-call rules and my ability to change them is not a factor."
}
,
{
"id": "46527148",
"text": "I don't think of this as a project manager's role, but an engineering manager's role. The engineers on the team (especially the senior engineers) should be identifying the risks, and the engineering managers should be deciding whether they are tolerable. That includes risks like \"the oncall is awful and morale collapses and everyone quits\".\n\nIt's certainly the case that there are managers who handle those risks poorly, but that's just bad management."
}
,
{
"id": "46526557",
"text": "Nope, not realizing something doesn't apply and not being able to take in arguments is cargo culting not being a senior."
}
,
{
"id": "46524365",
"text": "> I've worked on teams where multiple engineers argued about the \"right\" way to build something. I remember thinking that they had biases based on past experiences and assumptions about what mattered. It usually took an outsider to proactively remind them what actually mattered to the business case.\n\nMy first thought was that you probably also have different biases, priorities and/or taste. As always, this is probably very context-specific and requires judgement to know when something goes too far. It's difficult to know the \"most correct\" approach beforehand.\n\n> Sometimes hacking something together messily to confirm it's the right thing to be building is the right way. Then making sure it's secure, then finally paying down some technical debt to make it more maintainable and extensible.\n\nI agree that sometimes it is, but in other cases my experience has been that when something is done, works and is used by customers, it's very hard to argue about refactoring it. Management doesn't want to waste hours on it (who pays for it?) and doesn't want to risk breaking stuff (or changing APIs) when it works. It's all reasonable.\n\nAnd when some time passes, the related intricacies, bigger picture and initially floated ideas fade from memory. Now other stuff may depend on the existing implementation. People get used to the way things are done. It gets harder and harder to refactor things.\n\nAgain, this probably depends a lot on a project and what kind of software we're talking about.\n\n> There's always a balance/tension, but it's when things go too far one way or another that I see avoidable failures.\n\nI think balance/tension describes it well and good results probably require input from different people and from different angles."
}
,
{
"id": "46522343",
"text": "I know what you are talking about, but there is more to life than just product-market fit.\n\nHardly any of us are working on Postgres, Photoshop, blender, etc. but it's not just cope to wish we were.\n\nIt's good to think about the needs to business and the needs of society separately. Yes, the thing needs users, or no one is benefiting. But it also needs to do good for those users, and ultimately, at the highest caliber, craftsmanship starts to matter again.\n\nThere are legitimate reasons for the startup ecosystem to focus firstly and primarily on getting the users/customers. I'm not arguing against that. What I am arguing is why does the industry need to be dominated by startups in terms of the bulk of the products (not bulk of the users). It begs the question of how much societally-meaningful programming waiting to be done.\n\nI'm hoping for a world where more end users code (vibe or otherwise) and the solve their own problems with their own software. I think that will make more a smaller, more elite software industry that is more focused on infrastructure than last-mile value capture. The question is how to fund the infrastructure. I don't know except for the most elite projects, which is not good enough for the industry (even this hypothetical smaller one) on the whole."
}
,
{
"id": "46526040",
"text": "> I'm hoping for a world where more end users code (vibe or otherwise) and the solve their own problems with their own software. I think that will make more a smaller, more elite software industry that is more focused on infrastructure than last-mile value capture.\n\nYes! This is what I'm excited about as well. Though I'm genuinely ambivalent about what I want my role to be. Sometimes I'm excited about figuring out how I can work on the infrastructure side. That would be more similar to what I've done in my career thus far. But a lot of the time, I think that what I'd prefer would be to become one of those end users with my own domain-specific problems in some niche that I'm building my own software to help myself with. That sounds pretty great! But it might be a pretty unnatural or even painful change for a lot of us who have been focused for so long on building software tools for other people to use."
}
,
{
"id": "46529233",
"text": "Users will not care about the quality of your code, or the backed architecture, or your perfectly strongly typed language.\n\nThey only care about their problems and treat their computers like an appliance. They don't care if it takes 10 seconds or 20 seconds.\n\nThey don't even care if it has ads, popups, and junk.\nThey are used to bloatware and will gladly open their wallets if the tool is helping them get by.\n\nIt's an unfortunately reality but there it is, software is about money and solving problems. Unless you are working on a mission critical system that affects people's health or financial data, none of those matter much."
}
,
{
"id": "46529774",
"text": "I know the customer's couldn't care about the quality of the code they see. But the idea that they don't care about software being bad/laggy/bloated ever, because it \"still solves problems\", doesn't stand up to scrutiny as an immutable fact of the universe. Market conditions can change.\n\nI'm banking on a future that if users feel they can (perhaps vibe) code their own solutions, they are far less likely to open their wallets for our bloatware solutions. Why pay exorbitant rents for shitty SaaS if you can make your own thing ad-free, exactly to your own mental spec?\n\nI want the \"computers are new, programmers are in short supply, customer is desperate\" era we've had in my lifetime so far to come to a close."
}
,
{
"id": "46522504",
"text": "> There are legitimate reasons for the startup ecosystem to focus firstly and primarily on getting the users/customers. I'm not arguing against that. What I am arguing is why does the industry need to be dominated by startups in terms of the bulk of the products (not bulk of the users). It begs the question of how much societally-meaningful programming waiting to be done.\n\nYou slipped in \"societally-meaningful\" and I don't know what it means and don't want to debate merits/demerits of socialism/capitalism.\n\nHowever I think lots of software needs to be written because in my estimation with AI/LLM/ML it'll generate value.\n\nAnd then you have lots of software that needs to rewritten as firms/technologies die and new firms/technologies are born."
}
,
{
"id": "46522861",
"text": "I didn't mean to do some snide anticaptialism. Making new Postgreses and blenders is really hard. I don't think the startup ecosystem does a very good job, but I don't assume central planning would do a much better job either.\n\n(The method I have the most confidence in is some sort of mixed system where there is non-profit, state-planned, and startup software development all at once.)\n\nMarkets are a tool, a means to the end. I think they're very good, I'm a big fan! But they are not an excuse not to think about the outcome we want.\n\nI'm confident that the outcome I don't want is where most software developers are trying to find demand for their work, pivoting etc. it's very \"pushing a string\" or \"cart before the horse\". I want more \"pull\" where the users/benefiaries of software are better able to dictate or create themselves what they want, rather than being helpless until a pivoting engineer finds it for them.\n\nBasically start-up culture has combined theories of exogenous growth from technology change, and a baseline assumption that most people are and will remain hopelessly computer illiterate, into an ideology that assumes the best software is always \"surprising\", a paradigm shift, etc.\n\nStartups that make libraries/tools for other software developers are fortunately a good step in undermining these \"the customer is an idiot and the product will be better than they expect\" assumptions. That gives me hope we're reach a healthier mix of push and pull. Wild successes are always disruptive, but that shouldn't mean that the only success is wild, or trying to \"act disruptive before wild success\" (\"manifest\" paradigm shifts!) is always the best means to get there."
}
,
{
"id": "46526016",
"text": "I've worked in various roles, and I'm one of those people who is not computer illiterate and likes to build solutions that meet local needs.\n\nIt's got a lot easier technically to do that in recent year, and MUCH easier with AI.\n\nBut institutionally and in terms of governance it's got a lot harder. Nobody wants home-brew software anymore. Doing data management and governance is complex enough and involves enough different people that it's really hard to generate the momentum to get projects off the ground.\n\nI still think it's often the right solution and that successful orgs will go this route and retain people with the skills to make it happen. But the majority probably can't afford the time/complexity, and AI is only part of the balance that determines whether it's feasible."
}
,
{
"id": "46521573",
"text": "> ...multiple engineers argued about the \"right\" way to build something. I remember thinking that they had biases based on past experiences and assumptions about what mattered.\n\nI usually resolve this by putting on the table the consequences and their impacts upon my team that I’m concerned about, and my proposed mitigation for those impacts. The mitigation always involves the other proposer’s team picking up the impact remediation. In writing. In the SOP’s. Calling out the design decision by day of the decision to jog memories and names of those present that wanted the design as the SME’s. Registered with the operations center. With automated monitoring and notification code we’re happy to offer.\n\nOnce people are asked to put accountable skin in the sustaining operations, we find out real fast who is taking into consideration the full spectrum end to end consequences of their decisions. And we find out the real tradeoffs people are making, and the externalities they’re hoping to unload or maybe don’t even perceive."
}
,
{
"id": "46523341",
"text": "That's awesome, but I feel like half the time most people aren't in the position to add requirements so a lot of shenanigans still happens, especially in big corps"
}
,
{
"id": "46523215",
"text": "Anecdata but I’ve found Claude code with Opus 4.5 able to do many of my real tickets in real mid and large codebases at a large public startup. I’m at senior level (15+ years). It can browse and figure out the existing patterns better than some engineers on my team. It used a few rare features in the codebase that even I had forgotten about and was about to duplicate. To me it feels like a real step change from the previous models I’ve used which I found at best useless. It’s following style guides and existing patterns well, not just greenfield. Kind of impressive, kind of scary"
}
,
{
"id": "46523929",
"text": "Same anecdote for me (except I'm +/- 40 years experience). I consider my self a pretty good dev for non-web dev (GPU's, assembly, optimisation,...) and my conclusion is the same as you: impressive and scary. If the somehow the idea of what you want to do is on the web in text or in code, then Claude most likely has it. And its ability to understand my own codebases is just crazy (at my age, memory is declining and having Claude to help is just waow). Of course it fails some times, of course it need direction, but the thing it produces is really good."
}
,
{
"id": "46524134",
"text": "Scary is that the LLM might have been trained on the entire open source code ever produced - which is far beyond human comprehension - and with ever growing capability (bigger context window, more training) my gut feeling is that, it would exceed human capability in programming pretty soon. Considering 2025 was the ground breaking year for agents, can't stop imagine what would happen when it iterates in the next couple of years. I think it would evolve to be like Chess playing engines that consistently beat top Chess players in the world!"
}
,
{
"id": "46523741",
"text": "I'm seeing this as well. Not huge codebases but not tiny - 4 year old startup. I'm new there and it would have been impossible for me to deliver any value this soon.\n12 years experience; this thing is definitely amazing. Combined with a human it can be phenomenal. It also helped me tons with lots of external tools, understand what data/marketing teams are doing and even providing pretty crucial insights to our leadership that Gemini have noticed.\nI wouldn't try to completely automate the humans out of the loop though just yet, but this tech for sure is gonna downsize team numbers (and at the same time - allow many new startups to come to life with little capital that eventually might grow and hire people. So unclear how this is gonna affect jobs.)"
}
,
{
"id": "46524588",
"text": "I've also found it to keep such a constrained context window (on large codebases), that it writes a secondary block of code that already had a solution in a different area of the same file.\n\nNothing I do seems to fix that in its initial code writing steps. Only after it finishes, when I've asked it to go back and rewrite the changes, this time making only 2 or 3 lines of code, does it magically (or finally) find the other implementation and reuse it.\n\nIt's freakin incredible at tracing through code and figuring it out. I <3 Opus. However, it's still quite far from any kind of set-and-forget-it."
}
,
{
"id": "46521124",
"text": "Another thing that gets me with projects like this, there are already many examples of image converters, minesweeper clones etc that you can just fork on GitHub, the value of the LLM here is largely just stripping the copyright off"
}
,
{
"id": "46521308",
"text": "It’s kind of funny - there’s another thread up where a dev claimed a 20-50x speed up. To their credit they posted videos and links to the repo of their work.\n\nAnd when you check the work, a large portion of it was hand rolling an ORM (via an LLM). Relatively solved problem that an LLM would excel at, but also not meaningfully moving the needle when you could use an existing library. And likely just creating more debt down the road."
}
,
{
"id": "46523801",
"text": "I've hand-rolled my own ultra-light ORM because the off-the-shelf ones always do 100 things you don't need.*\n\nAnd of course the open source ones get abandoned pretty regularly. Type ORM, which a 3rd party vendor used on an app we farmed out to them, mutates/garbles your input array on a multi-line insert. That was a fun one to debug. The issue has been open forever and no one cares. https://github.com/typeorm/typeorm/issues/9058\n\nSo yeah, if I ever need an ORM again, I'm probably rolling my own.\n\n*(I know you weren't complaining about the idea of rolling your own ORM, I just wanted to vent about Type ORM. Thanks for listening.)"
}
,
{
"id": "46533794",
"text": "This is the thing that will be changing the open source and small/medium SaaS world a lot.\n\nWhy use a 3rd party dependency that might have features you don't need when you can write a hyper-specific solution in a day with an LLM and then you control the full codebase.\n\nOr why pay €€€ for a SaaS every month when you can replicate the relevant bits yourself?"
}
,
{
"id": "46521641",
"text": "Reminds me of a post I read a few days ago of someone crowing about an LLM writing for them an email format validator. They did not have the LLM code up an accompanying send-an-email-validation loop, and were blithely kept uninformed by the LLM of the scar tissue built up by experience in the industry on how curiously a deep rabbit hole email validation becomes.\n\nIf you’ve been around the block and are judicious how you use them, LLM’s are a really amazing productivity boost. For those without that judgement and taste, I’m seeing footguns proliferate and the LLM’s are not warning them when someone steps on the pressure plate that’s about to blow off their foot. I’m hopeful we will this year create better context window-based or recursive guardrails for the coding agents to solve for this."
}
,
{
"id": "46526188",
"text": "Yeah I love working with Claude Code, I agree that the new models are amazing, but I spend a decent amount of time saying \"wait, why are we writing that from scratch, haven't we written a library for that, or don't we have examples of using a third party library for it?\".\n\nThere is probably some effective way to put this direction into the claude.md, but so far it still seems to do unnecessary reimplementation quite a lot."
}
,
{
"id": "46523671",
"text": "This is a typical problem you see in autodidacts. They will recreate solutions to solved problems, trip over issues that could have been avoided, and generally do all of things you would expect someone to do if they are working with skill but no experience.\n\nLLMs accelerate this and make it more visible, but they are not the cause. It is almost always a person trying to solve a problem and just not knowing what they don't know because they are learning as they go."
}
,
{
"id": "46526290",
"text": "> [The cause] is almost always a person trying to solve a problem and just not knowing what they don't know because they are learning as they go.\n\nIsn't that what \"using an LLM\" is supposed to solve in the first place?"
}
,
{
"id": "46530446",
"text": "With the right prompt the LLM will solve it in the first place. But this is an issue of not knowing what you don't know, so it makes it difficult to write the right prompt. One way around this is to spawn more agents with specific tasks, or to have an agent that is ONLY focused on finding patterns/code where you're reinventing the wheel.\n\nI often have one agent/prompt where I build things but then I have another agent/prompt where their only job is to find codesmells, bad patterns, outdated libraries, and make issues or fix these problems."
}
,
{
"id": "46527827",
"text": "1. LLMs can't watch over someone and warn them when they are about to make a mistake\n\n2. LLMs are obsequious\n\n3. Even if LLMs have access to a lot of knowledge they are very bad at contextualizing it and applying it practically\n\nI'm sure you can think of many other reasons as well.\n\nPeople who are driven to learn new things and to do things are going to use whatever is available to them in order to do it. They are going to get into trouble doing that more often than not, but they aren't going to stop. No is helping the situation by sneering at them -- they are used it to it, anyway."
}
,
{
"id": "46527344",
"text": "I am hopeful autodidacts will leverage an LLM world like they did with an Internet search world from a library world from a printed word world. Each stage in that progression compressed the time it took for them to encompass a span of comprehension of a new body of understanding before applying to practice, expanded how much they applied the new understanding to, and deepened their adoption scope of best practices instead of reinventing the wheel.\n\nIn this regard, I see LLM's as a way for us to way more efficiently encode, compress, convey and enable operational practice our combined learned experiences. What will be really exciting is watching what happens as LLM's simultaneously draw from and contribute to those learned experiences as we do; we don't need full AGI to sharply realize massive benefits from just rapidly, recursively enabling a new highly dynamic form of our knowledge sphere that drastically shortens the distance from knowledge to deeply-nuanced praxis."
}
,
{
"id": "46524261",
"text": "My impression is that LLM users are the kind of people that HATED that their questions on StackOverflow got closed because it was duplicated."
}
,
{
"id": "46525869",
"text": "> My impression is that LLM users are the kind of people that HATED that their questions on StackOverflow got closed because it was duplicated.\n\nLol, who doesn't hate that?"
}
,
{
"id": "46526014",
"text": "I don't know, in 40 years codding I never had to ask a question there."
}
,
{
"id": "46526197",
"text": "So literally everyone in the world? Yeah, seems right!"
}
,
{
"id": "46526287",
"text": "I would love to see your closed SO questions.\n\nBut don't worry, those days are over, the LLMs it is never going to push back on your ideas."
}
,
{
"id": "46527052",
"text": "lol, I probably don't have any, actually. If I recall, I would just write comments when my question differed slightly from one already there.\n\nBut it's definitely the case that being able to go back and forth quickly with an LLM digging into my exact context, rather than dealing with the kind of judgy humorless attitude that was dominant on SO is hugely refreshing and way more productive!"
}
,
{
"id": "46523632",
"text": "It seems to me these days, any code I want to write tries to solve problems that LLMs already excel at. Thankfully my job is perhaps just 10% about coding, and I hope people like you still have some coding tasks that cannot be easily solved by LLMs.\n\nWe should not exeggarate the capabilities of LLMs, sure, but let's also not play \"don't look up\"."
}
,
{
"id": "46524046",
"text": "\"And likely just creating more debt down the road\"\n\nIn the most inflationary era of capabilities we've seen yet, it could be the right move. What's debt when in a matter of months you'll be able to clear it in one shot?"
}
,
{
"id": "46523123",
"text": "- I cloned a project from GitHub and made some minor modifications.\n\n- I used AI-assisted programming to create a project.\n\nEven if the content is identical, or if the AI is smart enough to replicate the project by itself, the latter can be included on a CV."
}
,
{
"id": "46523446",
"text": "I think I would prefer the former if I were reviewing a CV. It at least tells me they understood the code well enough to know where to make their minor tweaks. (I've spent hours reading through a repo to know where to insert/comment out a line to suit my needs.) The second tells me nothing."
}
,
{
"id": "46524741",
"text": "Its odd you don't apply the same analysis to each. The latter certainly can provide a similar trail indicating knowledge of the use case and necessary parameters to achieve it. And certainly the former doesnt preclude llm interlocking."
}
,
{
"id": "46526024",
"text": "Why do you write like that?"
}
,
{
"id": "46533786",
"text": "It would help if I had a better understanding of what you mean by \"that\".\n\nI generally write to liberate my consciousness from isolation. When doing so in a public forum I am generally doing so in response to an assertion. When responding to an assertion I am generally attempting to understand the framing which produced the assertion.\n\nI suppose you may also be speaking to the voice which is emergent. I am not very well read, so you may find my style unconventional or sloppy. I generally try not to labor too much in this regard and hope this will develop as I continue to write.\n\nI am receptive to any feedback you have for me."
}
,
{
"id": "46527099",
"text": "Do people really see a CV and read \"computer mommy made me a program\" and think it's impressive"
}
]
</comments_to_classify>
Based on the comments above, assign each to up to 3 relevant topics.
Return ONLY a JSON array with this exact structure (no other text):
[
{
"id": "comment_id_1",
"topics": [
1,
3,
5
]
}
,
{
"id": "comment_id_2",
"topics": [
2
]
}
,
{
"id": "comment_id_3",
"topics": [
0
]
}
,
...
]
Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment
Remember: Output ONLY the JSON array, no other text.