Summarizer

LLM Input

llm/8632d754-c7a3-4ec2-977a-2733719992fa/batch-0-deed3a87-178d-4235-b83a-47f7039c32e4-input.json

prompt

The following is content for you to classify. Do not respond to the comments—classify them.

<topics>
1. Determinism vs. Probabilistic Output
   Related: Comparisons between compilers (deterministic, reliable) and LLMs (probabilistic, 'fuzzy'). Users debate whether 100% correctness is required for tools, with some arguing that LLMs are fundamentally different from traditional automation because they lack a 'ground truth' logic, while others argue that error rates are acceptable if the utility is high enough.
2. The Code Review Bottleneck
   Related: Concerns that generating code faster merely shifts the bottleneck to reviewing code, which is often harder and more time-consuming than writing it. Users discuss the cognitive load of verifying 'vibe code' and the risks of blindly trusting output that looks correct but contains subtle bugs or security flaws.
3. Erosion of Programming Skills
   Related: Fears that relying on AI causes developers to lose fundamental skills ('use it or lose it'), such as forgetting syntax for frameworks like RSpec. Users discuss the value of the 'Stare'—deep mental simulation of problems—and whether outsourcing thinking to machines degrades human expertise and the ability to solve novel problems without assistance.
4. Financial Barriers and Costs
   Related: Discussions about the high cost of running continuous agents (potentially hundreds of dollars a month), with some noting that the author's wealth (as a billionaire/founder) biases his perspective on affordability. Users question whether the productivity gains justify the expense for average developers or if this creates a divide based on access to compute.
5. Agentic Workflows and Harnessing
   Related: Technical strategies for controlling AI behavior, such as 'harness engineering,' using AGENTS.md files to document rules and prevent regressions, and setting up feedback loops where agents run tests to verify their own work. This includes moving beyond simple chatbots to autonomous background processes that triage issues or perform research.
6. Safety and Sandboxing
   Related: Practical concerns about giving AI agents shell access or file system permissions. Users discuss the risks of agents accidentally 'nuking' systems, installing unwanted dependencies, or running dangerous commands, and recommend solutions like running agents in containers, VMs, or using specific sandboxing tools like Leash to limit blast radius.
7. Environmental Impact
   Related: Reactions to the author's suggestion to 'always have an agent running,' with users expressing alarm at the potential energy consumption and environmental cost of millions of developers running constant background inference tasks for marginal productivity gains, described by some as 'cooking the planet.'
8. Architects vs. Builders Analogy
   Related: Extensive debate using construction analogies to describe the shift in the developer's role. Comparisons are made between architects (who design and delegate) and builders, with arguments about whether AI users are 'vibe architects' who don't understand the materials, or professional engineers utilizing modern equivalents of CAD software and heavy machinery.
9. AI as Junior Developers
   Related: The characterization of AI agents as an infinite supply of 'slightly drunken new college grads' or interns who are fast and cheap but require constant supervision. Users discuss the ratio of senior engineer time needed to review AI output and the lack of a path for these 'AI juniors' to ever become seniors.
10. Trust and Hallucination Risks
   Related: Skepticism regarding the reliability of AI, highlighted by examples like 'wind-powered cars' or bad recipes. Users argue that because LLMs predict tokens rather than understanding physics or logic, they are 'confidently stupid' and require expert humans to filter out hallucinations, making them dangerous for those lacking deep domain knowledge.
11. Productivity vs. Inefficiency
   Related: Debates over whether AI actually saves time or just feels productive. Some cite studies suggesting productivity drops (e.g., 19%), while others argue that the efficiency comes from parallelizing tasks or handling boilerplate. Users critique the lack of hard metrics in the article and the reliance on 'feeling' more efficient.
12. Corporate Process vs. Individual Flow
   Related: The distinction between individual productivity gains (solopreneurs, solo projects) and organizational reality. Users note that while AI speeds up coding, it doesn't solve organizational bottlenecks like meetings, cross-team coordination, or gathering requirements, limiting its revolutionary impact on large enterprises compared to solo work.
13. Spec Writing as the New Coding
   Related: The idea that working with agents shifts the primary task from writing syntax to writing detailed specifications and prompts. Users note that AI forces developers to be more explicit about requirements, effectively turning English specs into the source code, though some argue this is just a verbose and nondeterministic programming language.
14. Hype Cycles and Model Churn
   Related: Frustration with the rapid pace of change in the AI landscape ('honeymoon phase'). Users complain about building workflows around a specific model only for it to change or degrade ('drift') in the next update, leading to a constant need to relearn prompt engineering and tooling idiosyncrasies.
15. Local Models vs. Cloud Privacy
   Related: Concerns about uploading proprietary source code to cloud providers like Anthropic or OpenAI. Users discuss the trade-offs between using superior cloud models (Claude Code) versus privacy-preserving local models (OpenCode) or self-hosted solutions, and the difficulty of trusting AI companies with sensitive intellectual property.
0. Does not fit well in any category
</topics>

<comments_to_classify>
[
  
{
  "id": "46906201",
  "text": "This is such a lovely balanced thoughtful refreshingly hype-free post to read. 2025 really was the year when things shifted and many first-rate developers (often previously AI skeptics, as Mitchell was) found the tools had actually got good enough that they could incorporate AI agents into their workflows.\n\nIt's a shame that AI coding tools have become such a polarizing issue among developers. I understand the reasons, but I wish there had been a smoother path to this future. The early LLMs like GPT-3 could sort of code enough for it to look like there was a lot of potential, and so there was a lot of hype to drum up investment and a lot of promises made that weren't really viable with the tech as it was then. This created a large number of AI skeptics (of whom I was one, for a while) and a whole bunch of cynicism and suspicion and resistance amongst a large swathe of developers. But could it have been different? It seems a lot of transformative new tech is fated to evolve this way. Early aircraft were extremely unreliable and dangerous and not yet worthy of the promises being made about them, but eventually with enough evolution and lessons learned we got the Douglas DC-3, and then in the end the 747.\n\nIf you're a developer who still doesn't believe that AI tools are useful, I would recommend you go read Mitchell's post, and give Claude Code a trial run like he did. Try and forget about the annoying hype and the vibe-coding influencers and the noise and just treat it like any new tool you might put through its paces. There are many important conversations about AI to be had, it has plenty of downsides, but a proper discussion begins with close engagement with the tools."
}
,
  
{
  "id": "46907458",
  "text": "Architects went from drawing everything on paper, to using CAD products over a generation. That's a lot of years! They're still called architects.\n\nOur tooling just had a refresh in less than 3 years and it leaves heads spinning. People are confused, fighting for or against it. Torn even between 2025 to 2026. I know I was.\n\nPeople need a way to describe it from 'agentic coding' to 'vibe coding' to 'modern AI assisted stack'.\n\nWe don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work!\n\nWe don't call builders 'vibe builders' for using earth-moving machines instead of a shovel...\n\nWhen was the last time you reviewed the machine code produced by a compiler? ...\n\nThe real issue this industry is facing, is the phenomenal speed of change. But what are we really doing? That's right, programming."
}
,
  
{
  "id": "46908906",
  "text": "\"When was the last time you reviewed the machine code produced by a compiler?\"\n\nCompilers will produce working output given working input literally 100% of my time in my career. I've never personally found a compiler bug.\n\nMeanwhile AI can't be trusted to give me a recipe for potato soup. That is to say, I would under no circumstances blindly follow the output of an LLM I asked to make soup. While I have, every day of my life, gladly sent all of the compiler output to the CPU without ever checking it.\n\nThe compiler metaphor is simply incorrect and people trying to say LLMs compile English into code insult compiler devs and English speakers alike."
}
,
  
{
  "id": "46909268",
  "text": "> Compilers will produce working output given working input literally 100% of my time in my career.\n\nIn my experience this isn't true. People just assume their code is wrong and mess with it until they inadvertently do something that works around the bug. I've personally reported 17 bugs in GCC over the last 2 years and there are currently 1241 open wrong-code bugs.\n\nHere's an example of a simple to understand bug (not mine) in the C frontend that has existed since GCC 4.7: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105180"
}
,
  
{
  "id": "46910123",
  "text": "These are still deterministic bugs, which is the point the OP was making. They can be found and solved once. Most of those bugs are simply not that important, so they never get attention.\n\nLLMs, on the other hand, are non-deterministic, unpredictable, and fuzzy by design. That makes them not ideal when trying to produce output which is provably correct - sure, you can generate and then laboriously check the output - some people find that useful, some are yet to find it useful.\n\nIt's a little like using Bitcoin to replace currencies - sure you can do that, but it includes design flaws which make it fundamentally unsuited to doing so. 10 years ago we had rabid defenders of these currencies telling us they would soon take over the global monetary system and replace it, nowadays, not so much."
}
,
  
{
  "id": "46911199",
  "text": "> It's a little like using Bitcoin to replace currencies [...]\n\nAt least, Bitcoin transactions are deterministic.\n\nNot many would want to use an AI currency (mostly works; always shows \"Oh, you are 100% right\" after losing one's money)."
}
,
  
{
  "id": "46913855",
  "text": "Sure, bitcoin is at least deterministic, but IMO (and that of many in the finance industry) it's solving entirely the wrong problem - in practice people want trust and identity in transactions much more than they want distributed and trustless.\n\nIn a similar way LLMs seem to me to be solving the wrong problem - an elegant and interesting solution, but a solution to the wrong problem (how can I fool humans into thinking the bot is generally intelligent), rather than the right problem (how can I create a general intelligence with knowledge of the world). It's not clear to me we can jump from the first to the second."
}
,
  
{
  "id": "46913166",
  "text": "> I've personally reported 17 bugs in GCC over the last 2 years\n\nYou are an extreme outlier. I know about two dozen people who work with C(++) and not a single one of them has ever told me that they've found a compiler bug when we've talked about coding and debugging - it's been exclusively them describing PEBCAK."
}
,
  
{
  "id": "46913707",
  "text": "I knew one person reporting gcc bugs, and iirc those were all niche scenarios where it generated slightly suboptimal machine code but not otherwise observable from behavior"
}
,
  
{
  "id": "46913761",
  "text": "Right - I'm not saying that it doesn't happen, but that it's highly unusual for the majority of C(++) developers, and that some bugs are \"just\" suboptimal code generation (as opposed to functional correctness, which the GP was arguing)."
}
,
  
{
  "id": "46910173",
  "text": "This argument is disingenuous and distracts rather than addresses the point.\n\nYes, it is possible for a compiler to have a bug. No, that is in no way analogous to AI producing buggy code.\n\nI’ve experienced maybe two compiler bugs in my twenty year career. I have experienced countless AI mistakes - hundreds? Thousands? Already.\n\nThese are not the same and it has the whiff of sales patter trying to address objections. Please stop."
}
,
  
{
  "id": "46911208",
  "text": "I'm not arguing that LLMs are at a point today where we can blindly trust their outputs in most applications, I just don't think that 100% correct output is necessarily a requirement for that. What it needs to be is correct often enough that the cost of reviewing the output far outweighs the average cost of any errors in the output, just like with a compiler.\n\nThis even applies to human written code and human mistakes, as the expected cost of errors goes up we spend more time on having multiple people review the code and we worry more about carefully designing tests."
}
,
  
{
  "id": "46911696",
  "text": "If natural language is used to specify work to the LLM, how can the output ever be trusted? You'll always need to make sure the program does what you want, rather than what you said."
}
,
  
{
  "id": "46912453",
  "text": "Just create a very specific and very detailed prompt that is so specific that it starts including instructions and you came up with the most expensive programming language."
}
,
  
{
  "id": "46913427",
  "text": "You trust your natural language instructions a thousand times a day. If you ask for a large black coffee, you can trust that is more or less what you’ll get. Occasionally you may get something so atrocious that you don’t dare to drink it, but generally speaking you trust the coffee shop knows what you want. If you insist on a specific amount of coffee brewed at a specific temperature, however, you need tools to measure.\n\nAI tools are similar. You can trust them because they are good enough, and you need a way (testing) to make sure what is produced meets your specific requirements. Of course they may fail for you; that doesn’t mean they aren’t useful in other cases.\n\nAll of that is simply common sense."
}
,
  
{
  "id": "46911679",
  "text": "The challenge not addressed with this line of reasoning is the required sheer scale of output validation on the backend of LLM-generated code. Human hand-developed code was no great shakes on the validation front either, but the scale difference hid this problem.\n\nI’m hopeful what used to be tedious about the software development process (like correctness proving or documentation) becomes tractable enough with LLMs to make the scale more manageable for us. That’s exciting to contemplate; think of the complexity categories we can feasibly challenge now!"
}
,
  
{
  "id": "46912227",
  "text": "the fact that the bug tracker exists is proving GP's point."
}
,
  
{
  "id": "46911820",
  "text": "Right, now what would you say is the probability of getting a bug in compiler output vs ai output?\n\nIt's a great tool, once it matures."
}
,
  
{
  "id": "46912271",
  "text": "Absolutely this. I am tired of that trope.\n\nOr the argument that \"well, at some point we can come up with a prompt language that does exactly what you want and you just give it a detailed spec.\" A detailed spec is called code. It's the most round-about way to make a programming language that even then is still not deterministic at best."
}
,
  
{
  "id": "46913459",
  "text": "> Meanwhile AI can't be trusted to give me a recipe for potato soup.\n\nThis just isn't true any more. Outside of work, my most common use case for LLMs is probably cooking. I used to frequently second guess them, but no longer - in my experience SOTA models are totally reliable for producing good recipes.\n\nI recognize that at a higher level we're still talking about probabilistic recipe generation vs. deterministic compiler output, but at this point it's nonetheless just inaccurate to act as though LLMs can't be trusted with simple (e.g. potato soup recipe) tasks."
}
,
  
{
  "id": "46913341",
  "text": "Compilers and processors are deterministic by design. LLMs are non-deterministic by design.\n\nIt's not apples vs. oranges. They are literally opposite of each other."
}
,
  
{
  "id": "46909706",
  "text": "This is obviously besides the point but I did blindly follow a wiener schnitzel recipe ChatGPT made me and cooked for a whole crew. It turned out great. I think I got lucky though, the next day I absolutely massacred the pancakes."
}
,
  
{
  "id": "46911358",
  "text": "Recent experiments with LLM recipes (ChatGPT): missed salt in a recipe to make rice, then flubbed whether that type of rice was recommended to be washed in the recipe it was supposedly summarizing (and lied about it, too)…\n\nProbabilistic generation will be weighted towards the means in the training data. Do I want my code looking like most code most of the time in a world full of Node.js and PHP? Am I better served by rapid delivery from a non-learning algorithm that requires eternal vigilance and critical re-evaluation, or by slower delivery with a single review filtered through a meatspace actor who will build out trustable modules in a linear fashion with known failure modes already addressed by process (i.e. TDD, specs, integration & acceptance tests)?\n\nI’m using LLMs a lot, but can’t shake the feeling that the TCO and total time shakes out worse than it feels as you go."
}
,
  
{
  "id": "46911711",
  "text": "There was a guy a few months ago who found that telling the AI to do everything in a single PHP file actually produced significantly better results, i.e. it worked on the first try. Otherwise it defaulted to React, 1GB of node modules, and a site that wouldn't even load.\n\n>Am I better served\n\nFor anything serious, I write the code \"semi-interactively\", i.e. I just prompt and verify small chunks of the program in rapid succession. That way I keep my mental model synced the whole time, I never have any catching up to do, and honestly it just feels good to stay in the driver's seat."
}
,
  
{
  "id": "46909869",
  "text": "I genuinely admire your courage and willingness (or perhaps just chaos energy) to attempt both wiener schnitzel and pancakes for a crew, based on AI recipes, despite clearly limited knowledge of either."
}
,
  
{
  "id": "46910718",
  "text": "Everything more complex than a hello-world has bugs. Compiler bugs are uncommon, but not that uncommon. (I must have debugged a few ICEs in my career, but luckily have had more skilled people to rely on when code generation itself was wrong.)\n\nCompilers aren't even that bad. The stack goes much deeper and during your career you may be (un)lucky enough to find yourself far below compilers: https://bostik.iki.fi/aivoituksia/random/developer-debugging...\n\nNB. I've been to vfs/fs depths. A coworker relied on an oscilloscope quite frequently."
}
,
  
{
  "id": "46911143",
  "text": "I had a fun bug while building a smartwatch app that was caused by the sample rate of the accelerometer increasing when the device heated up. I had code that was performing machine learning on the accelerometer data, which would mysteriously get less accurate during prolonged operation. It turned out that we gathered most of our training data during shorter runs when the device was cool, and when the device heated up during extended use, it changed the frequencies of the recorded signals enough to throw off our model.\n\nI've also used a logic analyzer to debug communications protocols quite a few times in my career, and I've grown to rather like that sort of work, tedious as it may be.\n\nJust this week I built a VFS using FUSE and managed to kernel panic my Mac a half-dozen times. Very fun debugging times."
}
,
  
{
  "id": "46910464",
  "text": "”I've never personally found a compiler bug.”\n\nI remember the time I spent hours debugging a feature that worked on Solaris and Windows but failed to produce the right results on SGI. Turns out the SGI C++ compiler silently ignored the `throw` keyword! Just didn’t emit an opcode at all! Or maybe it wrote a NOP.\n\nAll I’m saying is, compilers aren’t perfect.\n\nI agree about determinism though. And I mitigate that concern by prompting AI assistants to write code that solves a problem, instead of just asking for a new and potentially different answer every time I execute the app."
}
,
  
{
  "id": "46909509",
  "text": "I'm trying to track down a GCC miscompilation right now ;)"
}
,
  
{
  "id": "46910567",
  "text": "I feel for you :D"
}
,
  
{
  "id": "46909673",
  "text": "> Meanwhile AI can't be trusted to give me a recipe for potato soup.\n\nBecause there isn’t a canonical recipe for potato soup."
}
,
  
{
  "id": "46912310",
  "text": "There's also no canonical way to write software, so in that sense generating code is more similar to coming up with a potato soup recipe than compiling code."
}
,
  
{
  "id": "46909751",
  "text": "That is not the issue, any potato soup recipe would be fine, the issue is that it might fetch values from different recipes and give you an abomination."
}
,
  
{
  "id": "46909851",
  "text": "This exactly, I cook as passion, and LLMs just routinely very clearly (weighted) \"average\" together different recipes to produce, in the worst case, disgusting monstrosities, or, in the best case, just a near-replica of some established site's recipe."
}
,
  
{
  "id": "46909397",
  "text": "You're correct, and I believe this is only a matter of time. Over time it has been getting better and will keep doing so."
}
,
  
{
  "id": "46910335",
  "text": "It won’t be deterministic."
}
,
  
{
  "id": "46909485",
  "text": "Maybe. But it's been 3 years and it still isn't good enough to actually trust. That doesn't raise confidence that it will ever get there."
}
,
  
{
  "id": "46909764",
  "text": "You need to put this revolution in scale with other revolutions.\n\nHow long did it take for horses to be super-seeded by cars?\n\nHow long did powertool take to become the norm for tradesmen?\n\nThis has gone unbelievably fast."
}
,
  
{
  "id": "46910144",
  "text": "I think things can only be called revolutions in hindsight - while they are going on it's hard to tell if they are a true revolution, an evolution or a dead-end. So I think it's a little premature to call Generative AI a revolution.\n\nAI will get there and replace humans at many tasks, machine learning already has, I'm not completely sure that generative AI will be the route we take, it is certainly superficially convincing, but those three years have not in fact seen huge progress IMO - huge amounts of churn and marketing versions yes, but not huge amounts of concrete progress or upheaval. Lots of money has been spent for sure! It is telling for me that many of the real founders at OpenAI stepped away - and I don't think that's just Altman, they're skeptical of the current approach.\n\nPS Superseded."
}
,
  
{
  "id": "46911608",
  "text": ">super-seeded\n\nCute eggcorn there."
}
,
  
{
  "id": "46911844",
  "text": "> Compilers will produce working output given working input literally 100% of my time in my career. I've never personally found a compiler bug.\n\nFirst compilers were created in the fifties. I doubt those were bug-free.\n\nGive LLMs some fifty or so years, then let's see how (un)reliable they are."
}
,
  
{
  "id": "46908202",
  "text": "> We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work!\n\n> We don't call builders 'vibe builders' for using earth-moving machines instead of a shovel...\n\n> When was the last time you reviewed the machine code produced by a compiler?\n\nSure, because those are categorically different. You are describing shortcuts of two classes: boilerplate (library of things) and (deterministic/intentional) automation. Vibe coding doesn't use either of those things. The LLM agents involved might use them, but the vibe coder doesn't.\n\nVibe coding is delegation, which is a completely different class of shortcut or \"tool\" use. If an architect delegates all their work to interns, directs outcomes based on whims not principles, and doesn't actually know what the interns are delivering, yeah, I think it would be fair to call them a vibe architect.\n\nWe didn't have that term before, so we usually just call those people \"arrogant pricks\" or \"terrible bosses\". I'm not super familiar but I feel like Steve Jobs was pretty famously that way - thus if he was an engineer, he was a vibe engineer. But don't let this last point detract from the message, which is that you're describing things which are not really even similar to vibe coding."
}
,
  
{
  "id": "46910102",
  "text": "I think you are right in placing emphasis on delegation.\n\nThere’s been a hypothesis floating around that I find appealing. Seemingly you can identify two distinct groups of experienced engineers. Manager, delegator, or team lead style senior engineers are broadly pro-AI. The craftsman, wizard, artist, IC style senior engineers are broadly anti-AI.\n\nBut coming back to architects, or most professional services and academia to be honest, I do think the term vibe architect as you define it is exactly how the industry works. An underclass of underpaid interns and juniors do the work, hoping to climb higher and position themselves towards the top of the ponzi-like pyramid scheme."
}
,
  
{
  "id": "46912892",
  "text": "Architects still need to learn to draw manually quite well to pass exams and stuff."
}
,
  
{
  "id": "46911597",
  "text": "> We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work!\n\nArchitects' copy-pasting is equivalent to a software developer reusing a tried and tested code library. Generating or writing new code is fundamentally different and not at all comparable.\n\n> We don't call builders 'vibe builders' for using earth-moving machines instead of a shovel...\n\nWe would call them \"vibe builders\" if their machines threw bricks around randomly and the builders focused all of their time on engineering complex scaffolding around the machines to get the bricks flying roughly in the right direction.\n\nBut we don't, because their machines, like our compilers and linters, do one job and they do it predictably. Most trades spend obscene amounts of money on tools that produce repeatable results.\n\n> That's a lot of years! They're still called architects.\n\nBecause they still architect; they don't subcontract their core duties to architecture students overseas and just sign their name under it.\n\nI find it fitting and amusing that people who are uncritical towards the quality of LLM-generated work seem to make the same sorts of reasoning errors that LLMs do. Something about blind spots?"
}
,
  
{
  "id": "46911694",
  "text": "Very likely, yes. One day we'll have a clearer understanding of how minds generalize concepts into well-trodden paths even when they're erroneous, and it'll probably shed a lot of light onto concepts like addiction."
}
,
  
{
  "id": "46910207",
  "text": "Reasoning by analogy is usually a bad idea, and nowhere is this worse than talking about software development.\n\nIt’s just not analogous to architecture, or cooking, or engineering. Software development is just its own thing. So you can’t use analogy to get yourself anywhere with a hint of rigour.\n\nThe problem is, AI is generating code that may be buggy, insecure, and unmaintainable. We have as a community spent decades trying to avoid producing that kind of code. And now we are being told that productivity gains mean we should abandon those goals and accept poor quality, as evidenced by MoltBook’s security problems.\n\nIt’s a weird cognitive dissonance and it’s still not clear how this gets resolved."
}
,
  
{
  "id": "46911714",
  "text": "Now then, Moltbook is a pathological case. Either it remains a pathological case or our whole technological world is gonna stumble HARD as all the fundamental things collapse.\n\nI prefer to think Moltbook is a pathological case and unrepresentative, but I've also been rethinking a sort of game idea from computer-based to entirely paper/card based (tariffs be damned) specifically for this reason. I wish to make things that people will have even in the event that all these nice blinky screens are ruined and go dark."
}
,
  
{
  "id": "46912728",
  "text": "Just the first system that was coded by AI that I could think of. Note this is unrelated to the fact that its users are LLMs - the problem was in the development of Moltbook itself."
}
,
  
{
  "id": "46908313",
  "text": "Don't take this as criticizing LLMs as a whole, but architects also don't call themselves engineers. Engineers are an entirely distinct set of roles that among other things validate the plan in its totality, not only the \"new\" 1/5th. Our job spans both of these.\n\n\"Architect\" is actually a whole career progression of people with different responsibilities. The bottom rung used to be the draftsmen, people usually without formal education who did the actual drawing. Then you had the juniors, mid-levels, seniors, principals, and partners who each oversaw different aspects. The architects with their name on the building were already issuing high level guidance before the transition instead of doing their own drawings.\n\nWhen was the last time you reviewed the machine code produced by a compiler?\n\nLast week, to sanity check some code written by an LLM."
}

]
</comments_to_classify>

Based on the comments above, assign each comment to up to 3 relevant topics.

Return ONLY a JSON array with this exact structure (no other text):
[
  {
    "id": "comment_id_1",
    "topics": [1, 3, 5]
  },
  {
    "id": "comment_id_2",
    "topics": [2]
  },
  {
    "id": "comment_id_3",
    "topics": [0]
  },
  ...
]

Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment

Remember: Output ONLY the JSON array, no other text.

commentCount

50
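The output contract the prompt specifies (a JSON array of objects with a string `id` and 0 to 3 integer topic indices in 0–15) can be checked mechanically before batch results are ingested. A minimal validator sketch — function and variable names here are illustrative, not part of this pipeline:

```python
import json

# 0 = "does not fit"; 1-15 are the topic indices defined in the prompt.
VALID_TOPICS = set(range(0, 16))

def validate_classification(raw: str) -> list[dict]:
    """Parse a model reply and enforce the prompt's output rules:
    a JSON array of {"id": str, "topics": [int, ...]} objects,
    each carrying 0 to 3 topic indices from the defined range."""
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError("top level must be a JSON array")
    for item in data:
        if not isinstance(item.get("id"), str):
            raise ValueError("id must be a string")
        topics = item.get("topics")
        if not isinstance(topics, list) or len(topics) > 3:
            raise ValueError("each comment takes 0 to 3 topics")
        if not all(isinstance(t, int) and t in VALID_TOPICS for t in topics):
            raise ValueError("topic index out of range")
    return data

# Example: a well-formed reply covering two of the comment ids above.
reply = '[{"id": "46906201", "topics": [1, 10]}, {"id": "46909509", "topics": [0]}]'
result = validate_classification(reply)
```

Rejecting malformed replies up front (rather than at aggregation time) makes it easy to retry a single failed batch.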
