llm/065c6e83-d0d5-4aca-be3d-92768a8a3506/batch-4-145dbbb2-6115-4c6b-9253-4761d116ec5a-input.json
The following is content for you to classify. Do not respond to the comments—classify them.
<topics>
1. Not Novel or Revolutionary
Related: Many commenters argue this workflow is standard practice, not radically different. References to existing tools like Kiro, OpenSpec, SpecKit, and Antigravity that already implement spec-driven development. Claims the approach was documented 2+ years ago in Cursor forums.
2. LLMs as Junior Developers
Related: Analogy comparing LLMs to unreliable interns with boundless energy. Discussion of treating AI like junior developers requiring supervision, documentation, and oversight. The shift from coder to software manager role.
3. AI-Generated Article Concerns
Related: Multiple commenters suspect the article itself was written by AI, noting characteristic style and patterns. Debate about whether AI-written content should be evaluated differently or dismissed outright.
4. Magic Words and Prompt Engineering
Related: Skepticism about whether words like 'deeply' and 'in great details' actually affect LLM behavior. Discussion of attention mechanisms, emotional prompting research, and whether prompt techniques are superstition or cargo cult.
5. Planning vs Just Coding
Related: Debate about whether extensive planning overhead eliminates time savings. Some argue writing specs takes longer than writing code. Others counter that planning prevents compounding errors and technical debt.
6. Spec-Driven Development Tools
Related: References to existing frameworks: OpenSpec, SpecKit, BMAD-METHOD, Kiro, Antigravity. Discussion of how these tools formalize the research-plan-implement workflow described in the article.
7. Context Window Management
Related: Strategies for handling large codebases and context limits. Maintaining markdown files for subsystems, using skills, aggressive compaction. Concerns about context rot and performance degradation.
8. Waterfall Methodology Comparison
Related: Commenters note the approach resembles waterfall development with detailed upfront planning. Discussion of whether this contradicts agile principles or represents rediscovering proven methods.
9. Test-Driven Development Integration
Related: Suggestions to add comprehensive tests to the workflow. Writing tests before implementation, using tests as verification. Arguments that test coverage enables safer refactoring with AI.
10. Single Session vs Multiple Sessions
Related: Author's claim of running entire workflows in single long sessions without performance degradation. Others recommend clearing context between phases for better results.
11. Determinism and Reproducibility
Related: Concerns about non-deterministic LLM outputs. Discussion of whether software engineering can accommodate probabilistic tools. Comparisons to gambling and slot machines.
12. Token Cost Considerations
Related: Discussion of workflow being token-heavy and expensive. Comparisons between Claude subscription tiers. Arguments that simpler approaches save money while achieving similar results.
13. Annotation Workflow Details
Related: Questions about how to format inline annotations for Claude to recognize. Techniques like TODO prefixes, HTML comments, and clear separation between human and AI-written content.
14. Subagent Architecture
Related: Using multiple agents for different phases: planning, implementation, review. Red team/blue team approaches. Dispatching parallel agents for independent tasks.
15. Reference Implementation Technique
Related: Using existing code from open source projects as examples for Claude. Questions about licensing implications. Claims this dramatically improves output quality.
16. Claude vs Other Models
Related: Comparisons between Claude, Codex, Gemini, and other models. Discussion of model-specific behaviors and optimal prompting strategies. Using multiple models in complementary roles.
17. Greenfield vs Existing Codebases
Related: Observation that most AI coding articles focus on greenfield development. Different challenges when working with legacy code and established patterns.
18. Human Review Requirements
Related: Debate about whether all AI-generated code must be reviewed line-by-line. Questions about trust, liability, and whether AI can eventually be trusted without oversight.
19. Productivity Claims Skepticism
Related: Questions about actual time savings versus perceived productivity. References to studies showing AI sometimes makes developers less productive. Concerns about false progress.
20. Documentation as Side Benefit
Related: Plans and research documents serve as valuable documentation for future maintainers. Version controlling plan files in git. Using plans to understand architectural decisions later.
0. Does not fit well in any category
</topics>
<comments_to_classify>
[
{
"id": "47111524",
"text": "I've been running AI coding workshops for engineers transitioning from traditional development, and the research phase is consistently the part people skip — and the part that makes or breaks everything.\n\nThe failure mode the author describes (implementations that work in isolation but break the surrounding system) is exactly what I see in workshop after workshop. Engineers prompt the LLM with \"add pagination to the list endpoint\" and get working code that ignores the existing query builder patterns, duplicates filtering logic, or misses the caching layer entirely.\n\nWhat I tell people: the research.md isn't busywork, it's your verification that the LLM actually understands the system it's about to modify. If you can't confirm the research is accurate, you have no business trusting the plan.\n\nOne thing I'd add to the author's workflow: I've found it helpful to have the LLM explicitly list what it does NOT know or is uncertain about after the research phase. This surfaces blind spots before they become bugs buried three abstraction layers deep."
}
,
{
"id": "47109805",
"text": "I don’t use plan.md docs either, but I recognise the underlying idea: you need a way to keep agent output constrained by reality.\n\nMy workflow is more like scaffold -> thin vertical slices -> machine-checkable semantics -> repeat.\n\nConcrete example: I built and shipped a live ticketing system for my club (Kolibri Tickets). It’s not a toy: real payments (Stripe), email delivery, ticket verification at the door, frontend + backend, migrations, idempotency edges, etc. It’s running and taking money.\n\nThe reason this works with AI isn’t that the model “codes fast”. It’s that the workflow moves the bottleneck from “typing” to “verification”, and then engineers the verification loop:\n\n-keep the spine runnable early (end-to-end scaffold)\n\n-add one thin slice at a time (don’t let it touch 15 files speculatively)\n\n-force checkable artifacts (tests/fixtures/types/state-machine semantics where it matters)\n\n-treat refactors as normal, because the harness makes them safe\n\nIf you run it open-loop (prompt -> giant diff -> read/debug), you get the “illusion of velocity” people complain about. If you run it closed-loop (scaffold + constraints + verifiers), you can actually ship faster because you’re not paying the integration cost repeatedly.\n\nPlan docs are one way to create shared state and prevent drift. A runnable scaffold + verification harness is another."
}
,
{
"id": "47109988",
"text": "Now that code is cheap, I ensured my side project has unit/integration tests (will enforce 100% coverage), Playwright tests, static typing (its in Python), scripts for all tasks. Will learn mutation testing too (yes, its overkill). Now my agent works upto 1 hour in loops and emits concise code I dont have to edit much."
}
,
{
"id": "47107999",
"text": "I actually don't really like a few of things about this approach.\n\nFirst, the \"big bang\" write it all at once. You are going to end up with thousands of lines of code that were monolithically produced. I think it is much better to have it write the plan and formulate it as sensible technical steps that can be completed one at a time. Then you can work through them. I get that this is not very \"vibe\"ish but that is kind of the point. I want the AI to help me get to the same point I would be at with produced code AND understanding of it, just accelerate that process. I'm not really interested in just generating thousands of lines of code that nobody understands.\n\nSecond, the author keeps refering to adjusting the behaviour, but never incorporating that into long lived guidance. To me, integral with the planning\nprocess is building an overarching knowledge base. Every time you're telling it\nthere's something wrong, you need to tell it to update the knowledge base about\nwhy so it doesn't do it again.\n\nFinally, no mention of tests? Just quick checks? To me, you have to end up with\ncomprehensive tests. Maybe to the author it goes without saying, but I find it is\nintegral to build this into the planning. Certain stages you will want certain\ntypes of tests. Some times in advance of the code (so TDD style) other times\nbuilt alongside it or after.\n\nIt's definitely going to be interesting to see how software methodology evolves\nto incorporate AI support and where it ultimately lands."
}
,
{
"id": "47108066",
"text": "The articles approach matches mine, but I've learned from exactly the things you're pointing out.\n\nI get the PLAN.md (or equivalent) to be separated into \"phases\" or stages, then carefully prompt (because Claude and Codex both love to \"keep going\") it to only implement that stage, and update the PLAN.md\n\nTests are crucial too, and form another part of the plan really. Though my current workflow begins to build them later in the process than I would prefer..."
}
,
{
"id": "47107077",
"text": "This all looks fine for someone who can't code, but for anyone with even a moderate amount of experience as a developer all this planning and checking and prompting and orchestrating is far more work than just writing the code yourself.\n\nThere's no winner for \"least amount of code written regardless of productivity outcomes.\", except for maybe Anthropic's bank account."
}
,
{
"id": "47107146",
"text": "I really don't understand why there are so many comments like this.\n\nYesterday I had Claude write an audit logging feature to track all changes made to entities in my app. Yeah you get this for free with many frameworks, but my company's custom setup doesn't have it.\n\nIt took maybe 5-10 minutes of wall-time to come up with a good plan, and then ~20-30 min for Claude implement, test, etc.\n\nThat would've taken me at least a day, maybe two. I had 4-5 other tasks going on in other tabs while I waited the 20-30 min for Claude to generate the feature.\n\nAfter Claude generated, I needed to manually test that it worked, and it did. I then needed to review the code before making a PR. In all, maybe 30-45 minutes of my actual time to add a small feature.\n\nAll I can really say is... are you sure you're using it right? Have you _really_ invested time into learning how to use AI tools?"
}
,
{
"id": "47107199",
"text": "Same here. I did bounce off these tools a year ago. They just didn't work for me 60% of the time. I learned a bit in that initial experience though and walked away with some tasks ChatGPT could replace in my workflow. Mainly replacing scripts and reviewing single files or functions.\n\nFast forward to today and I tried the tools again--specifically Claude Code--about a week ago. I'm blown away. I've reproduced some tools that took me weeks at full-time roles in a single day. This is while reviewing every line of code. The output is more or less what I'd be writing as a principal engineer."
}
,
{
"id": "47109491",
"text": "> The output is more or less what I'd be writing as a principal engineer.\n\nI certainly hope this is not true, because then you're not competent for that role. Claude Code writes an absolutely incredible amount of unecessary and superfluous comments, it's makes asinine mistakes like forgetting to update logic in multiple places. It'll gladly drop the entire database when changing column formats, just as an example."
}
,
{
"id": "47110481",
"text": "I’m not sure what you're doing or if you’ve tried the tools recently but this isn’t even close to my experience."
}
,
{
"id": "47107247",
"text": "Trust me I'm very impressed at the progress AI has made, and maybe we'll get to the point where everything is 100% correct all the time and better than any human could write. I'm skeptical we can get there with the LLM approach though.\n\nThe problem is LLMs are great at simple implementation, even large amounts of simple implementation, but I've never seen it develop something more than trivial correctly. The larger problem is it's very often subtly but hugely wrong. It makes bad architecture decisions, it breaks things in pursuit of fixing or implementing other things. You can tell it has no concept of the \"right\" way to implement something. It very obviously lacks the \"senior developer insight\".\n\nMaybe you can resolve some of these with large amounts of planning or specs, but that's the point of my original comment - at what point is it easier/faster/better to just write the code yourself? You don't get a prize for writing the least amount of code when you're just writing specs instead."
}
,
{
"id": "47107360",
"text": "This is exactly what the article is about. The tradeoff is that you have to throughly review the plans and iterate on them, which is tiring. But the LLM will write good code faster than you, if you tell it what good code is."
}
,
{
"id": "47107698",
"text": "Exactly; the original commenter seems determined to write-off AI as \"just not as good as me\".\n\nThe original article is, to me, seemingly not that novel. Not because it's a trite example, but because I've begun to experience massive gains from following the same basic premise as the article. And I can't believe there's others who aren't using like this.\n\nI iterate the plan until it's seemingly deterministic, then I strip the plan of implementation, and re-write it following a TDD approach. Then I read all specs, and generate all the code to red->green the tests.\n\nIf this commenter is too good for that, then it's that attitude that'll keep him stuck. I already feel like my projects backlog is achievable, this year."
}
,
{
"id": "47107843",
"text": "Strongly agree about the deterministic part. Even more important than a good design, the plan must not show any doubt, whether it's in the form of open questions or weasel words. 95% of the time those vague words mean I didn't think something through, and it will do something hideous in order to make the plan work"
}
,
{
"id": "47110260",
"text": "My experience has so far been similar to the root commenter - at the stage where you need to have a long cycle with planning it's just slower than doing the writing + theory building on my own.\n\nIt's an okay mental energy saver for simpler things, but for me the self review in an actual production code context is much more draining than writing is.\n\nI guess we're seeing the split of people for whom reviewing is easy and writing is difficult and vice versa."
}
,
{
"id": "47109585",
"text": "Several months ago, just for fun, I asked Claude (the web site, not Claude Code) to build a web page with a little animated cannon that shoots at the mouse cursor with a ballistic trajectory. It built the page in seconds, but the aim was incorrect; it always shot too low. I told it the aim was off. It still got it wrong. I prompted it several times to try to correct it, but it never got it right. In fact, the web page started to break and Claude was introducing nasty bugs.\n\nMore recently, I tried the same experiment, again with Claude. I used the exact same prompt. This time, the aim was exactly correct. Instead of spending my time trying to correct it, I was able to ask it to add features. I've spent more time writing this comment on HN than I spent optimizing this toy. https://claude.ai/public/artifacts/d7f1c13c-2423-4f03-9fc4-8...\n\nMy point is that AI-assisted coding has improved dramatically in the past few months. I don't know whether it can reason deeply about things, but it can certainly imitate a human who reasons deeply. I've never seen any technology improve at this rate."
}
,
{
"id": "47109479",
"text": "> but I've never seen it develop something more than trivial correctly.\n\nWhat are you working on? I personally haven't seen LLMs struggle with any kind of problem in months. Legacy codebase with great complexity and performance-critical code. No issue whatsoever regardless of the size of the task."
}
,
{
"id": "47107272",
"text": ">I've never seen it develop something more than trivial correctly.\n\nThis is 100% incorrect, but the real issue is that the people who are using these llms for non-trivial work tend to be extremely secretive about it.\n\nFor example, I view my use of LLMs to be a competitive advantage and I will hold on to this for as long as possible."
}
,
{
"id": "47107308",
"text": "The key part of my comment is \"correctly\".\n\nDoes it write maintainable code? Does it write extensible code? Does it write secure code? Does it write performant code?\n\nMy experience has been it failing most of these. The code might \"work\", but it's not good for anything more than trivial, well defined functions (that probably appeared in it's training data written by humans). LLMs have a fundamental lack of understanding of what they're doing, and it's obvious when you look at the finer points of the outcomes.\n\nThat said, I'm sure you could write detailed enough specs and provide enough examples to resolve these issues, but that's the point of my original comment - if you're just writing specs instead of code you're not gaining anything."
}
,
{
"id": "47107411",
"text": "I find “maintainable code” the hardest bias to let go of. 15+ years of coding and design patterns are hard to let go.\n\nBut the aha moment for me was what’s maintainable by AI vs by me by hand are on different realms. So maintainable has to evolve from good human design patterns to good AI patterns.\n\nSpecs are worth it IMO. Not because if I can spec, I could’ve coded anyway. But because I gain all the insight and capabilities of AI, while minimizing the gotchas and edge failures."
}
,
{
"id": "47108128",
"text": "> But the aha moment for me was what’s maintainable by AI vs by me by hand are on different realms. So maintainable has to evolve from good human design patterns to good AI patterns.\n\nHow do you square that with the idea that all the code still has to be reviewed by humans? Yourself, and your coworkers"
}
,
{
"id": "47108230",
"text": "I picture like semi conductors; the 5nm process is so absurdly complex that operators can't just peek into the system easily. I imagine I'm just so used to hand crafting code that I can't imagine not being able to peek in.\n\nSo maybe it's that we won't be reviewing by hand anymore? I.e. it's LLMs all the way down. Trying to embrace that style of development lately as unnatural as it feels. We're obv not 100% there yet but Claude Opus is a significant step in that direction and they keep getting better and better."
}
,
{
"id": "47108619",
"text": "Then who is responsible when (not if) that code does horrible things? We have humans to blame right now. I just don’t see it happening personally because liability and responsibility are too important"
}
,
{
"id": "47109134",
"text": "For some software, sure but not most.\n\nAnd you don’t blame humans anyways lol. Everywhere I’ve worked has had “blameless” postmortems. You don’t remove human review unless you have reasonable alternatives like high test coverage and other automated reviews."
}
,
{
"id": "47109954",
"text": "We still have performance reviews and are fired. There’s a human that is responsible.\n\n“It’s AI all the way down” is either nonsense on its face, or the industry is dead already."
}
,
{
"id": "47109351",
"text": "> But the aha moment for me was what’s maintainable by AI vs by me by hand are on different realms\n\nI don't find that LLMs are any more likely than humans to remember to update all of the places it wrote redundant functions. Generally far less likely, actually. So forgive me for treating this claim with a massive grain of salt."
}
,
{
"id": "47107715",
"text": "To answer all of your questions:\n\nyes, if I steer it properly.\n\nIt's very good at spotting design patterns, and implementing them. It doesn't always know where or how to implement them, but that's my job.\n\nThe specs and syntactic sugar are just nice quality of life benefits."
}
,
{
"id": "47107384",
"text": "You’d be building blocks which compound over time. That’s been my experience anyway.\n\nThe compounding is much greater than my brain can do on its own."
}
,
{
"id": "47111139",
"text": "> In all, maybe 30-45 minutes of my actual time to add a small feature\n\nWhy would this take you multiple days to do if it only took you 30m to review the code? Depends on the problem, but if I’m able to review something the time it’d take me to write it is usually at most 2x more worst case scenario - often it’s about equal.\n\nI say this because after having used these tools, most of the speed ups you’re describing come at the cost of me not actually understanding or thoroughly reviewing the code. And this is corroborated by any high output LLM users - you have to trust the agent if you want to go fast.\n\nWhich is fine in some cases! But for those of us who have jobs where we are personally responsible for the code, we can’t take these shortcuts."
}
,
{
"id": "47107569",
"text": "> Yesterday I had Claude write an audit logging feature to track all changes made to entities in my app. Yeah you get this for free with many frameworks, but my company's custom setup doesn't have it.\n\nBut did you truly think about such feature? Like guarantees that it should follow (like how do it should cope with entities migration like adding a new field) or what the cost of maintaining it further down the line. This looks suspiciously like drive-by PR made on open-source projects.\n\n> That would've taken me at least a day, maybe two.\n\nI think those two days would have been filled with research, comparing alternatives, questions like \"can we extract this feature from framework X?\", discussing ownership and sharing knowledge,.. Jumping on coding was done before LLMs, but it usually hurts the long term viability of the project.\n\nAdding code to a project can be done quite fast (hackatons,...), ensuring quality is what slows things down in any any well functioning team."
}
,
{
"id": "47107229",
"text": "I mean, all I can really say is... if writing some logging takes you one or two days, are you sure you _really_ know how to code?"
}
,
{
"id": "47107442",
"text": "Ever worked on a distributed system with hundreds of millions of customers and seemingly endless business requirements?\n\nSome things are complex."
}
,
{
"id": "47107238",
"text": "You're right, you're better than me!\n\nYou could've been curious and ask why it would take 1-2 days, and I would've happily told you."
}
,
{
"id": "47107349",
"text": "I'll bite, because it does seem like something that should be quick in a well-architected codebase. What was the situation? Was there something in this codebase that was especially suited to AI-development? Large amounts of duplication perhaps?"
}
,
{
"id": "47107450",
"text": "It's not particularly interesting.\n\nI wanted to add audit logging for all endpoints we call, all places we call the DB, etc. across areas I haven't touched before. It would have taken me a while to track down all of the touchpoints.\n\nGranted, I am not 100% certain that Claude didn't miss anything. I feel fairly confident that it is correct given that I had it research upfront, had multiple agents review, and it made the correct changes in the areas that I knew.\n\nAlso I'm realizing I didn't mention it included an API + UI for viewing events w/ pretty deltas"
}
,
{
"id": "47108874",
"text": "Well someone who says logging is easy never knows the difficulty of deciding \"what\" to log. And audit log is different beast altogether than normal logging"
}
,
{
"id": "47109155",
"text": "Audit logging is different than developer logging… companies will have entire teams dedicated to audit systems."
}
,
{
"id": "47107431",
"text": "We're not as good at coding as you , naturally."
}
,
{
"id": "47108452",
"text": "I'd find it deeply funny if the optimal vibe coding workflow continues to evolve to include more and more human oversight, and less and less agent autonomy, to the point where eventually someone makes a final breakthrough that they can save time by bypassing the LLM entirely and writing the code themselves. (Finally coming full circle.)"
}
,
{
"id": "47109395",
"text": "You mean there will be an invention to edit files directly instead of giving the specific code and location you want it to be written into the prompt?"
}
,
{
"id": "47107453",
"text": "Researching and planning a project is a generally usefully thing. This is something I've been doing for years, and have always had great results compared to just jumping in and coding. It makes perfect sense that this transfers to LLM use."
}
,
{
"id": "47107896",
"text": "Well it's less mental load. It's like Tesla's FSD. Am I a better driver than the FSD? For sure. But is it nice to just sit back and let it drive for a bit even if it's suboptimal and gets me there 10% slower, and maybe slightly pisses off the guy behind me? Yes, nice enough to shell out $99/mo. Code implementation takes a toll on you in the same way that driving does.\n\nI think the method in TFA is overall less stressful for the dev. And you can always fix it up manually in the end; AI coding vs manual coding is not either-or."
}
,
{
"id": "47107389",
"text": "Since Opus 4.5, things have changed quite a lot. I find LLMs very useful for discussing new features or ideas, and Sonnet is great for executing your plan while you grab a coffee."
}
,
{
"id": "47107117",
"text": "Most of these AI coding articles seem to be about greenfield development.\n\nThat said, if you're on a serious team writing professional software there is still tons of value in always telling AI to plan first, unless it's a small quick task. This post just takes it a few steps further and formalizes it.\n\nI find Cursor works much more reliably using plan mode, reviewing/revising output in markdown, then pressing build. Which isn't a ton of overhead but often leads to lots of context switching as it definitely adds more time."
}
,
{
"id": "47107214",
"text": "I partly agree with you. But once you have a codebase large enough, the changes become longer to even type in, once figured out.\n\nI find the best way to use agents (and I don't use claude) is to hash it out like I'm about to write these changes and I make my own mental notes, and get the agent to execute on it.\n\nAgents don't get tired, they don't start fat fingering stuff at 4pm, the quality doesn't suffer. And they can be parallelised.\n\nFinally, this allows me to stay at a higher level and not get bogged down of \"right oh did we do this simple thing again?\" which wipes some of the context in my mind and gets tiring through the day.\n\nAlways, 100% review every line of code written by an agent though. I do not condone committing code you don't 'own'.\n\nI'll never agree with a job that forces developers to use 'AI', I sometimes like to write everything by hand. But having this tool available is also very powerful."
}
,
{
"id": "47107269",
"text": "I want to be clear, I'm not against any use of AI. It's hugely useful to save a couple of minutes of \"write this specific function to do this specific thing that I could write and know exactly what it would look like\". That's a great use, and I use it all the time! It's better autocomplete. Anything beyond that is pushing it - at the moment! We'll see, but spending all day writing specs and double-checking AI output is not more productive than just writing correct code yourself the first time, even if you're AI-autocompleting some of it."
}
,
{
"id": "47107618",
"text": "For the last few days I've been working on a personal project that's been on ice for at least 6 years. Back when I first thought of the project and started implementing it, it took maybe a couple weeks to eke out some minimally working code.\n\nThis new version that I'm doing (from scratch with ChatGPT web) has a far more ambitious scope and is already at the \"usable\" point. Now I'm primarily solidifying things and increasing test coverage. And I've tested the key parts with IRL scenarios to validate that it's not just passing tests; the thing actually fulfills its intended function so far. Given the increased scope, I'm guessing it'd take me a few months to get to this point on my own, instead of under a week, and the quality wouldn't be where it is. Not saying I haven't had to wrangle with ChatGPT on a few bugs, but after a decent initial planning phase, my prompts now are primarily \"Do it\"s and \"Continue\"s. Would've likely already finished it if I wasn't copying things back and forth between browser and editor, and being forced to pause when I hit the message limit."
}
,
{
"id": "47107656",
"text": "This is a great come-back story. I have had a similar experience with a photoshop demake of mine.\n\nI recommend to try out Opencode with this approach, you might find it less tiring than ChatGPT web (yes it works with your ChatGPT Plus sub)."
}
,
{
"id": "47108865",
"text": "I think it comes down to \"it depends\". I work in a NIS2 regulated field and we're quite callenged by the fact that it means we can't give AI's any sort of real access because of the security risk. To be complaint we'd have to have the AI agent ask permission for every single thing it does, before it does it, and foureye review it. Which is obviously never going to happen. We can discuss how bad the NIS2 foureye requirement works in the real world another time, but considering how easy it is to break AI security, it might not be something we can actually ever use. This makes sense on some of the stuff we work on, since it could bring an entire powerplant down. On the flip-side AI risks would be of little concern on a lot of our internal tools, which are basically non-regulated and unimportant enough that they can be down for a while without costing the business anything beyond annoyances.\n\nThis is where our challenges are. We've build our own chatbot where you can \"build\" your own agent within the librechat framework and add a \"skill\" to it. I say \"skill\" because it's older than claude skills but does exactly the same. I don't completely buy the authors:\n\n> “deeply”, “in great details”, “intricacies”, “go through everything”\n\nbit, but you can obviously save a lot of time by writing a piece of english which tells it what sort of environment you work in. It'll know that when I write Python I use UV, Ruff and Pyrefly and so on as an example. I personally also have a \"skill\" setting that tells the AI not to compliment me because I find that ridicilously annoying, and that certainly works. So who knows? Anyway, employees are going to want more. I've been doing some PoC's running open source models in isolation on a raspberry pi (we had spares because we use them in IoT projects) but it's hard to setup an isolation policy which can't be circumvented.\n\nWe'll have to figure it out though. For powerplant critical projects we don't want to use AI. 
But for the web tool that allows a couple of employees to upload three excel files from an external accountant and then generate some sort of report on them? Who cares who writes it or even what sort of quality it's written with? The lifecycle of that tool will probably be something that never changes until the external account does and then the tool dies. Not that it would have necessarily been written in worse quality without AI... I mean... Have you seen some of the stuff we've written in the past 40 years?"
}
,
{
"id": "47108142",
"text": "There is a miscommunication happening, this entire time we all had surprisingly different ideas about what quality of work is acceptable which seems to account for differences of opinion on this stuff."
}
]
</comments_to_classify>
Based on the comments above, assign each to up to 3 relevant topics.
Return ONLY a JSON array with this exact structure (no other text):
[
{
"id": "comment_id_1",
"topics": [
1,
3,
5
]
}
,
{
"id": "comment_id_2",
"topics": [
2
]
}
,
{
"id": "comment_id_3",
"topics": [
0
]
}
,
...
]
Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment
Remember: Output ONLY the JSON array, no other text.
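The rules above translate directly into a machine check. Below is a minimal sketch of the validation a downstream consumer of this batch file might run on the model's reply — the function name, error messages, and `expected_ids` parameter are my own; only the constraints themselves (JSON array, known ids, at most 3 topics, indices 0 through 20) come from the rules:

```python
import json

def validate_classifications(raw, expected_ids, n_topics=20):
    """Check a classifier response against the rules above.

    `raw` is the model's reply text, `expected_ids` the set of comment
    ids that were sent. Returns a list of problems; empty means valid.
    """
    problems = []
    try:
        items = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    if not isinstance(items, list):
        return ["top-level value is not an array"]
    seen = set()
    for item in items:
        if not isinstance(item, dict):
            problems.append(f"non-object entry: {item!r}")
            continue
        cid = item.get("id")
        topics = item.get("topics", [])
        if cid not in expected_ids:
            problems.append(f"unknown id {cid!r}")
        if cid in seen:
            problems.append(f"duplicate id {cid!r}")
        seen.add(cid)
        if len(topics) > 3:
            problems.append(f"{cid}: {len(topics)} topics (max 3)")
        for t in topics:
            # topic indices are 1-based; 0 is the explicit "no fit" bucket
            if not (isinstance(t, int) and 0 <= t <= n_topics):
                problems.append(f"{cid}: topic {t!r} out of range 0-{n_topics}")
    missing = expected_ids - seen
    if missing:
        problems.append(f"missing ids: {sorted(missing)}")
    return problems
```

For example, `validate_classifications('[{"id": "47111524", "topics": [5]}]', {"47111524"})` returns an empty list, while a reply that skips a comment or invents an id produces a non-empty problem list.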