The following is content for you to classify. Do not respond to the comments; only classify them.
<topics>
1. Lack of Concrete Evidence
Related: Commenters repeatedly criticize the article for providing no examples, code, projects, costs, or specifics about what was actually built, calling it empty hype and platitudes without substance or proof of claims
2. Author Credibility Concerns
Related: Multiple commenters point to the author's previous blog post praising the Rabbit R1 as evidence of poor technical judgment and tendency toward unfounded enthusiasm for new technology
3. AI Coding Tool Limitations
Related: Discussion of how AI tools work well for simple, repetitive, or locally-scoped tasks but fail with complex systems, large codebases, and non-trivial problems requiring significant human guidance
4. Greenfield vs Legacy Projects
Related: Observations that AI coding excels at new projects under 10,000 lines of code but struggles maintaining consistency and avoiding regressions in larger, established codebases
5. Astroturfing Suspicions
Related: Multiple commenters suspect pro-AI posts are marketing campaigns or astroturfing given the billions invested in AI, with some noting suspicious voting patterns and repetitive promotional content
6. AI-Generated Content Detection
Related: Many suspect the blog post itself was written by AI, citing lack of specifics, excessive em-dashes, and generic promotional language characteristic of LLM-generated slop
7. Manager Fantasy Critique
Related: Skepticism about the desire to become a 'super manager' rather than hands-on developer, with some viewing it as CEO cosplay or escapism from actual technical work
8. Productivity Illusion
Related: Discussion of whether AI tools create actual productivity gains or merely the feeling of productivity, with some noting impressive-looking output that lacks substance or quality
9. Security Concerns
Related: Significant worry about OpenClaw's security vulnerabilities, prompt injection risks, and the danger of giving AI agents access to production systems, emails, and sensitive data
10. Skills and Learning Curve
Related: Debate over whether effective AI tool usage requires significant skill development, with some arguing poor results indicate user skill issues while others see fundamental tool limitations
11. Real World Use Cases
Related: Commenters share legitimate use cases including utility scripts, exploring unfamiliar codebases, setup automation, and learning new tools, distinguishing these from transformative claims
12. Cost and Accessibility
Related: Discussion of the financial barriers including expensive subscriptions, Mac Mini hardware, and token costs that contradict claims of democratizing technology
13. AI Hype Cycle
Related: Observations that we're at the apex of AI hype, with predictions the bubble will pop and more realistic assessments will emerge over time
14. Context Window Problems
Related: Technical discussion of how AI agents lose coherence as context grows, with compaction causing confusion and requiring human redirection
15. Testing and Verification
Related: Emphasis on the need for humans to verify AI output, run tests, and maintain quality control since AI cannot reliably check its own work
16. Language-Specific Performance
Related: Observations that AI performs better with some programming languages like Python and JavaScript compared to Java, Scala, or enterprise frameworks
17. Engineering vs Management
Related: Philosophical debate about why engineers want to become managers, whether it's about power, career progression, avoiding obsolescence, or building bigger things
18. Model Selection Matters
Related: Discussion of significant quality differences between AI models, with frontier models like Opus and GPT-5.2 performing notably better than cheaper alternatives
19. Workflow Integration Tips
Related: Practical advice including using AGENTS.md files, breaking tasks into smaller chunks, brainstorming with agents, and having separate contexts for review and implementation
20. Vibe Coding Skepticism
Related: Criticism of fully autonomous AI coding without understanding the output, with warnings about technical debt, logical errors, and unmaintainable code accumulation
0. Does not fit well in any category
</topics>
<comments_to_classify>
[
{
"id": "46938720",
"text": "There's an odd trend with these sorts of posts where the author claims to have had some transformative change in their workflow brought upon by LLM coding tools, but also seemingly has nothing to show for it. To me, using the most recent ChatGPT Codex (5.3 on \"Extra High\" reasoning), it's incredibly obvious that while these tools are surprisingly good at doing repetitive or locally-scoped tasks, they immediately fall apart when faced with the types of things that are actually difficult in software development and require non-trivial amounts of guidance and hand-holding to get things right. This can still be useful, but is a far cry from what seems to be the online discourse right now.\n\nAs a real world example, I was told to evaluate Claude Code and ChatGPT codex at my current job since my boss had heard about them and wanted to know what it would mean for our operations. Our main environment is a C# and Typescript monorepo with 2 products being developed, and even with a pretty extensive test suite and a nearly 100 line \"AGENTS.md\" file, all models I tried basically fail or try to shortcut nearly every task I give it, even when using \"plan mode\" to give it time to come up with a plan before starting. To be fair, I was able to get it to work pretty well after giving it extremely detailed instructions and monitoring the \"thinking\" output and stopping it when I see something wrong there to correct it, but at that point I felt silly for spending all that effort just driving the bot instead of doing it myself.\n\nIt almost feels like this is some \"open secret\" which we're all pretending isn't the case too, since if it were really as good as a lot of people are saying there should be a massive increase in the number of high quality projects/products being developed. I don't mean to sound dismissive, but I really do feel like I'm going crazy here."
}
,
{
"id": "46938814",
"text": "You're not going crazy. That is what I see as well. But, I do think there is value in:\n\n- driving the LLM instead of doing it yourself. - sometimes I just can't get the activation energy and the LLM is always ready to go so it gives me a kickstart\n\n- doing things you normally don't know. I learned a lot of command like tools and trucks by seeing what Claude does. Doing short scripts for stuff is super useful. Of course, the catch here is if you don't know stuff you can't drive it very well. So you need to use the things in isolation.\n\n- exploring alternative solutions. Stuff that by definition you don't know. Of course, some will not work, but it widens your horizon\n\n- exploring unfamiliar codebases. It can ingest huge amounts of data so exploration will be faster. (But less comprehensive than if you do it yourself fully)\n\n- maintaining change consistency. This I think it's just better than humans. If you have stuff you need to change at 2 or 3 places, you will probably forget. LLM's are better at keeping consistency at details (but not at big picture stuff, interestingly.)"
}
,
{
"id": "46939048",
"text": "For me the biggest benefit from using LLMs is that I feel way more motivated to try new tools because I don't have to worry about the initial setup.\n\nI'd previously encountered tools that seemed interesting, but as soon as I tried getting it to run I found myself going down an infinite debugging hole. With an LLM I can usually explain my system's constraints and the best models will give me a working setup from which I can begin iterating. The funny part is that most of these tools are usually AI related in some way, but getting a functional environment often felt impossible unless you had really modern hardware."
}
,
{
"id": "46939469",
"text": "Same. This weekend, I built a Flutter app and a Wails app just to compare the two. Would have never done either on my own due to the up front boilerplate— and not knowing (nor really wishing to know) Dart."
}
,
{
"id": "46940298",
"text": "I did the same thing but with react and supabase. I wouldn’t have done this on my own because of the react drudgery."
}
,
{
"id": "46940829",
"text": "Cool! With openclaw or with Claude?"
}
,
{
"id": "46938853",
"text": ">driving the LLM instead of doing it yourself. - sometimes I just can't get the activation energy and the LLM is always ready to go so it gives me a kickstart\n\nThere is a counter issue though, realizing mid session that the model won’t be able to deliver that last 10%, and now you have to either grok a dump of half finished code or start from scratch."
}
,
{
"id": "46941615",
"text": "I wonder about this.\n\nIf (and it's a big if) the LLM gives you something that kinda, sorta, works, it may be an easier task to keep that working, and make it work better, while you refactor it, than it would have been to write it from scratch.\n\nThat is going to depend a lot on the skillset and motivation of the programmer, as well as the quality of the initial code dump, but...\n\nThere's a lot to be said for working code. After all, how many prototypes get shipped?"
}
,
{
"id": "46939580",
"text": "> - maintaining change consistency. This I think it's just better than humans. If you have stuff you need to change at 2 or 3 places, you will probably forget. LLM's are better at keeping consistency at details (but not at big picture stuff, interestingly.)\n\nI use Claude Code a decent amount, and I actually find that sometimes this can be the opposite for me. Sometimes it is actually missing other areas that the change will impact and causing things to break. Sometimes when I go to test it I need to correct it and point out it missed something or I notice when in the planning phase that it is missing something.\n\nHowever I do find if you use a more powerful opus model when planning, it does consider things fully a lot better than it used to. This is actually one area I have been seeing some very good improvements as the models and tooling improves.\n\nIn fact, I actually hope that these AI tools keep getting better at the point you mention, as humans also have a \"context limit\". There are only so many small details I can remember about the codebase so it is good if AI can \"remember\" or check these things.\n\nI guess a lot of the AI can also depend on your codebase itself, how you prompt it, and what kind of agents file you have. If you have a robust set of tests for your application you can very easily have AI tools check their work to ensure things aren't being broken and quickly fix it before even completing the task. If you don't have any testing more could be missed. So I guess it's just like a human in some sense. If you have a crappy codebase for the AI to work with, the AI may also sometimes create sloppy work."
}
,
{
"id": "46940774",
"text": "> LLM's are better at keeping consistency at details (but not at big picture stuff, interestingly.)\n\nI think it makes sense? Unlike small details which are certain to be explicitly part of the training data, \"big picture stuff\" feels like it would mostly be captured only indirectly."
}
,
{
"id": "46940358",
"text": "I tend to be surprised in the variance of reported experiences with agentic flows like Claude Code and Codex CLI.\n\nIt's possible some of it is due to codebase size or tech stack, but I really think there might be more of a human learning curve going on here than a lot of people want to admit.\n\nI think I am firmly in the average of people who are getting decent use out of these tools. I'm not writing specialized tools to create agents of agents with incredibly detailed instructions on how each should act. I haven't even gotten around to installing a Playwright mcp (probably my next step).\n\nBut I've:\n\n- created project directories with soft links to several of my employer's repos, and been able to answer several cross-project and cross-team questions within minutes, that normally would have required \"Spike/Disco\" Jira tickets for teams to investigate\n\n- interviewed codebases along with product requirements to come up with very detailed Jira AC, and then,.. just for the heck of it, had the agent then use that AC to implement the actual PR. My team still code-reviewed it but agreed it saved time\n\n- in side projects, have shipped several really valuable (to me) features that would have been too hard to consider otherwise, like... generating pdf book manuscripts for my branching-fiction creating writing club, and launching a whole new website that has been mired in a half-done state for years\n\nReally my only tricks are the basics: AGENTS.md, brainstorm with the agent, continually ask it to write markdown specs for any cohesive idea, and then pick one at a time to implement in commit-sized or PR-sized chunks. 
GPT-5.2 xhigh is a marvel at this stuff.\n\nMy codebases are scala, pekko, typescript/react, and lilypond - yeah, the best models even understand lilypond now so I can give it a leadsheet and have it arrange for me two-hand jazz piano exercises.\n\nI generally think that if people can't reach the above level of success at this point in time, they need to think more about how to communicate better with the models. There's a real \"you get out of it what you put into it\" aspect to using these tools."
}
,
{
"id": "46940428",
"text": "Is it annoying that I tell it to do something and it does about a third of it? Absolutely.\n\nCan I get it to finish by asking it over and over to code review its PR or some other such generic prompt to weed out the skips and scaffolding? Also yes.\n\nBasically these things just need a supervisor looking at the requirements, test results, and evaluating the code in a loop. Sometimes that's a human, it can also absolutely be an LLM. Having a second LLM with limited context asking questions to the worker LLM works. Moreso when the outer loop has code driving it and not just a prompt."
}
,
{
"id": "46940441",
"text": "I guess this is another example - I literally have not experienced what you described in... several weeks, at least."
}
,
{
"id": "46940474",
"text": "I often ask for big things.\n\nFor example I'm working on some virtualization things where I want a machine to be provisioned with a few options of linux distros and BSDs. In one prompt I asked for this list to be provisioned so a certain test of ssh would complete, it worked on it for several hours and now we're doing the code review loop. At first it gave up on the BSDs and I had to poke it to actually finish with an idea it had already had, now I'm asking it to find bugs and it's highlighting many mediocre code decisions it has made. I haven't even tested it so I'm not sure if it's lying about anything working yet."
}
,
{
"id": "46941329",
"text": "I usually talk with the agent back and forth for 15 min, explicitly ask, \"what corner cases do we need to consider, what blind spots do I have?\" And then when I feel like I've brain vomited everything + send some non-sensitive copy and paste and ask it for a CLAUDE/AGENTS.md and that's sufficient to one-shot 98% of cases"
}
,
{
"id": "46941535",
"text": "Yeah I usually ask what open questions it has, versus when it thinks it is ready to implement."
}
,
{
"id": "46940846",
"text": "The thing I've learned is that it doesn't do well at the big things (yet).\n\nI have to break large tasks into smaller tasks, and limit the context and scope.\n\nThis is the thing that both Superpowers and Ralph [0] do well when they're orchestrating; the plans are broken down enough so that the actual coding agent instance doesn't get overwhelmed and lost.\n\nIt'll be interesting to see what Claude Code's new 1m token limit does to this. I'm not sure if the \"stupid zone\" is due to approaching token limits, or to inherent growth in complexity in the context.\n\n[0] these are the two that I've experimented with, there are others."
}
,
{
"id": "46941019",
"text": "It's like a little kid, you tell it to do the dishes and it does half of them and then runs away."
}
,
{
"id": "46940639",
"text": "ah, so cool. Yeah that is definitely bigger than what I ask for. I'd say the bigger risk I'm dealing with right now is that while it passes all my very strict linting and static analysis toolsets, I neglected to put detailed layered-architecture guidelines in place, so my code files are approaching several hundred lines now. I don't actually know if the \"most efficient file size\" for an agent is the same as for a human, but I'd like them to be shorter so I can understand them more easily."
}
,
{
"id": "46941003",
"text": "Tell it to analyze your codebase for best practices and suggest fixes.\n\nTell it to analyze your architecture, security, documentation, etc. etc. etc. Install claude to do review on github pull requests and prompt it to review each one with all of these things.\n\nJust keep expanding your imagination about what you can ask it to do, think of it more like designing an organization and pinning down the important things and providing code review and guard rails where it needs it and letting it work where it doesn't."
}
,
{
"id": "46941584",
"text": "> To be fair, I was able to get it to work pretty well after giving it extremely detailed instructions and monitoring the \"thinking\" output and stopping it when I see something wrong there to correct it, but at that point I felt silly for spending all that effort just driving the bot instead of doing it myself.\n\nThis is the challenge I also face, it's not always obvious when a change I want will be properly understood by the LLM. Sometimes it one shots it, then others I go back and forth until I could have just done it myself. If we have to get super detailed in our descriptions, at what point are we just writing in some ad-hoc \"programming language\" that then transpiles to the actual program?"
}
,
{
"id": "46940166",
"text": "I can’t speak for anyone else, but Claude Code has been transformative for me.\n\nI can’t say it’s led to shipping “high quality projects”, but it has let me accomplish things I just wouldn’t have had time for previously.\n\nI’ve been wanting to develop a plastic -> silicone -> plaster -> clay mold making process for years, but it’s complex and mold making is both art and science. It would have been hundreds of hours before, with maybe 12 hours of Claude code I’m almost there (some nagging issues… maybe another hour).\n\nAnd I had written some home automation stuff back with Python 2.x a decade ago; it was never worth the time to refamiliarize myself with in order to update, which led to periodic annoyances. 20 minutes, and it’s updated to all the latest Python 3.x and modern modules.\n\nFor me at least, the difference between weeks and days, days and hours, and hours and minutes has allowed me to do things I just couldn’t justify investing time in before. Which makes me happy!\n\nSo maybe some folks are “pretending”, or maybe the benefits just aren’t where you’re expecting to see them?"
}
,
{
"id": "46940321",
"text": "I’m trying to pivot my career from web/business app dev entirely into embedded, despite the steep learning curve, many new frameworks and tool chains, because I now have a full-time infinitely patient tutor, and I dare say it’s off to a pretty good start so far."
}
,
{
"id": "46940573",
"text": "If you want to get into embedded you’d be better suited learning how to use an o-scope, a meter, and asm/c. If you’re using any sort of hardware that isn’t “mainstream” you’ll be pretty bummed at the results from an LLM."
}
,
{
"id": "46941161",
"text": "If it’s okay with you, I’m going to very intentionally do my initial learning on mainstream hardware before moving on to anything beyond that."
}
,
{
"id": "46940506",
"text": "Sounds like you only tried it on small projects."
}
,
{
"id": "46940831",
"text": "At work I use it on giant projects, but it’s less impressive there’s\n\nMy mold project is around 10k lines of code, still small.\n\nBut I don’t actually care about whether LLMs are good or bad or whatever. All I care is that I am am completing things that I wasn’t able to even start before. Doesn’t really matter to me if that doesn’t count for some reason."
}
,
{
"id": "46940580",
"text": "That’s where it really shines. I have a backlog of small projects (-1-2kLOC type state machines , sensors, loggers) and instead of spending 2-3 days I can usually knock them out in half a day. So they get done. On these projects, it is an infinity improvement because I simply wouldn’t have done them, unable to justify the cost.\n\nBut on bigger stuff, it bogs down and sometimes I feel like I’m going nowhere. But it gets done eventually, and I have better structured, better documented code. Not because it would be better structured and documented if I left it to its ow devices, but rather it is the best way to get performance out of LLM assistance in code.\n\nThe difference now is twofold: First, things like documentation are now -effortless-. Second, the good advice you learned about meticulously writing maintainable code no longer slows you down, now it speeds you up."
}
,
{
"id": "46940545",
"text": "> I’ve been wanting to develop a plastic -> silicone -> plaster -> clay mold making process for years, but it’s complex and mold making is both art and science. It would have been hundreds of hours before, with maybe 12 hours of Claude code I’m almost there (some nagging issues… maybe another hour).\n\nThat’s so nebulous and likely just plain wrong. I have some experience with silicone molds and casting silicone and other materials. I have no idea how you’d accurately estimate it would take hundreds of hours. But the mostly likely reason you’ve had results is that you just did it.\n\nThis sounds very very much like confirmation bias. “I started drinking pine needle tea and then 5 days later my cold got better!”\n\nI use AI, it’s useful for lots of things, but this kind of anecdote is terrible evidence."
}
,
{
"id": "46940845",
"text": "You may just be more knowledgeable than me. For me, even getting to algorithmic creation of 4-6 part molds, plus alternating negatives / positives in the different mediums, was insurmountable.\n\nI’m willing to believe that I’m just especially clueless and this is not a meaningful project to an expert. But hey, I’m printing plastic negatives to make silicone positives to make plaster negatives to slip cast, which is what I actually do care about."
}
,
{
"id": "46940987",
"text": "It might be role-specific. I'm a solutions engineer. A large portion of my time is spent making demos for customers. LLMs have been a game-changer for me, because not only can I spit out _more_ demos, but I can handle more edge cases in demos that people run into. E.g. for example, someone wrote in asking how to use our REST API with Python.\n\nI KNOW a common issue people run into is they forget to handle rate limits, but I also know more JavaScript than Python and have limited time, so before I'd\nwrite:\n\n```\n# NOTE: Make sure to handle the rate limit! This is just an example. See example.com/docs/javascript/rate-limit-example for a js example doing this.\n```\n\nUnsurprisingly, more than half of customers would just ignore the comment, forget to handle the rate limit, and then write in a few months later. With Claude, I just write \"Create a customer demo in Python that handles rate limits. Use example.com/docs/javascript/rate-limit-example as a reference,\" and it gets me 95% of the way there.\n\nThere are probably 100 other small examples like this where I had the \"vibe\" to know where the customer might trip over, but not the time to plug up all the little documentation example holes myself. Ideally, yes, hiring a full-time person to handle plugging up these holes would be great, but if you're resource constrained paying Anthropic for tokens is a much faster/cheaper solution in the short term."
}
,
{
"id": "46941540",
"text": "I’m working on a solo project, a location-based game platform that includes games like Pac-Man you play by walking paths in a park. If I cut my coding time to zero, that might make me go two or three times faster. There is a lot of stuff that is not coding. Designing, experimenting, testing, redesigning, completely changing how I do something, etc. There is a lot more to doing a project than just coding. I am seeing a big speed up, but that doesn’t mean I can complete the project in a week. (These projects are never really a completed anyway, until you give up on it)."
}
,
{
"id": "46941008",
"text": "As others have said, the benefit is speed, not quality. And in my experience you get a lot more speed if you’re willing to settle for less quality.\n\nBut the reason you don’t see a flood of great products is that the managerial layer has no idea what to do with massively increased productivity (velocity). Ask even a Google what they’d do with doubly effective engineers and the standard answer is to lay half of them off."
}
,
{
"id": "46938796",
"text": "There's got to be some quantity of astroturfing going on, given the players and the dollar amounts at stake."
}
,
{
"id": "46939366",
"text": "Some ? I'd be shocked if it's less than 70% of everything AI-related in here.\n\nFor example a lot of pro-OpenAI astroturfing really wanted you to know that 5.3 scored better than opus on terminal-bench 2.0 this week, and a lot of Anthropic astroturfing likes to claim that all your issues with it will simply go away as soon as you switch to a $200/month plan (like you can't try Opus in the cheaper one and realise it's definitely not 10x better)."
}
,
{
"id": "46940165",
"text": "You can try opus in the cheaper one if you enable extra usage, though."
}
,
{
"id": "46941565",
"text": "And they are currently giving away $50 worth of extra usage if you subscribed to Pro before Feb 4."
}
,
{
"id": "46940512",
"text": "\"some\", where \"some\" is scaled to match the overwhelmingly unprecedented amount of money being thrown behind all this. plus all of this is about a literal astroturfing machine , capable of unprecedented scale and ability to hide, which it's extremely clearly being used for at scale elsewhere / by others.\n\nso yeah, it wouldn't surprise me if it was well over most. I don't actually claim that it is over half here, I've run across quite a few of these kinds of people in real life as well. but it wouldn't surprise me."
}
,
{
"id": "46938780",
"text": "Pretty much every software engineer I've talked to sees it more or less like you do, with some amount of variance on exactly where you draw the line of \"this is where the value prop of an LLM falls off\". I think we're just awash in corporate propaganda and the output of social networks, and \"it's good for certain things, mixed for others\" is just not very memetic."
}
,
{
"id": "46939314",
"text": "I wish this was true. My experience is co-workers who do lip service as to treating LLM like a baby junior dev, only to near-vibe every feature and entire projects, without spending so much as 10 mins to think on their own first."
}
,
{
"id": "46941483",
"text": "I like it because it lets me shoot off a text about making a plot I think about on the bus connecting some random data together. It’s nice having Claude code essentially anywhere. I do think that this is a nice big increment because of that. But also it suffers the large code base problems everyone else complains about. Tbh I think if its context window was ten times bigger this would be less of an issue. Usually compacting seems to be when it starts losing the thread and I have to redirect it."
}
,
{
"id": "46939256",
"text": "At my work I interview a lot of fresh grads and interns. I have been doing that consistently for last 4 years. During the interviews I always ask the candidates to show and tell, share their screen and talk about their projects and work at school and other internships.\n\nSince last few months, I have seen a notable difference in the quality and extent of projects these students have been able to accomplish. Every project and website they show looks polished, most of those could be a full startup MVP pre AI days.\n\nThe bar has clearly been raised way high, very fast with AI."
}
,
{
"id": "46939468",
"text": "I’ve had the same experience with the recent batch of candidates for a Junior Software Engineer position we just filled. Their projects looked impressive on the surface and seemed very promising.\n\nOnce we got them into a technical screening, most fell apart writing code. Our problem was simple: using your preferred programming language, model a shopping cart object that has the ability to add and remove items from the cart and track the cart total.\n\nWe were shocked by how incapable most candidates were in writing simple code without their IDEs tab completion capability. We even told them to use whatever resources they normally used.\n\nThe whole experience left us a little surprised."
}
,
{
"id": "46940191",
"text": "In my opinion, it has always been the “easy” part of development to make a thing work once. The hard thing is to make a thousand things work together over time with constantly changing requirements, budgets, teams, and org structures.\n\nFor the former, greenfield projects, LLMs are easily a 10x productivity improvement. For the latter, it gets a lot more nuanced. Still amazingly useful in my opinion, just not the hands off experience that building from scratch can be now."
}
,
{
"id": "46939057",
"text": "I find these agents incredibly useful for eliminating time spent on writing utility scripts for data analysis or data transformation.\nBut... I like coding, getting relegated to being a manager 100%? Sounds like a prison to me not freedom.\n\nThat they are so good at the things I like to do the least and still terrible at the things at which I excel. That's just gravy.\n\nBut I guess this is in line with how most engineers transition to management sometime in their 30s."
}
,
{
"id": "46939212",
"text": "> if it were really as good as a lot of people are saying there should be a massive increase in the number of high quality projects/products being developed.\n\nThe headline gain is speed. Almost no-one's talking about quality - they're moving too fast to notice the lack."
}
,
{
"id": "46939238",
"text": "Matches my experience pretty well. FWIW, this is the opinion that I hear most frequently in real life conversation. I only see the magical revelation takes online -- and I see a lot of them."
}
,
{
"id": "46941080",
"text": "I'd be curious if a middle layer like this [0] could be helpful? I've been working on it for some time (several iterations now, going back and forth between different ideas) and am hoping to collect some feedback.\n\n[0] https://github.com/deepclause/deepclause-sdk"
}
,
{
"id": "46939602",
"text": "We're at the apex of the hype cycle. I think it'll die down in a year and we'll get a better picture of how people have integrated the tools\n\nEven if it's not straight astroturfing I think people are wowed and excited and not analyzing it with a clear head"
}
,
{
"id": "46939350",
"text": "> ... but also seemingly has nothing to show for it\nThis x1000, I find it so ridiculous.\n\nusually when someone hypes it up it's things like, \"i have it text my gf good morning every day!!\", or \"it analyzed every single document on my computer and wrote me a poem!!\""
}
]
</comments_to_classify>
Based on the comments above, assign each to up to 3 relevant topics.
Return ONLY a JSON array with this exact structure (no other text):
[
{
"id": "comment_id_1",
"topics": [
1,
3,
5
]
}
,
{
"id": "comment_id_2",
"topics": [
2
]
}
,
{
"id": "comment_id_3",
"topics": [
0
]
}
,
...
]
Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment
Remember: Output ONLY the JSON array, no other text.