Summarizer

LLM Input

llm/e6f7e516-f0a0-4424-8f8f-157aae85c74e/batch-1-196a5cdb-46a0-40da-a246-24f465f0a6f7-input.json

prompt

The following is content for you to classify. Do not respond to the comments—classify them.

<topics>
1. Reasoning vs. Pattern Matching
   Related: Debates on whether LLMs truly think or merely predict tokens based on training data. Includes comparisons to human cognition, the definition of "reasoning" as argument production versus evaluation, and the argument that LLMs are "lobotomized" without external loops or formalization.
2. AI-Assisted Coding Reality
   Related: Divergent experiences with tools like Claude Code and Codex. While some report massive productivity boosts and shipping entire features solo, others describe "lazy" AI, subtle logic bugs in generated tests (e.g., SQL query validation), and the danger of unverified code bloat.
3. The AI Economic Bubble
   Related: Comparisons to the dot-com crash, with arguments that current valuation relies on "science fiction fantasies" and hype rather than revenue. Counter-arguments suggest the infrastructure (datacenters, GPUs) provides real value similar to the fiber build-out, even if a market correction is imminent.
4. Workforce Displacement and Automation
   Related: Fears and anecdotes regarding job security, including a "Staff SWE" preferring AI to coworkers and contractors losing bids to smaller, AI-equipped teams. Discussions cover the automation of "bullshit jobs," the potential for a "winner take all" economy, and management incentives to cut labor costs.
5. Definition of Agentic Success
   Related: Disagreement over whether AI "joined the workforce." Some argue failing to replace humans entirely (the "secretary" model) is a failure of 2025 predictions, while others claim deep integration as a tool (automating loops, drafting emails) constitutes a successful, albeit different, type of joining.
6. Verification and Hallucination Risks
   Related: The critical need for external validation mechanisms. Commenters note that coding agents succeed because compilers/linters act as truth-checkers, whereas open-ended tasks (spreadsheets, emails) lack rigorous feedback loops, making hallucinations and "truthy" errors dangerous and hard to detect.
7. Impact on Skill and Learning
   Related: Concerns about the long-term effects on human expertise. Topics include "skill atrophy" where juniors bypass learning fundamentals, the educational crisis evidenced by Chegg's collapse, and the difficulty of debugging AI code without deep institutional knowledge or "muscle memory" of the system.
8. Corporate Hype vs. Utility
   Related: Cynicism toward executive predictions (Altman, Hinton) viewed as efforts to pump stock prices or attract investment. Users contrast "corporate puffery" and "vaporware" with the practical, often mundane utility of AI in specific B2B workflows like insurance claim processing or data extraction.
9. Integration into Legacy Systems
   Related: The challenge of applying AI to real-world, messy environments versus greenfield demos. Discussion includes the difficulty of getting agents to work with proprietary codebases, expensive dependencies, lack of documentation for obscure vendor tools, and the failure of browser agents on standard web forms.
10. Formalization of Natural Language
   Related: Theoretical discussions on overcoming LLM limitations by mapping natural language to formal logic or proof systems (like Lean). Skeptics argue human language is too "mushy" or context-dependent for this to be a silver bullet for AGI or perfect reasoning.
11. Medical and Specialized Fields
   Related: Debates on AI in radiology and medicine. While some see potential in automated reporting and "second opinions" to catch errors, professionals argue that current models struggle with complex cases, over-report issues, and lack the nuance required for high-stakes diagnostics.
12. The Secretary vs. Replacement Model
   Related: The shift in expectations from AI as an autonomous employee to AI as a productivity-enhancing assistant. Users describe workflows where humans act as orchestrators or managers of AI output rather than performing the rote work, effectively reviving the role of the personal secretary.
13. Software Engineering Evolution
   Related: Predictions that the discipline is shifting from "writing code" to "managing entropy" and system design. Some view this as empowering "cowboy devs" to move fast, while others fear a future of unmaintainable "vibe coded" software that no human fully understands.
14. Productivity Metrics and Paradoxes
   Related: Skepticism regarding "2x productivity" claims. Commenters argue that generating more code doesn't equal value, noting that debugging, communicating, and context-gathering are the real bottlenecks, and that AI might simply be increasing the volume of low-quality output or "slop."
0. Does not fit well in any category
</topics>

<comments_to_classify>
[
  
{
  "id": "46511032",
  "text": "Yes, but only 15-20 years later in the extent expected for 2000. The technology wasn't there yet, just like with LLMs."
}
,
  
{
  "id": "46511559",
  "text": "Right, one could say it was actually the smartphone that made the Internet as successful as it was being pitched to be in the late 90’s."
}
,
  
{
  "id": "46514544",
  "text": "From where I sit, smartphones completely ruined the internet.\n\nIf you're defining \"successful\" in the sense of people-making-a-lot-of-money I completely agree.\n\nIf you're talking about the internet in 2005 vs 2025, smartphones completely ruined the internet. At one point I had half my high school using HTML in their AIM profiles because they thought mine was cool.\n\nNow kids can hardly even type on an actual, physical keyboard."
}
,
  
{
  "id": "46514569",
  "text": "We're in complete agreement."
}
,
  
{
  "id": "46511783",
  "text": "The technology was there all along, and it's ironic that we're discussing it on this website of all. Look up ViaWeb."
}
,
  
{
  "id": "46512231",
  "text": "The tech was there. The websites worked fine. The issue was number of internet users was too low. Look at number of internet users today vs 1999…"
}
,
  
{
  "id": "46513456",
  "text": "Yes, but generally not for the companies who were closely involved in the dot-com bubble; Amazon is probably literally the only exception of any significance.\n\nAnd, well, not all bubbles go as well as that. See the cryptocurrency one, say. Or any of a number of _previous_ AI bubbles."
}
,
  
{
  "id": "46511830",
  "text": "The startups/businesses that provided value survived the bubble bursting."
}
,
  
{
  "id": "46510949",
  "text": "I had a job at a small investment firm at the time in college and to me it is nothing like the dotcom bubble.\n\nThe dotcom bubble was the \"new economy\", the old economy had changed forever and was dead. No one thought it was a bubble. Even when the bubble popped it took until 9/11 to wake us up from the mass hysteria.\n\nI can't think of another \"bubble\" that practically everyone thinks we are in a bubble. To the point that I think many would find it irrational to believe we are not in a bubble. That is not what bubble is. A bubble is the madness of crowds, not the wisdom of crowds and the crowd certainly believes we are in a bubble."
}
,
  
{
  "id": "46511389",
  "text": "25 years time. \"I remember the LLM bubble, everyone knew we were in a bubble but they carried on as if the music would never stop. Don't worry, our situation is nothing like that, there is no talk of a bubble.\""
}
,
  
{
  "id": "46512299",
  "text": "Have you considered that your investment firm and other peers at the time were in an information bubble?\n\nIn fact outside of tech if the dotcom bubble wasn’t being discussed it’s because most folks—being not, or barely, online yet—weren’t paying any attention to it per se . The bubble they cared about was the broader stock market bubble, which was definitely widely perceived as a bubble."
}
,
  
{
  "id": "46511634",
  "text": "> practically everyone thinks we are in a bubble\n\nVery untrue, economy doesn't happen on online forums in echo chambers but out there. Every major company invests into AI however they can for the classical FOMO emotion.\n\nThis is how movers and decision makers think. No CEO thinks - this will crash, so lets invest into it massively and spread our company finances more thinly when the SHTF comes."
}
,
  
{
  "id": "46511142",
  "text": "\"the old economy had changed forever and was dead.\"\n\nEhem - what is the difference compered to now? Wasn't programmers obsolete by 6mths ago and nobody would work so we do need UBI?\n\nHowever your point that if everybody are thinking there is buble there is none is valid. Ironically your whole post undermine this point. And you are not alone in your analysis. General bubble wisdom is not settled as one might think.\n\nPlus famous Alan Greenspan \"irrational exuberance\" was in 1996. And AFAIK in 1999 everybody know there is buble but it busted only in 2000. On top of that I have seen overlying plots of stock prices now and before dot com suggesting there is 1-2 years of increases still to go."
}
,
  
{
  "id": "46511541",
  "text": "> Ehem - what is the difference compered to now? Wasn't programmers obsolete by 6mths ago and nobody would work so we do need UBI?\n\nYou're applying an arbitrary time constraint to the realization of AI's promise in order to rubbish it. This is a logical mistake common among critics: not yet, so never. It doesn't seem as if there is a near limit to the tech's development. Until that changes, the potential for job wipeouts and societal upheaval is real, whether in 5 or 50 years."
}
,
  
{
  "id": "46512048",
  "text": "Sorry but that was not my point. My point was refuting of the thesis (in the comment that I am replying to) that nobody was making grand claims about AI contrary to grand claims about internet pre dot com. Obviously in both cases there were/are grand claims made."
}
,
  
{
  "id": "46510426",
  "text": "regardless if you were around or not, this take has been repeated a quintillion times by now"
}
,
  
{
  "id": "46510580",
  "text": "Because it's the correct take"
}
,
  
{
  "id": "46511756",
  "text": "Doesn't mean it's a valuable take.\n\nIt doesn't even significantly matter whether it's a bubble or not, but whether its a \"bad\" bubble.\n\nI think Steve Eisman (of housing bubble fame) recently made the argument that it's probably a bubble, but it doesn't seem to have the hallmarks it would have to turn it into a crisis. e.g. no broad immediate exposure for the general populace (as in housing/crypto bubbles)."
}
,
  
{
  "id": "46513268",
  "text": "> It doesn't even significantly matter whether it's a bubble or not, but whether its a \"bad\" bubble.\n\nthere are billions and billions of dollars invested in there -- it matter significantly to a lot of people.\n\nthe bubble popping may trash the US and possibly global economy. \"it doesn't matter\" has to be one of the worst AI takes I've seen..."
}
,
  
{
  "id": "46510733",
  "text": "Oh my god this has to be peak HN comment\n\nRight alongside \"is anyone surprised\" or nitpicking over small details that don't matter"
}
,
  
{
  "id": "46511111",
  "text": "Or \"nice project, I've done exactly the same 10 years before...\" Or \"cool project, but why haven't you tried [insert obscure software]\".\n\nThe list goes on."
}
,
  
{
  "id": "46511599",
  "text": "the \"why not xxxx?\" comments really are the height of disrespect, ignoring someone else's effort to instead show how smart they are, while being lazy about it. I bet 9/10 people who make such comments never even look at the original project in any depth, let alone try it out in anger."
}
,
  
{
  "id": "46511986",
  "text": "There could instead be warm fellow-feeling where everyone maintains respectful silence about alternatives, everybody with a new project gets a lovely ego boost, and I remain uninformed about what else exists."
}
,
  
{
  "id": "46513079",
  "text": "If that doesn't come in the form of a discussion grounded in the original post, I could just as well have asked chatgpt and wouldn't have known the difference."
}
,
  
{
  "id": "46516636",
  "text": "all about you, all the time, right?"
}
,
  
{
  "id": "46508870",
  "text": "> So, this is how I’m thinking about AI in 2026. Enough of the predictions. I’m done reacting to hypotheticals propped up by vibes.\n\nA lot of the predictions come from interviews and presentations with top tech executives. Their job is to increase the perceived value of their product, not to offer an objective assessment.\n\nI've gotten a lot of value out of reading the views of experienced engineers; overall they like the tech, but they do not think it is a sentient alien that will delete our jobs.\n\nI have also gotten a lot of value out of Cembalest's recent \"eyes on the market\", which looks at the economic side of this AI push."
}
,
  
{
  "id": "46510486",
  "text": "In the gap between reality and executive speak around LLMs, I’m wondering about motives.\n\nGetting executives, junior devs, HR, and middle management hooked onto an advice and document template machine owned and operated by your corporation would seemingly have a huge upside for an entity like Microsoft. Their infatuation might be more about how profitable such arrangements would be versus any meaningful productivity improvement for developers.\n\nLike, in ways that BizTalk, Dynamics, and SharePoint attempt to capture business processes onto a pay-for-play MS stack, and all benefit when being pitched to non-technical customers, Copilot provides an ever evolving sycophantic exec-centred channel to push and entangle it all as MS sees fit.\n\nHaving all parts of your business divulge in real-time through saveable chats every part of your business, strategy, tooling, and process to MS servers and Azure services is itself a pretty stunning arrangement. Imagining those same services directly selling busy customers entangling integrations, or trendy azure services, through freewheeling MCP-like glue, all inline in that customers own business processes? It sounds like tech exec nirvana, automated self-directed sales.\n\nThey don’t need job deleting sentience to make the share price go up and rationalize this LLM push. They are far more aware of the limitations than we…"
}
,
  
{
  "id": "46510861",
  "text": "> They are far more aware of the limitations than we…\n\nEvery individual only cares about their paycheck and promotion. They will happily ignore their knowledge of the limitations if it means squeezing out an extra resume bulletpoint, paycheck or promotion, even if it causes the company to go bankrupt down the line (by that time they would've jumped ship somewhere else anyway)."
}
,
  
{
  "id": "46513399",
  "text": "Weren't we trusting MS with our business info before AI with cloud platforms like Azure? Not saying \"eh don't worry about it\" - I'm just asking why I should be more worried than before. These businesses need subscribers which means they need our trust.."
}
,
  
{
  "id": "46511679",
  "text": "> I've gotten a lot of value out of reading the views of experienced engineers; overall they like the tech, but they do not think it is a sentient alien that will delete our jobs.\n\nI normally see things the same way you do, however I did have a conversation with a podiatrist yesterday that gave me food for thought. His belief is that certain medical roles will disappear as they'll become redundant. In his case, he mentioned radiology and he presented his case as thus:\n\nA consultant gets a report + X-Ray from the radiologist. They read the report and confirm what they're seeing against the images. They won't take the report blindly. What changes is that machines have been learning to interpret the images and are able to use an LLM to generate the report. These reports tend not to miss things but will over-report issues. As a consultant will verify the report for themselves before operating, they no longer need the radiologist. If the machine reports a non-existent tumour, they'll see there's no tumour."
}
,
  
{
  "id": "46513474",
  "text": "I doubt this simply because of the inertia of medicine. The industry still does not have a standardized method for handling automated claims like banking. It gets worse for services that require prior authorization; they settle this over the phone ! These might sound like irrelevant ranting, but my point is that they haven't even addressed the low-hanging fruits, let alone complex ailments like cancer."
}
,
  
{
  "id": "46514187",
  "text": "IMO prior authorization needing to be done on the phone is a feature, not a bug. It intentionally wastes a doctor's time so they are less incentivized to advocate for their patients and this frustration saves the insurance companies money."
}
,
  
{
  "id": "46514819",
  "text": "Heard. I do wonder why hospitals haven't automated their side though. Regardless, the recent prior auth situation is a trainwreck. If I were dictator, insurance companies would be non-profit and required to have a higher loss ratio.\n\n2 quibbles: 1) a more ethical system would still need triage-style rationing given a finite budget, 2) medical providers are also culpable given the eye-watering prices for even trivial services."
}
,
  
{
  "id": "46511893",
  "text": "I've seen this sort of thing a few times. \"Yes, I'm sure AI can do that other job that's not mine over there.\". Now maybe foot doctors work closer to radiologists than I'm aware of. But the radiologists that I've talked to aren't impressed with the work AI had managed to do in their field. Apparently there are one or two incredibly easy tasks that they can sort of do, but it comprises a very small amount of the job of an actual radiologist."
}
,
  
{
  "id": "46512109",
  "text": "> But the radiologists that I've talked to aren't impressed with the work AI had managed to do in their field.\n\nJust so I understand correctly: is it over-reporting problems that aren't there or is it missing blindingly obvious problems? The latter is obviously a problem and, I agree, would completely invalidate it as a useful tool. The former sounded, the way it was explained to me, more like a matter of degrees."
}
,
  
{
  "id": "46512455",
  "text": "I'm afraid I don't have the details. I was reading about certain lung issues the AI was doing a good job on and thought, \"oh well that's it for radiology.\" But the radiologist chimed in with, \"yeah that's the easiest thing we do and the rates are still not acceptable, meanwhile we keep trying to get it to do anything harder and the success rates are completely unworkable.\""
}
,
  
{
  "id": "46512083",
  "text": "AI luminary and computer scientist Geoffrey Hinton predicted in 2016 that AI would be able to do all of the things radiologists can do within five years. We're still not even close. He was full of shit and now almost 10 years later he's changed his prediction, though still pretending he was right, by moving the goal posts. His new prediction is that radiologists will use AI to be more efficient and accurate, half suggesting he meant that all along. He didn't. He was simply bullshitting, bluffing, making an educated wish.\n\nThis is the nonsense we're living through, predictions, guesses, promises that cannot possibly be fulfilled and which will inevitably change to something far less ambitious and with much longer timelines and everyone will shrug it off as if we weren't being mislead by a bunch of fraudsters."
}
,
  
{
  "id": "46510784",
  "text": "> Their job is to increase the perceived value of their product\n\nI don't agree. Your job cannot be \"lie to the customer.\" They may see this as the easy way to get more money and justify their comfy position, but it is not their job."
}
,
  
{
  "id": "46512749",
  "text": "Elon proved that \"corporate puffery\" is more valuable than any product you make or could make. The job of CEOs now is to generate science fiction fantasies and sell those to the public."
}
,
  
{
  "id": "46513041",
  "text": "I overstated. To clarify, I think a major purpose of those public interviews/conferences is to increase the perceived value of the product. Entrepreneurship, essentially. Taking chances, speculating, thinking about possibilities, etc."
}
,
  
{
  "id": "46510883",
  "text": "No one said anything about lying.\n\nTheir job is to make the company successful. Part of success is raising funds and boosting share price.\n\nThat is their job, and how do you imagine they can do that?\n\nSound kind of glum and down about the company prospects?\n\nDo not make be laugh.\n\nEven if the company is literally haemorrhaging cash and has < week of runway left, senior executives are often so far up their own basses and surrounded by yes men, that they often honestly believe they can turn things around.\n\nIts often not about will-fully lying.\n\nIts just delusional belief and faith in something that is very unlikely.\n\n(Last minute turn arounds and DSA do exist, but like lottery players, seeing the very few people who do win and mimicking them does not make you into a winner; most of the time)"
}
,
  
{
  "id": "46509263",
  "text": "Snake oil salesmen will predict that \"this will be the year when the snake oil you buy from us will cure all ailments\". Nothing new under the sun."
}
,
  
{
  "id": "46511976",
  "text": "Interestingly enough, my understanding is that some snakes in Asia can be used to produce oil that helps joint problems.\n\nAmerican snakes weren't useful for this.\n\nSo something that was sort of useful in a niche application was co-opted by people who didn't know how to make it work and then ultra hyped.\n\nThe parallels are spot on."
}
,
  
{
  "id": "46509934",
  "text": "And publications can expect more readers for breathless hype articles than for sober analyses."
}
,
  
{
  "id": "46508634",
  "text": "I recall someone saying stories of LLMs doing something useful to \"I have a Canadian girlfriend\" stories. Not trying to discredit or be a pessimist, can anyone elaborate how exactly they use these agents while working in interdependent projects in multi-team settings in e.g. regulated industries?"
}
,
  
{
  "id": "46509517",
  "text": "I’m strictly talking about “Agentic” coding here:\n\nThey are not a silver bullet or truly “you don’t need to know how to code anymore” tools. I’ve done a ton of work with Claude code this year. I’ve gone from a “maybe one ticket a week” tier React developer to someone who’s shipped entire new frontend feature sets, while also managing a team. I’ve used LLM to prototype these features rapidly and tear down the barrier to entry on a lot of simple problems that are historically too big to be a single-dev item, and clear out the backlog of “nice to haves” that compete with the real meat and bread of my business. This prototyping and “good enough” development has been massively impactful in my small org, where the hard problems come from complex interactions between distributed systems, monitoring across services, and lots of low-level machine traffic. LLM’s let me solve easy problems and spend my most productive hours working with people to break down the hard problems into easy problems that I can solve later or pass off to someone on my team to help.\n\nI’ve also used LLM to get into other people’s codebases, refactor ancient tech debt, shore up test suites from years ago that are filled with garbage and copy/paste. On testing alone, LLM are super valuable for throwing edge cases at your code and seeing what you assumed vs. what an entropy machine would throw at it.\n\nLLM absolutely are not a 10x improvement in productivity on their own. They 100% cannot solve some problems in a sensible, tractable way, and they frequently do stupid things that waste time and would ruin a poor developer’s attempts at software engineering. 
However, they absolutely also lower the barrier to entry and dethrone “pure single tech” (ie backend only, frontend only, “I don’t know Kubernetes”, or other limited scope) software engineers who’ve previously benefited from super specialized knowledge guarding their place in the business.\n\nSoftware as a discipline has shifted so far from “build functional, safe systems that solve problems” to “I make 200k bike shedding JIRA tickets that require an army of product people to come up with and manage” that LLM can be valuable if only for their capabilities to role-compress and give people with a sense of ownership the tools they need to operate like a whole team would 10 years ago."
}
,
  
{
  "id": "46510977",
  "text": "> However, they absolutely also lower the barrier to entry and dethrone “pure single tech” (ie backend only, frontend only, “I don’t know Kubernetes”, or other limited scope) software engineers who’ve previously benefited from super specialized knowledge guarding their place in the business.\n\nThis argument gets repeated frequently, but to me it seems to be missing final, actionable conclusion.\n\nIf one \"doesn't know Kubernetes\", what exactly are they supposed to do now, having LLM at hand, in a professional setting? They still \"can't\" asses the quality of the output, after all. They can't just ask the model, as they can't know if the answer is not misleading.\n\nAssuming we are not expecting people to operate with implicit delegation of responsibility to the LLM (something that is ultimately not possible anyway - taking blame is a privilege human will keep for a foreseeable future), I guess the argument in the form as above collapses to \"it's easier to learn new things now\"?\n\nBut this does not eliminate (or reduce) a need for specialization of knowledge on the employee side, and there is only so much you can specialize in.\n\nThe bottleneck maybe shifted right somewhat (from time/effort of the learning stage to the cognition and the memory limits of an individual), but the output on the other side of the funnel (of learn->understand->operate->take-responsibility-for) didn't necessary widen that much, one could argue."
}
,
  
{
  "id": "46511717",
  "text": "> If one \"doesn't know Kubernetes\", what exactly are they supposed to do now, having LLM at hand, in a professional setting? They still \"can't\" asses the quality of the output, after all. They can't just ask the model, as they can't know if the answer is not misleading.\n\nThis is the fundamental problem that all these cowboy devs do not even consider. They talk about churning out huge amounts of code as if it was an intrinsically good thing. Reminds me of those awful VB6 desktop apps people kept churning out. Vb6 sure made tons of people nx productive but it also led to loads of legacy systems that no one wanted to touch because they were built by people who didn't know what they were doing . LLMs-for-Code are another tool under the same category."
}
,
  
{
  "id": "46516032",
  "text": "It really depends on whether coding agents is closer to \"compiler\" or not. Very few amongst us verify assembly code. If the program runs and does the thing, we just assume it did the right thing."
}
,
  
{
  "id": "46511842",
  "text": "I don’t think the conclusion is right. Your org might still require enough React knowledge to keep you gainfully employed as a pure React dev but if all you did was changing some forms, this is now something pretty much anyone can do. The value of good FE architecture increased if anything since you will be adding code quicker. Making sure the LLM doesn’t stupidly couple stuff together is quite important for long term success"
}

]
</comments_to_classify>

Based on the comments above, assign each to up to 3 relevant topics.

Return ONLY a JSON array with this exact structure (no other text):
[
  
{
  "id": "comment_id_1",
  "topics": [
    1,
    3,
    5
  ]
}
,
  
{
  "id": "comment_id_2",
  "topics": [
    2
  ]
}
,
  
{
  "id": "comment_id_3",
  "topics": [
    0
  ]
}
,
  ...
]

Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment

Remember: Output ONLY the JSON array, no other text.

commentCount

50
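The prompt above pins down a strict output contract: a bare JSON array of `{id, topics}` objects, 0 to 3 topics per comment, indices in 0–14, one entry per input comment. A response like this can be checked mechanically before the batch result is accepted. The sketch below is illustrative only — it is not part of the logged pipeline, and the function name and error messages are assumptions:

```python
import json

# 0 = "does not fit", 1-14 = the numbered topics in the prompt
VALID_TOPICS = set(range(0, 15))
MAX_TOPICS = 3

def validate_classification(raw: str, expected_ids: set) -> list:
    """Return a list of contract violations for a model response; empty means valid."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as e:
        return ["not valid JSON: %s" % e]
    if not isinstance(rows, list):
        return ["top-level value is not a JSON array"]

    errors, seen = [], set()
    for row in rows:
        if not isinstance(row, dict):
            errors.append("row is not an object: %r" % (row,))
            continue
        cid = row.get("id")
        topics = row.get("topics", [])
        if cid not in expected_ids:
            errors.append("unknown comment id: %r" % cid)
        if cid in seen:
            errors.append("duplicate id: %r" % cid)
        seen.add(cid)
        if len(topics) > MAX_TOPICS:
            errors.append("%s: %d topics (max %d)" % (cid, len(topics), MAX_TOPICS))
        bad = [t for t in topics if t not in VALID_TOPICS]
        if bad:
            errors.append("%s: invalid topic indices %r" % (cid, bad))
    missing = expected_ids - seen
    if missing:
        errors.append("missing ids: %s" % sorted(missing))
    return errors
```

A failed check (unknown id, more than three topics, an index outside 0–14, or a missing comment) would flag the batch for a retry rather than silently storing a partial classification.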
