Summarizer

LLM Input

llm/e6f7e516-f0a0-4424-8f8f-157aae85c74e/batch-3-0c90704d-19ff-4372-86dd-9988bbe83bda-input.json

prompt

The following is content for you to classify. Do not respond to the comments—classify them.

<topics>
1. Reasoning vs. Pattern Matching
   Related: Debates on whether LLMs truly think or merely predict tokens based on training data. Includes comparisons to human cognition, the definition of "reasoning" as argument production versus evaluation, and the argument that LLMs are "lobotomized" without external loops or formalization.
2. AI-Assisted Coding Reality
   Related: Divergent experiences with tools like Claude Code and Codex. While some report massive productivity boosts and shipping entire features solo, others describe "lazy" AI, subtle logic bugs in generated tests (e.g., SQL query validation), and the danger of unverified code bloat.
3. The AI Economic Bubble
   Related: Comparisons to the dot-com crash, with arguments that current valuation relies on "science fiction fantasies" and hype rather than revenue. Counter-arguments suggest the infrastructure (datacenters, GPUs) provides real value similar to the fiber build-out, even if a market correction is imminent.
4. Workforce Displacement and Automation
   Related: Fears and anecdotes regarding job security, including a "Staff SWE" preferring AI to coworkers and contractors losing bids to smaller, AI-equipped teams. Discussions cover the automation of "bullshit jobs," the potential for a "winner take all" economy, and management incentives to cut labor costs.
5. Definition of Agentic Success
   Related: Disagreement over whether AI "joined the workforce." Some argue failing to replace humans entirely (the "secretary" model) is a failure of 2025 predictions, while others claim deep integration as a tool (automating loops, drafting emails) constitutes a successful, albeit different, type of joining.
6. Verification and Hallucination Risks
   Related: The critical need for external validation mechanisms. Commenters note that coding agents succeed because compilers/linters act as truth-checkers, whereas open-ended tasks (spreadsheets, emails) lack rigorous feedback loops, making hallucinations and "truthy" errors dangerous and hard to detect.
7. Impact on Skill and Learning
   Related: Concerns about the long-term effects on human expertise. Topics include "skill atrophy" where juniors bypass learning fundamentals, the educational crisis evidenced by Chegg's collapse, and the difficulty of debugging AI code without deep institutional knowledge or "muscle memory" of the system.
8. Corporate Hype vs. Utility
   Related: Cynicism toward executive predictions (Altman, Hinton) viewed as efforts to pump stock prices or attract investment. Users contrast "corporate puffery" and "vaporware" with the practical, often mundane utility of AI in specific B2B workflows like insurance claim processing or data extraction.
9. Integration into Legacy Systems
   Related: The challenge of applying AI to real-world, messy environments versus greenfield demos. Discussion includes the difficulty of getting agents to work with proprietary codebases, expensive dependencies, lack of documentation for obscure vendor tools, and the failure of browser agents on standard web forms.
10. Formalization of Natural Language
   Related: Theoretical discussions on overcoming LLM limitations by mapping natural language to formal logic or proof systems (like Lean). Skeptics argue human language is too "mushy" or context-dependent for this to be a silver bullet for AGI or perfect reasoning.
11. Medical and Specialized Fields
   Related: Debates on AI in radiology and medicine. While some see potential in automated reporting and "second opinions" to catch errors, professionals argue that current models struggle with complex cases, over-report issues, and lack the nuance required for high-stakes diagnostics.
12. The Secretary vs. Replacement Model
   Related: The shift in expectations from AI as an autonomous employee to AI as a productivity-enhancing assistant. Users describe workflows where humans act as orchestrators or managers of AI output rather than performing the rote work, effectively reviving the role of the personal secretary.
13. Software Engineering Evolution
   Related: Predictions that the discipline is shifting from "writing code" to "managing entropy" and system design. Some view this as empowering "cowboy devs" to move fast, while others fear a future of unmaintainable "vibe coded" software that no human fully understands.
14. Productivity Metrics and Paradoxes
   Related: Skepticism regarding "2x productivity" claims. Commenters argue that generating more code doesn't equal value, noting that debugging, communicating, and context-gathering are the real bottlenecks, and that AI might simply be increasing the volume of low-quality output or "slop."
0. Does not fit well in any category
</topics>

<comments_to_classify>
[
  
{
  "id": "46511374",
  "text": "Wow - quite the escalation for a pretty innocuous conversation."
}
,
  
{
  "id": "46510697",
  "text": "Good point! You should generate a website for them with \"why ai is not good\" articles. Have it explore all possible angles. Make it detective style story with appealing characters."
}
,
  
{
  "id": "46511344",
  "text": "I would also take those studies with a grain of salt at this point, or at least taking into consideration that a model from even a few months ago might have significant enough results from the current frontier models.\n\nAnd in my personal experience it definitely helps in some tasks, and as someone who doesn't actually enjoy the actual coding part that much, it also adds some joy to the job.\n\nRecently I've also been using it to write design docs, which is another aspect of the job that I somewhat dreaded."
}
,
  
{
  "id": "46511788",
  "text": "I think the bigger part of those studies was actually that they were a clear sign that whatever productivity coefficient people were imagining back then was clearly a figment of their imagination, so it's useful to take that lesson with you forward. If people are saying they're 2 times productive with LLMs, it's still likely the case that a large part of that is hyperbole, whatever model they're working with.\n\nIt's the psychology of it that's important, not the tool itself; people are very bad at understanding where they're spending their time and cannot accurately assess the rate at which they work because of it."
}
,
  
{
  "id": "46512127",
  "text": "Which part of the job do you not hate? Writing design docs and code is pretty much the job."
}
,
  
{
  "id": "46512736",
  "text": "I like coming up with the system design and the low level pseudo code, but actually translating it to the specific programming language and remembering the exact syntax or whatnot I find pretty uninspiring.\n\nSame with design docs more or less, translating my thoughts into proper and professional English adds a layer I don't really enjoy (since I'm not exactly great at it), or stuff like formatting, generating a nice looking diagram, etc.\n\nJust today I wrote a pretty decent design doc that took me two hours instead of the usual week+ slog/procrastination, and it was actually fairly enjoyable."
}
,
  
{
  "id": "46506234",
  "text": "> The industry had reason to be optimistic that 2025 would prove pivotal. In previous years, AI agents like Claude Code and OpenAI’s Codex had become impressively adept at tackling multi-step computer programming problems.\n\nBoth of these agents launched mid-2025."
}
,
  
{
  "id": "46506255",
  "text": "don't forget Aider from 2023"
}
,
  
{
  "id": "46506328",
  "text": "Still working hard and now we also have Aider-ce."
}
,
  
{
  "id": "46506502",
  "text": "I’m confused. Claude is a few years old."
}
,
  
{
  "id": "46506606",
  "text": "The parent comment specifically referenced Claude Code, which launched in Feb 2025 [1] and went GA May 2025 [2]. Codex also launched May 2025 [3].\n\n[1] https://www.anthropic.com/news/claude-3-7-sonnet\n\n[2] https://www.anthropic.com/news/claude-4\n\n[3] https://openai.com/index/introducing-codex/"
}
,
  
{
  "id": "46506590",
  "text": "but not Claude Code. it was released just this summer (I guess?)"
}
,
  
{
  "id": "46508930",
  "text": "That makes me wonder how much of the article is true, and how much was hallucinated by an AI."
}
,
  
{
  "id": "46508738",
  "text": "I don't see how AI can bring about 10%+ annual economic growth, let alone infinite abundance, without somehow crossing the bit-to-atom interface. Without a breakthrough in general-purpose robotics - which feels decades away - agents will just be confined to optimizing B2B SaaS. Human utility is rooted in the physical environment. I find digital abundance incredibly uninspiring."
}
,
  
{
  "id": "46510154",
  "text": "I'm mostly a fan of AI coding tools, but I think you're basically right about this.\n\nI think we'll see more specialized models for narrow tasks (think AlphaFold for other challenges in drug discovery, for example) as well, but those will feel like individual, costly, high impact discoveries rather than just generic \"AI\".\n\nOur world is human-shaped and ultimately people who talk of \"AGI\" secretly mean an artificial human.\n\nI believe that \"intelligence\", the way the word is actually used by people, really just means \"skillful information processing in pursuit of individual human desires\".\n\nAs such, it will never be \"solved\" in any other way than to build an artificial human."
}
,
  
{
  "id": "46510934",
  "text": "No, when you bring in the genetic algorithm (something LLM AI can be adjacent to by the scale of information it deals in) you can go beyond human intelligence. I work with GA coding tools pretty regularly. Instead of prompting it becomes all about devising ingenious fitness functions, while not having to care if they're contradictory.\n\nIf superhuman intelligence is solved it'll be in the form of building a more healthy society (or, if you like, a society that can outcompete other societies). We've already seen this sort of thing by accident and we're currently seeing extensive efforts to attack and undermine societies through exploiting human intelligence.\n\nTo a genetic algorithm techie that is actually one way to spur the algorithm to making better societies, not worse ones: challenge it harder. I guess we'll see if that translates to life out here in the wild, because the challenge is real."
}
,
  
{
  "id": "46515762",
  "text": "> If superhuman intelligence is solved it'll be in the form of building a more healthy society (or, if you like, a society that can outcompete other societies).\n\nMaybe so, but the point I'm trying to make is this needs to look nothing like sci-fi ASI fantasies, or rather, it won't look and feel like that before we get the humanoid AI robots that the GP mentioned.\n\nYou can have humans or human institutions using more or less specialized tools that together enable the system to act much more intelligently.\n\nThere doesn't need to be a single system that individually behaves like a god - that's a misconception that comes from believing that intelligence is something like a computational soul, where if you just have more of it you'll eventually end up with a demigod."
}
,
  
{
  "id": "46512564",
  "text": "The troubling thing here is: what is a \"better\" society? As you said, it's just the one that outcompetes the other societies on the globe. We'd like to believe such a thing is an egalitarian \"healthy\" liberal society, but it's just as likely to be some form of enslaved/boot stomping on face society. Some think people won't accept this, but given human history I'm pretty sure they will. I think these sorts of societies are more of a local minima, but they only need to grant enough of a short term boost to unseat the other major powers. Once competition is out of the way they'll probably survive as a bloated mess for quite some time. The price of entry is so high they won't have to worry about being unseated by competition unless they really screw the pooch. I think this is the troubling conclusion a lot of people, including those in power, are reaching."
}
,
  
{
  "id": "46515636",
  "text": "It's worth thinking about, but why hasn't this already happened? Or maybe it already has, and if so, what about AI specifically is it that will make it suddenly much worse?"
}
,
  
{
  "id": "46513446",
  "text": "I tend to agree.\n\nI'm certain neanderthals were just calmer, more empathetic. And then we came along and abused that until they were all gone.\n\nWe're still animals on this planet. We just sing about our conquests afterwards."
}
,
  
{
  "id": "46512764",
  "text": "Robots are coming along. While they may not be human level for a while they are close to being useful for general production."
}
,
  
{
  "id": "46506030",
  "text": "> But for now, I want to emphasize a broader point: I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.\n\n> So, this is how I’m thinking about AI in 2026. Enough of the predictions. I’m done reacting to hypotheticals propped up by vibes. The impacts of the technologies that already exist are already more than enough to concern us for now…\n\nSPOT ON, let us all take inspiration. \"The impacts of the technologies that already exist are already more than enough to concern us for now\"!"
}
,
  
{
  "id": "46506103",
  "text": "a stellar piece, Cal, as always. short and straight to the point.\n\nI believe that Codex and the likes took off (in comparison to e.g. \"AI\" browsers) because the bottleneck there was not reasoning about code, it was about typing and processing walls of text. for a human, the interface of e.g. Google Calendar is ± intuitive. for a LLM, any graphical experience is an absolute hellscape from performance standpoint.\n\nCLI tools, which LLMs love to use, output text and only text, not images, not audio, not videos. LLMs excel at text, hence they are confined to what text can do. yes, multimodal is a thing, but you lose a lot of information and/or context window space + speed.\n\nLLMs are a flawed technology for general, true agents. 99% of the time, outside code, you need eyes and ears. we have only created a self-writing paper yet."
}
,
  
{
  "id": "46506428",
  "text": "Codex and the like took off because there existed a \"validator\" of its work - a collection of pre-existing non-LLM software - compilers, linters, code analyzers etc. And the second factor is very limited and defined grammar of programming languages. Under such constraints it was much easier to build a text generator which will validate itself using external tools in a loop, until generated stream makes sense.\n\nAnd the other \"successful\" industry being disrupted is the one where there is no need validate output, because errors are ok or irrelevant. A text not containing much factual data, like fiction or business-lingo or spam. Or pictures, where it doesn't matter which color is a specific pixel, a rough match will do just fine.\n\nBut outside of those two options, not many other industries can use at scale an imprecise word or media generator. Circular writing and parsing of business emails with no substance? Sure. Not much else."
}
,
  
{
  "id": "46508650",
  "text": "This is the reasoning deficit. Models are very good at generating large quantities of truthy outputs, but are still too stupid to know when they've made a serious mistake. Or, when they are informed about a mistake they sometimes don't \"get it\" and keep saying \"you're absolutely right!\" while doing nothing to fix the problem.\n\nIt's a matter of degree, not a qualitative difference. Humans have the exact same flaws, but amateur humans grow into expert humans with low error rates (or lose their job and go to work in KFC), whereas LLMs are yet to produce a true expert in anything because their error rates are unacceptably high."
}
,
  
{
  "id": "46506842",
  "text": "Besides the ability to deal with text, I think there are several reasons why coding is an exceptionally good fit for LLMs.\n\nOnce LLMs gained access to tools like compilers, they started being able to iterate on code based on fast, precise and repeatable feedback on what works and what doesn't, be it failed tests or compiler errors. Compare this with tasks like composing a powerpoint deck, where feedback to the LLM (when there is one) is slower and much less precise, and what's \"good\" is subjective at best.\n\nAnother example is how LLMs got very adept at reading and explaining existing code. That is an impressive and very useful ability, but code is one of the most precise ways we, as humans, can express our intent in instructions that can be followed millions of times in a nearly deterministic way (bugs aside). Our code is written in thoroughly documented languages with a very small vocabulary and much easier grammar than human languages. Compare this to taking notes in a zoom call in German and trying to make sense of inside jokes, interruptions and missing context.\n\nBut maybe most importantly, a developer must be the friendliest kind of human for an LLM. Breaking down tasks in smaller chunks, carefully managing and curating context to fit in \"memory\", orchestrating smaller agents with more specialized tasks, creating new protocols for them to talk to each others and to our tools.... if it sounds like programming, it's because it is."
}
,
  
{
  "id": "46509589",
  "text": "LLMs are good at coding (well, kinda, sometimes) because programmers gave away their work away for free and created vast training data."
}
,
  
{
  "id": "46510187",
  "text": "I don’t think “giving away” has much to do with it.\n\nI mean we did give away code as training data but we also know that AI companies just took pirated books and media too.\n\nSo I don’t think gifting has much to do with it.\n\nNext all the Copilot users will be “giving away” all their business processes and secrets to Microsoft to clone."
}
,
  
{
  "id": "46511729",
  "text": "I agree with that. For code, most of it was in a \"public space\" similar to driving down a street and training the model on trees and signs etc. The property is not yours but looking at it doesn't require ownership."
}
,
  
{
  "id": "46509478",
"text": "It was not a well thought out piece and it is discounting the agentic progress that has happened.\n\n>The industry had reason to be optimistic that 2025 would prove pivotal. In previous years, AI agents like Claude Code and OpenAI’s Codex had become impressively adept at tackling multi-step computer programming problems.\n\nIt is easy to forget that Claude Code CAME OUT in 2025. The models and agents released in 2025 really DID prove how powerful and capable they are. The predictions were not really wrong. I AM using code agents in a literal fire and forget way.\n\nClaude Code is a hugely capable agentic interface for sovling almost any kind of problem or project you want to solve for personal use. I literally use it as the UX for many problems. It is essentially a software that can modify itself on the fly.\n\nMost people haven't really grasped the dramatic paradigm shift this creates. I haven't come up with a great analogy for it yet, but the term that I think best captures how it feels to work with claude code as a primary interface is \"intelligence engine\".\n\nI'll use an example, I've created several systems harnessed around Claude Code, but the latest one I built is for stock porfolio management (This was primarily because it is a fun problem space and something I know a bit about). Essentially you just used Claude Code to build tools for itself in a domain. Let me show how this played out in this example.\n\nClaude and I brainstorm a general flow for the process and roles. Then figure out what data each role would need, research what providers have the data at a reasonable price.\n\nI purchase the API keys and claude wires up tools (in this case python scripts and documentation for the agents for about 140 api endpoints), then builds the agents and also creates an initial vesrion of the \"skill\" that will invoke the process that looks something like this:\n\nMacro Economist/Strategist -> Fact Checker -> Securities Sourcers -> Analysts (like 4 kinds) -> Fact Checker/Consolidator -> Portfolio Manager\n\nObviously it isn't 100% great on the first pass and I have to lean on expertise I have in building LLM applications, but now I have a Claude Code instance that can orchestrate this whole research process and also handle ad-hoc changes on the fly.\n\nNow I have evolved this system through about 5 significant iterations, but I can do it \"in the app\". If I don't like how part of it is working, I just have the main agent rewire stuff on the fly. This is a completely new way of working on problems."
}
,
  
{
  "id": "46514501",
  "text": "I think it depends on what \"join\" means. I see no reason why it has to be \"replace a human\". People used to have secretaries back in the day, we don't anymore, we all do our own thing, but in a way, LLMs are our secretaries of sorts now. Or our personal executive assitants, even if you're not an executive.\n\nI don't know what else LLMs need to do? get on the payroll? People are using them heavily. You can't even google things easily without triggering an LLM response.\n\nI think the current millenial and older generation is too used to the pre-LLM way of things, so the resistance will be there for a long time to come. but kids doing homeworks with LLMs will rely on them heavily once they're in the work force.\n\nI don't know how people are not as fascinated and excited about this. I keep watching older scifi content, and LLMs are now doing for us what \"futuristic computer persona\" did in older scifi.\n\nEasy example: You no longer need copywriters because of LLMs. You had spell/grammar checkers before, but they didn't \"understand\" context and recommend different phrasing, and check for things like continuity and rambling on."
}
,
  
{
  "id": "46515273",
  "text": "> You no longer need copywriters because of LLMs\n\nYou absolutely do still need copyeditors for anything you actually care about."
}
,
  
{
  "id": "46506630",
  "text": "\"People using AI\" had a meaningful change when they \"joined the workforce\" in 2025.\n\nWe may not have gotten fully-autonomous employees, but human employees using AI are doing way more than they could before, both in depth and scale.\n\nClaude Code is basically a full-time \"employee\" on my (profitable) open source projects, but it's still a tool I use to do all the work. Claude Code is basically a full-time \"employee\" at my job, but it's still a tool I use to do all the work. My workload has shifted to high-level design decisions instead of writing the code, which is kind of exactly what would have happened if AI \"joined the workforce\" and I had a bunch of new hires under me.\n\nI do recognize this article is largely targeted at non-dev workforces though, where it _largely_ holds up but most of my friends outside of the tech world have either gotten new jobs thanks to increased capability through AI or have severely integrated AI into whatever workflows they're doing at work (again, as a tool) and are excelling compared to employees who don't utilize AI."
}
,
  
{
  "id": "46506660",
  "text": "> human employees using AI are doing way more than they could before, both in depth and scale\n\nFunny how that doesn't show up in any productivity or economic metrics..."
}
,
  
{
  "id": "46509015",
  "text": "Bit too soon to tell, no? Claude Code wasn't released until the latter half of Q2, offering little time for it to show up in those figures, and Q3 data is only preliminary right now. Moreover, it seems to be the pairing with Opus 4.5 that lends some credence to the claims. However, it was released in Q4. We won't have that data for quite a while. And like Claude Code, it came late in the quarter, so realistically we really need to wait on Q1 2026 figures, which hasn't happened yet and won't really start to appear until summertime and beyond.\n\nThat said, I expect you are right that we won't see it show up. Even if we assume the claim is true in every way for some people, it only works for exceptional visionaries who were previously constrained by typing speed, which is a very, very, very small segment of the developer population. Any gains that small group realize will be an unrecognizable blip amid everything else. The vast majority of developers need all that typing time and more to have someone come up with their next steps. Reducing the typing time for them doesn't make them any more productive. They were never limited by typing speed in the first place."
}
,
  
{
  "id": "46509649",
  "text": "Humans are doing a bit more, specifically around 20% more.\n\nAI generates output that must be thoroughly check for most software engineering purposes. If you’re not checking the output, then quality and accuracy must not matter. For quick prototyping that’s mostly true. Not for real engineering."
}
,
  
{
  "id": "46509759",
  "text": "> Claude Code is basically a full-time \"employee\" on my (profitable) open source projects,\n\nWhat fulltime employee works for 30 minutes and then stops working for the next 5 hours and 30 minutes like Claude does?"
}
,
  
{
  "id": "46508746",
  "text": "Can you elaborate on this more? What would be a task you would use claude code for, and what would accomplishing the task look like?"
}
,
  
{
  "id": "46506714",
  "text": "> I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.\n\nSo well put.\n\nLLMs are useful for a great many things. It's just that being the best new product of the recent years, maybe even defining a decade doesn't cut it. It has to be the century-defining, world-ending, FOMO-inducing massive thing to put Skynet to shame and justify investments in trillion dollars. It's either AI joining the workforce soon, or Nvidia and OpenAI aren't that valuable.\n\nI guess it manages to maximize shareholder value, and make AI feel like a disappointment."
}
,
  
{
  "id": "46505970",
  "text": "A previous company I worked for is San Francisco was very anti remote, but they announced on linked in that they are ok with remote engineers suddenly. It seems it’s still a workers market at least in SF. I’d AI could do it or even reduced head count I don’t think that would be the case."
}
,
  
{
  "id": "46509973",
  "text": "I think a practical measure still useful right now, which does capture a lot of the \"non-performance\" capabilities of an employee, is as follows:\n\n\"Why has my job not been outsourced yet, since it is far cheaper?\" Those are probably the same reasons why AI won't take your job this year.\n\nRaw coding metrics are a very small part of being a cog in a company, which is not me saying it will never happen. Just me saying that thos focus on coding performance kind of misses the forest for the trees."
}
,
  
{
  "id": "46511017",
  "text": "The adoption of AI tools for software development will probably not result in sudden layoffs but rather on harder to measure changes, like smaller teams being able to tackle significantly more ambitious projects than before.\n\nI suspect that another kind of impact is already happening in organisations where AI adoption is uneven: suddenly some employees appear to be having a lot more leisure time while apparently keeping the same productivity as before."
}
,
  
{
  "id": "46508756",
  "text": "what company?"
}
,
  
{
  "id": "46512755",
  "text": "If you think about the real-world and the key bottleneck with most creative work projects (this includes software), it's usually context (in the broadest sense of the word).\n\nHumans are good at this because they are truly multi-modal and can interact through many different channels to gather additional context to do the requisite task at hand. Given incomplete requirements or specs, they can talk to co-workers, look up old documents from a previous release, send a Slack or Teams message, setup a Zoom meeting with stakeholders, call customers, research competitors, buy a competitors product and try it out while taking notes of where it falls short, make a physical site visit to see the context in which the software is to be used and environmental considerations for operation.\n\nPoint is that humans doing work have all sorts of ways to gather and compile more context before acting or while they are acting that an LLM does not and in some cases cannot have without the assistance of a human. This process in the real world can unfurl over days or weeks or in response to new inputs and our expectation of how LLMs work doesn't align with this.\n\nLLMs can sort of do this, but more often than not, the failure of LLMs is that we are still very bad at providing proper and sufficient context to the LLM and the LLMs are not very good at requesting more context or reacting to new context, changing plans, changing directions, etc. We also have different expectations of LLMs and we don't expect the LLM to ask \"Can you provide a layout and photo of where the machine will be set up and the typical operating conditions?\" and then wait a few days for us to gather that context for it before continuing."
}
,
  
{
  "id": "46505910",
  "text": "The response to the Sal Khan op-ed resonated with me, along with other parts of this article. Something I’ve been digging more into is some of the figures around proposed job losses from AI. I think I even posted a simulation paper last week.\n\nAfter posting that, I came across numerous papers which critique Frey & Osborne’s approach, who are some of the forefathers for the AI job losses figures we see banded around commonly these days. One such paper is here but i can dig out others: https://melbourneinstitute.unimelb.edu.au/__data/assets/pdf_...\n\nIt has made me very cautious around bold statements on AI - and I was already at the cautious end."
}
,
  
{
  "id": "46506076",
  "text": "Job losses aren’t directly tied to productivity, in the short term it’s all about expectations. Many companies are laying people off and then trying to get staff back when it doesn’t work. How much of this is hype and how much is sustained is difficult to determine right now."
}
,
  
{
  "id": "46506485",
  "text": "It never made sense to blame AI in the first place for tech layoffs. You have a new tool that you think can supercharge your employees, make them ~10x productive, be leveraged to disrupt all sorts of industries, and have the workforce best suited to learn and use these tools to their full potential. You think the value of labor may soon collapse, but there are piles of money to be made before that happens.\n\nIf you truly believed that, you would be spinning up new projects and offshoots as this is a serious arms race with a ton of potential upside (not just in developing AI, but in leveraging it to build things cheaper). Allegedly every dollar you spent on an engineer is potentially worth 10x(?) what it was a couple years ago. Meaning your profit per engineer could soar, but tech companies decided they don't want more profit? AI is mostly solved and the value of labor has already collapsed? Or AI is a nice band-aid to prop up a smaller group of engineers while we weather the current economic/political environment and most CXO's don't believe there are piles of money to be had by leveraging AI now or the near future."
}
,
  
{
  "id": "46506638",
  "text": "> you would be spinning up new projects and offshoots\n\nIf the engineers can 10x their output, this actually exposes the product leadership since I find it unlikely that they can 10x the number of revenue generating projects or 10x their product spec development."
}
,
  
{
  "id": "46506740",
  "text": "AI improvements will chase the bottlenecks\n\nif product spec begins to hamper the dev process, guess what'll be the big focus on e.g. that year's YC"
}
,
  
{
  "id": "46509208",
  "text": "I’ve had this same thought, although less well-articulated:\n\nAI is supposedly going to obviate the need for white collar workers, and the best all the CEOs can come up with is the exact current status quo minus the white collar workers?"
}

]
</comments_to_classify>

Based on the comments above, assign each to up to 3 relevant topics.

Return ONLY a JSON array with this exact structure (no other text):
[
  
{
  "id": "comment_id_1",
  "topics": [
    1,
    3,
    5
  ]
}
,
  
{
  "id": "comment_id_2",
  "topics": [
    2
  ]
}
,
  
{
  "id": "comment_id_3",
  "topics": [
    0
  ]
}
,
  ...
]

Rules:
- Each comment can have 0 to 3 topics
- Use 1-based topic indices for matches
- Use index 0 if the comment does not fit well in any category
- Only assign topics that are genuinely relevant to the comment

Remember: Output ONLY the JSON array, no other text.

commentCount: 50
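Replies to batch prompts like the one above are easy to mis-parse: the model may wrap the array in prose, exceed the three-topic limit, or drop a comment id. The sketch below is a hypothetical checker, not part of this job's pipeline, that enforces the stated rules: a bare JSON array, at most 3 topics per comment, 1-based indices with 0 as the catch-all (so valid indices are 0 through 14 for the 14 topics above), and one entry per expected comment id.

```python
import json

# Hypothetical validator for the reply format requested above.
# Assumption (not stated in the job itself): valid topic indices
# run 0 through 14 -- the 14 listed topics plus the 0 "no fit" bucket.
MAX_TOPICS = 3
VALID_INDICES = set(range(15))

def validate_classification(raw: str, expected_ids: set) -> list:
    """Return a list of problems found in the model's reply; empty means valid."""
    try:
        items = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(items, list):
        return ["top-level value is not an array"]
    problems, seen = [], set()
    for item in items:
        if not isinstance(item, dict):
            problems.append("entry is not an object")
            continue
        cid, topics = item.get("id"), item.get("topics", [])
        if cid not in expected_ids:
            problems.append(f"unexpected id {cid!r}")
        if cid in seen:
            problems.append(f"duplicate id {cid!r}")
        seen.add(cid)
        if len(topics) > MAX_TOPICS:
            problems.append(f"{cid}: more than {MAX_TOPICS} topics")
        if any(t not in VALID_INDICES for t in topics):
            problems.append(f"{cid}: topic index out of range")
    missing = expected_ids - seen
    if missing:
        problems.append(f"missing ids: {sorted(missing)}")
    return problems
```

In practice `expected_ids` would be the full set of 50 comment ids from the batch input, so any hallucinated or dropped id surfaces before the labels are stored.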
