Summarizer

LLM Input

llm/5888b8dc-b96e-4444-9c3c-465dde409e92/topic-0-f4c9a2c9-e9ce-4cf9-9bdc-df691491355a-input.json

prompt

You are a comment summarizer. Given a topic and a list of comments tagged with that topic, write a single paragraph summarizing the key points and perspectives expressed in the comments.

TOPIC: AI productivity claims skepticism

COMMENTS:
1. "You're holding it wrong"

99% of an LLM's usefulness vanishes, if it behaves like an addled old man.

"What's that sonny? But you said you wanted that!"

"Wait, we did that last week? Sorry let me look at this again"

"What? What do you mean, we already did this part?!"

2. Wrong mental model. Addled old men can't write code 1000x faster than any human.

3. I'd prefer 1x "wrong stuff" than wrong stuff blasted 1000x. How is that helpful?

Further, they can't write code that fast, because you have to spend 1000x explaining it to them.

4. A simple "how do I access x in y framework in the intended way" shouldnt require any more context.

instead of telling me about z option it keeps hallucinating something that doesnt exist and even says its in the docs when it isnt.

Literally just wasting my time

5. they have a data bank the size of the internet so they can
pull hints that sometimes surprise even experienced devs.

That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed resources that humans would use to other find answers." I just discovered another victim: the Renesas forums. Cloudflare is blocking me from accessing the site completely, the only site I've ever had this happen to. But I'm glad you're able to have your fun.

it might turn out the balance is something like 25% handmade - 75% LLM made.

Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software.

6. > That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed resources that humans would use to other find answers."

Yes, but I can't stop them, can you?

> But I'm glad you're able to have your fun.

Unfortunately I have to be practical.

> Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software.

Almost all these BigCos are using their internal code bases as material for their own LLMs. They're also increasingly instructing their devs to code primarily using LLMs.

The hope that they'll run out of relevant material is slim.

Oh, and at this point it's less about the core/kernel/LLMs than it is about building ol' fashioned procedural tooling aka code around the LLM, so that it can just REPL like a human. Turns out a lot of regular coding and debugging is what a machine would do, READ-EVAL-PRINT.

I have no idea how far they're going to go, but the current iteration of Claude Code can generate average or better code, which is an improvement in many places.

7. On top of that there's a not insignificant chance you've actually just stolen the code through an automated copyright whitewashing system. That these people believe they're adding value while never once checking if the above is true really disappoints me with the current direction of technology.

LLMs don't make everyone better, they make everything a copy.

The upwards transfer of wealth will continue.

8. Actually, the invention of the printing press in 1450 created a similar disruption, economic panic and institutional fear similar to what we're experiencing now:

For centuries, the production of books was the exclusive domain of professional scribes and monks. To them, the printing press was an existential threat.

Job Displacement: Scribes in Paris and other major cities reportedly went on strike or petitioned for bans, fearing they would be driven into poverty.

The "Purity" Argument: Some critics argued that hand-copying was a spiritual act that instilled discipline, whereas the press was "mechanical" and "soulless."

Aesthetic Elitism: Wealthy bibliophiles initially looked down on printed books as "cheap" or "ugly" compared to hand-illuminated manuscripts. Some collectors even refused to allow printed books in their libraries to maintain their prestige.

Sound familiar?

From "How the Printing Press Reshaped Associations" -- https://smsonline.net.au/blog/how-the-printing-press-reshape... and

"How the Printing Press Changed the World" -- https://www.koolchangeprinting.com/post/how-the-printing-pre...

9. The point being missed is the printing press led to tens of millions of jobs and billions of dollars in revenue.

So far, when a new technology is introduced that people were initially afraid of, end up creating a whole new set of jobs and industries.

10. Respect to you. I ran out of energy to correct people's dated misconceptions. If they want to get left behind, it's not my problem.

11. At some point no-one is going to have to argue about this. I'm guessing a bit here, but my guess is that within 5 years, in 90%+ jobs, if you're not using an AI assistant to code, you're going to be losing out on jobs. At that point, the argument over whether they're crap or not is done.

I say this as someone who has been extremely sceptical over their ability to code in deep, complicated scenarios, but lately, claude opus is surprising me. And it will just get better.

12. Surely searching "centre a div" takes less time than prompting and waiting for a response...

13. Search “centre a div” in Google

Wade through ads

Skim a treatise on the history of centering content

Skim over the “this question is off topic / duplicate” noise if Stack Overflow

Find some code on the page

Try to map how that code will work in the context of your other layout

Realize it’s plain CSS and you’re looking for Tailwind

Keep searching

Try some stuff until it works

Or…

Ask LLM. Wait 20-30 seconds. Move on to the next thing.

14. > Search “centre a div” in Google

Aaand done. Very first result was a blog post showing all the different ways to do it, old and new, without any preamble.

15. Or, given that OP is presumably a developer who just doesn't focus fully on front end code they could skip straight to checking MDN for "center div" and get a How To article ( https://developer.mozilla.org/en-US/docs/Web/CSS/How_to/Layo... ) as the first result without relying on spicy autocomplete.

Given how often people acknowledge that ai slop needs to be verified, it seems like a shitty way to achieve something like this vs just checking it yourself with well known good reference material.

16. LLMs work very well for a variety of software tasks — we have lots of experience around the industry now.

If you haven’t been convinced by pure argument in 2026 then you probably won’t be. But the great thing is you don’t have to take anyone’s word for it.

This isn’t crypto, where everyone using it has a stake in its success.
You can just try it, or not.

17. That's a lot of words to say "trust me bruh" which is kind of poetic given that's the entire model (no pun intended) that LLMs work on.

18. Can't believe this car bubble has lasted so long. It's gonna pop any decade now!

19. I’ve noticed this too at work.

If keep the change’s focused I can iterate far faster with ideas because it can type faster than I can.

20. This is exactly the case. Businesses in the past wouldn't automate some process because they couldn't afford to develop it. Now they can! Which frees up resources to tackle something else on the backlog. It's pretty exciting.

21. Aren't you afraid it's gonna be a race to the bottom ? the software industry is now whoever pays gemini to deploy something prompted in a few days. Everybody can, so the market will be inundated by a lot of people, and usually this makes for a bad market (a few shiny one gets 90% of the share while the rest fight for breadcrumbs)

I'm personally more afraid that stupid sales oriented will take my job instead of losing it to solid teams of dedicated expert that invested a lot of skills in making something on their own. it seems like value inversion

22. My prediction is that software will be so cheap that very soon, economy of scale gives way to maximum customization which means everyone writes their own software. There will be no software market in the future.

23. Anything that can be done in 2 days now with an LLM was low hanging fruit to begin with.

24. ‘Why were they long term?’ is what you need to ask. Code has become essentially free in relative terms, both in time and money domains. What stands out now is validation - LLMs aren’t oracles for better or worse, complex code still needs to be tested and this takes time and money, too. In projects where validation was a significant percentage of effort (which is every project developed by more than two teams) the speed up from LLM usage will be much less pronounced… until they figure out validation, too; and they just might with formal methods.

25. anything nontrivial is still long term, nothing has changed

26. > I felt in love with the process to be honest. I complained my wife yesterday: "my only problem now is that I don't have enough time and money to pay all the servers", because it opened to me the opportunities to develop and deploy a lot of new ideas.

What opportunities? You aren't going to make any money with anything you vibe coded because, even the people you are targeting don't vibe code it, the minute you have even a risk of gaining traction someone else is going to vibe code it anyway .

And even if that didn't happen you're just reducing the signal/noise ratio; good luck getting your genuinely good product out there when the masses are spammed by vibe-coded alternatives.

When every individual can produce their own software, why do you think that the stuff produced by you is worth paying for?

27. > That might be true, but it doesn't have to be immediately true. It's an arbitrage problem: seeing a gap, knowing you can apply this new tool to make a new entrant, making an offering at a price that works for you, and hoping others haven't found a cheaper way or won the market first. In other words, that's all business as usual.

I'm hearing what you are saying, but the "business as usual" way almost always requires some money or some time (which is the same thing). The ones that don't (performance arts, for example) average a below-minimum-wage pay!

IOW, when the cost of production is almost zero, the market adjusts very quickly to reflect that. What happens then is that a few lottery ticket winners make bank, and everyone else does it for free (or close to it).

You're essentially hoping to be one of those lottery ticket winners.

> How does Glad sell plastic bags when there are thousands of other companies producing plastic bags, often for far, far less?

The cost of production of plastic bags is not near zero, and the requirements for producing plastic bags (i.e. cloning the existing products) include substantial capital.

You're playing in a different market, where the cost of cloning your product is zero.

There's quite a large difference between operating in a market where there is a barrier (capital, time and skill) and operating in a market where there are no capital, time or skill barriers.

The market you are in is not the same as the ones you are comparing your product to. The better comparison is artists, where even though there is a skill and time barrier, the clear majority of the producers do it as a hobby, because it doesn't pay enough for them to do it as a job.

28. You're overestimating people's willingness to write code even if they don't have to do it. Most people just don't want to do it even if AI made is easy to do so. Not sure who you're talking to but most people I know that aren't programmers have zero interest in writing their own software even if they could do it using prompts only.

29. I don't like it. It lets "management" ignore their actual jobs - the ones that are nominally so valuable that they get paid more than most engineers, remember - and instead either splash around in the kiddie pool, or go jump into the adult pool and then almost drown and need an actual engineer to bail them out. (The kiddie pool is useless side project, the adult pool is the prod codebase, and drowning is either getting lost in the weeds of "it compiles and I'm done! Now how do I merge and how do I know if I'm not going to break prod?" or just straight up causing an incident and they're apologizing profusely for ruining the oncall's evening except that both of them know they're gonna do it again in 2 weeks).

I really don't know how often I have to tell people, especially former engineers who SHOULD KNOW THIS (unless they were the kind of fail-upwards pretenders): the code is not the slow part! (Sorry, I'm not yelling at you , reader. I'm yelling at my CEO.)

30. > If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did "cook" it.

The person who actually cooked it cooked it. Being the "catalyst" doesn't make you the creator, nor does it mean you get to claim that you did the work.

Otherwise you could say you "cooked a meal" every time you went to MacDonald's.

31. > Even with refinement and back-and-forth prompting, I’m easily 10x more productive

Developers notoriously overestimate the productivity gains of AI, especially because it's akin to gambling every time you make a prompt, hoping for the AI's output to work.

I'd be shocked if the developer wasn't actually less productive.

32. For personal projects, 10x is a lower bound. This year alone I got several projects done that had been on my mind for years .

The baseline isn't what it would have taken had I set aside time to do it.[1] The baseline is reality . I'm easily getting 10x more projects done than in the past.

For work, I totally agree with you.

[1] Although it's often true even in this case. My first such project was done in 15 minutes. Conceptually it was an easy project. Had I known all the libraries, etc out would have taken about an hour. But I didn't, and the research alone would have taken hours.

And most of the knowledge acquired from that research would likely be useless.

33. I accept there are productivity gains, but it's hard to take "10x" seriously. It's such a tired trope. Is no one humble enough to be a meager 2.5x engineer?

34. Even 2.5x is absurd. If they said 1.5x I might believe them.

35. I'm building an AI agent for Godot, and in paid user testing we found the median speed up time to complete a variety of tasks[0] was 2x. This number was closer to 10x for less experienced engineers

[0] tasks included making games from scratch and resolving bugs we put into template projects. There's no perfect tasks to test on, but this seemed sufficient

36. As you get deeper beyond the starter and bootstrap code it definitely takes a different approach to get value.

This is in part because context limits of large code bases and because the knowledge becomes more specialized and the LLM has no training on that kind of code.

But people are making it work, it just isn't as black and white.

37. That’s the issue, though, isn’t it? Why isn’t it black and white? Clear massive productivity gains at Google or MS and their dev armies should be visible from orbit.

Just today on HN I’ve seen claims of 25x and 10x and 2x productivity gains. But none of it starting with well calibrated estimations using quantifiable outcomes, consistent teams, whole lifecycle evaluation, and apples to apples work.

In my own extensive use of LLMs I’m reminded of mouse versus command line testing around file navigation. Objective facts and subjective reporting don’t always line up, people feel empowered and productive while ‘doing’ and don’t like ‘hunting’ while uncertain… but our sense of the activity and measurable output aren’t the same.

I’m left wondering why a 2x Microsoft of OpenAI would ever sell their competitive advantage to others. There’s infinite money to be made exploiting such a tech, but instead we see highschool homework, script gen, and demo ware that is already just a few searches away and downloadable.

LLMs are in essence copy and pasting existing work while hopping over uncomfortable copyright and attribution qualms so devs feel like ‘product managers’ and not charlatans. Is that fundamentally faster than a healthy stack overflow and non-enshittened Google? Over a product lifecycle? … ‘sometimes, kinda’ in the absence of clear obvious next-gen production feels like we’re expecting a horse with a wagon seat built in to win a Formula 1 race.

38. I'm sure there is plenty of language parsers written in Haskell in the training data. Regardless, the question isn't if LLMs can generate code (they clearly can), it's if agentic workflows are superior to writing code by hand.

39. I estimated that i was 1.2x when we only had tab completion models. 1.5x would be too modest. I've done plenty of ~6-8 hour tasks in ~1-2 hours using llms.

40. Indeed. I just did a 4-6 month refactor + migration project in less than 3 weeks.

41. I recently used AI to help build the majority of a small project (database-driven website with search and admin capabilities) and I'd confidently say I was able to build it 3 to 5 times faster with AI. For context, I'm an experienced developer and know how to tweak the AI code when it's wonky and the AI can't be coerced into fixing its mistakes.

42. 10x probably means “substantial gain”. There is no universal unit of gain.

However if the difference is between doing a project vs not doing is, then the gain is much more than 10x.

43. There is no x is because LLM performance is non deterministic. You get slop out at varying degrees of quality and so your job shifts from writing to debugging.

44. I don't know what to tell you, it's just true. I have done what was previously days of BI/SQL dredging and visualizing in 20 minutes. You can be shocked and skeptical but it doesn't make it not true.

45. Numbers don't matter if it makes you "feel" more productive.

I've started and finished way more small projects i was too lazy to start without AI. So infinitely more productive?

Though I've definitely wasted some time not liking what AI generated and started a new chat.

46. And you find yourself less productive?

47. Perhaps it is a skill issue. But I don't really see the point of trying when it seems like the gains are marginal. If agent workflows really do start offering 2x+ level improvements then perhaps I'll switch over, in the meantime I won't have to suffer mental degradation from constant LLM usage.

48. > I'd be shocked if the developer wasn't actually less productive

I agree 10x is a very large number and it's almost certainly smaller—maybe 1.5x would be reasonable. But really? You would be shocked if it was above 1.0x? This kind of comment always strikes me as so infantilizing and rude, to suggest that all these developers are actually slower with AI, but apparently completely oblivious to it and only you know better.

49. I would never suggest that only I know better. Plenty of other people are observing the same thing, and there is also research backing it up.

Maybe shocked is the wrong term. Surprised, perhaps.

50. There are simply so many counterexamples out there of people who have developed projects in a small fraction of the time it would take manually. Whether or not AI is having a positive effect on productivity on average in the industry is a valid question, but it's a statistical one. It's ridiculous to argue that AI has a negative effect on productivity in every single individual case.

51. It's all talk and no evidence.

52. We’re seeing no external indicators of large productivity gains. Even assuming that productivity gains in large corporations are swallowed up by inefficiencies, you’d expect externally verifiable metrics to show a 2x or more increase in productivity among indie developers and small companies.

So far it’s just crickets.

53. This author simultaneously admits, he cannot hold the system in his head, but then also claims he’s not vibecoding, and I assert that these are two conflicting positions and you cannot simultaneously hold both positions

I am also doing my pattern recognition. It seems that a common pattern is people claiming it sped me up by x! (and then there’s no AB test, n=1)

54. Either the projects he's working on are side projects, and in that case I don't see why he would need to use the complex pipelines, just Vanilla JS and PHP still work super fine, even better nowadays actually, or the projects are professional ones and then to ship code written by AI is extremely dangerous and he should have resources (time and people) to do things properly without AI. So, I'm clearly not convinced.

55. Maybe it is „very” professional, so he is part of one of hundreds of teams and he is creating micro parts of big system and with such setup he is easily hiding in ocean of very low performing people.
In many big setups there are so-called microservices that in reality are picoservices doing function of 1-2 method and 1-2 tables in db.

Either way - the setup looks nice and is one of very few that really shows how to make things work. A lot of people say about 5-10x improvements not showing even the prompts, because probably they made some 2 model CRUD that probably can be already made with 20 lines of code in Django.

56. Going in 2026, the frontend has many good options, but AI is not one of them.

We have many typesafe (no, not TypeScript!) options with rock solid dev tooling, and fast compilers.

AI is just a badaid, its not the road you want to travel.

57. I have this suspicion that the people who say they have 10x productivity gains from AI might largely see improvements from a workflow change which fixes their executive dysfunction. Back in the day I never had any issue just sitting down and coding something out for 4 hours straight. So I don’t think LLMs feel quite as big for me. But I can see the feeling of offloading effort to a computer when you have trouble getting started on a sub-task being a good trick to keep your brain engaged.

I’ve personally seen LLMs be huge time savers on specific bugs, for writing tests, and writing boilerplate code. They’re huge for working in new frameworks that roughly map to one you already know. But for the nitty gritty that ends up being most of the work on a mature product where all of the easy stuff is already done they don’t provide as big of a multiplier.

58. LLMs as a body double for executive dysfunction is a great insight. I see chronic examples of corporate-sponsored executive dysfunction: striped calendars, constant pings and interruptions, emergency busywork, fire drills. It's likely that LLMs aren't creating productivity as much as they're removing starting inhibition and helping to maintain the thread through context switching. What's presented as a magical tool, which LLMs can be in the areas you mentioned, is also presented as a panacea for situations that simply don't promote good programming hygiene.

59. Strong agree! Forget all those studies that say “but developers are slower” or whatever — I’m actually building way more hobby projects and having way more fun now. And work is way more fun and easier. And my node_modules folder size is dropping like crazy!

60. it is fun again because we can remove ourselves completely from it?
seems like web enthusiast are always the first to drop ship huh.
"llms good because I no longer have to interface with this steaming pile of shit that web development has become", not because the web ecosystem has improved by any metric.

61. > I’m easily 10x more productive with AI than without it.

So you've shipped 10x the products? Can you list them?

62. > Clicks, expecting some new spec or framework that actually made web dev fun again

> Looks inside

> "AI has entered the chat"

What did I even expect. I wonder how many clickbait posts of this type are gonna make the HN front page.

63. Web development is perhaps "fun" again if you consider PHP 4 and jQuery as "fun". A "problem" arises for those of us who prefer Ruby, Rails, and HotWire.

I'm not gonna lie, I use AI every day (in the form of Grammarly). But LLMs and so-called "agents" are less valuable to me, even if they would help me to produce more "output".

It will be interesting to me to discover the outcome of this bifurcation!

64. We need better chatbots to fix the bugs from the current chatbots that fixed the bugs from the previous chatbots when they fixed the bugs from the previous generation of chatbots that…..

Just give Sam Altman more and more of your money and he’ll make a more advanced chatbot to fix the chatbot he sold you that broke everything.

You don’t even need to own a computer, just install an app on your phone to do it all. It doesn’t matter that regular people have been completely priced out of personal computing when GPT is just gonna do all the computing anymore anyway.

Clearly a sustainable way forward for the industry.

65. >>Starting a new project once felt insurmountable. Now, it feels realistic again.

Honestly, this does not give me confidence in anything else you said. If you can't spin up a new project on your own in a few minutes, you may not be equipped to deal with or debug whatever AI spins up for you.

>>When AI generates code, I know when it’s good and when it’s not. I’v seen the good and the bad, and I can iterate from there. Even with refinement and back-and-forth prompting, I’m easily 10x more productive

Minus a baseline, it's hard to tell what this means. 10x nothing is nothing. How am I supposed to know what 1x is for you, is there a 1x site I can look at to understand what 10x would mean? My overall feeling prior to reading this was "I should hire this guy", and after reading it my overwhelming thought was "eat a dick, you sociopathic self-aggrandizing tool." Moreover, if you have skill which you feel is augmented by these tools, then you may want to lean more heavily on that skill now if you think that the tool itself makes everyone capable of writing the same amazing code you do. Because it sounds like you will be unemployed soon if not already, as a casualty of the nonsense engine you're blogging about and touting.

Write a concise, engaging paragraph (3-5 sentences) that captures the main ideas, notable perspectives, and overall sentiment of these comments regarding the topic. Focus on the most interesting and representative viewpoints. Do not use bullet points or lists - write flowing prose.

topic

AI productivity claims skepticism

commentCount

65

← Back to job