Summarizer

LLM Input

llm/fa6df919-50f4-440a-804d-6a9d3e9721d8/topic-1-3f83278a-6b21-4cf2-9770-216ea94d4966-input.json

prompt

The following is content for you to summarize. Do not respond to the comments—summarize them.

<topic>
Productivity Claims Skepticism # Debates over whether 10x productivity gains are real or exaggerated, with critics noting lack of controlled studies and potential for gambling-like dopamine hits from prompting
</topic>

<comments_about_topic>
1. Wrong mental model. Addled old men can't write code 1000x faster than any human.

2. I'd prefer 1x "wrong stuff" than wrong stuff blasted 1000x. How is that helpful?

Further, they can't write code that fast, because you have to spend 1000x explaining it to them.

3. I too have found this. However, I absolutely love being able to mock up a larger idea in 30 minutes to assess feasibility as a proof of concept before I sink a few hours into it.

4. Surely searching "centre a div" takes less time than prompting and waiting for a response...

5. > Search “centre a div” in Google

Aaand done. Very first result was a blog post showing all the different ways to do it, old and new, without any preamble.

6. LLMs work very well for a variety of software tasks — we have lots of experience around the industry now.

If you haven’t been convinced by pure argument in 2026 then you probably won’t be. But the great thing is you don’t have to take anyone’s word for it.

This isn’t crypto, where everyone using it has a stake in its success.
You can just try it, or not.

7. That's a lot of words to say "trust me bruh" which is kind of poetic given that's the entire model (no pun intended) that LLMs work on.

8. I’ve noticed this too at work.

If I keep the changes focused I can iterate far faster on ideas because it can type faster than I can.

9. This is exactly the case. Businesses in the past wouldn't automate some process because they couldn't afford to develop it. Now they can! Which frees up resources to tackle something else on the backlog. It's pretty exciting.

10. Anything that can be done in 2 days now with an LLM was low hanging fruit to begin with.

11. ‘Why were they long term?’ is what you need to ask. Code has become essentially free in relative terms, both in time and money domains. What stands out now is validation - LLMs aren’t oracles for better or worse, complex code still needs to be tested and this takes time and money, too. In projects where validation was a significant percentage of effort (which is every project developed by more than two teams) the speed up from LLM usage will be much less pronounced… until they figure out validation, too; and they just might with formal methods.

12. Some long-term projects were due to the tons of details in source code, but some were due to inherent complexity and to figuring out how to model something that works, no matter what the files' content will be.

13. Anything nontrivial is still long term; nothing has changed.

14. I don't like it. It lets "management" ignore their actual jobs - the ones that are nominally so valuable that they get paid more than most engineers, remember - and instead either splash around in the kiddie pool, or go jump into the adult pool and then almost drown and need an actual engineer to bail them out. (The kiddie pool is useless side project, the adult pool is the prod codebase, and drowning is either getting lost in the weeds of "it compiles and I'm done! Now how do I merge and how do I know if I'm not going to break prod?" or just straight up causing an incident and they're apologizing profusely for ruining the oncall's evening except that both of them know they're gonna do it again in 2 weeks).

I really don't know how often I have to tell people, especially former engineers who SHOULD KNOW THIS (unless they were the kind of fail-upwards pretenders): the code is not the slow part! (Sorry, I'm not yelling at you, reader. I'm yelling at my CEO.)

15. Yes, people who were at best average engineers and those that atrophied at their skill through lack of practice seem to be the biggest AI fanboys in my social media.

It's telling, isn't it?

16. > Even with refinement and back-and-forth prompting, I’m easily 10x more productive

Developers notoriously overestimate the productivity gains of AI, especially because it's akin to gambling every time you make a prompt, hoping for the AI's output to work.

I'd be shocked if the developer wasn't actually less productive.

17. For personal projects, 10x is a lower bound. This year alone I got several projects done that had been on my mind for years.

The baseline isn't what it would have taken had I set aside time to do it.[1] The baseline is reality. I'm easily getting 10x more projects done than in the past.

For work, I totally agree with you.

[1] Although it's often true even in this case. My first such project was done in 15 minutes. Conceptually it was an easy project. Had I known all the libraries, etc., it would have taken about an hour. But I didn't, and the research alone would have taken hours.

And most of the knowledge acquired from that research would likely be useless.

18. I accept there are productivity gains, but it's hard to take "10x" seriously. It's such a tired trope. Is no one humble enough to be a meager 2.5x engineer?

19. Even 2.5x is absurd. If they said 1.5x I might believe them.

20. I'm building an AI agent for Godot, and in paid user testing we found the median speed-up in time to complete a variety of tasks[0] was 2x. This number was closer to 10x for less experienced engineers.

[0] tasks included making games from scratch and resolving bugs we put into template projects. There's no perfect tasks to test on, but this seemed sufficient

21. That’s the issue, though, isn’t it? Why isn’t it black and white? Clear massive productivity gains at Google or MS and their dev armies should be visible from orbit.

Just today on HN I’ve seen claims of 25x and 10x and 2x productivity gains. But none of it starting with well calibrated estimations using quantifiable outcomes, consistent teams, whole lifecycle evaluation, and apples to apples work.

In my own extensive use of LLMs I’m reminded of mouse versus command line testing around file navigation. Objective facts and subjective reporting don’t always line up, people feel empowered and productive while ‘doing’ and don’t like ‘hunting’ while uncertain… but our sense of the activity and measurable output aren’t the same.

I’m left wondering why a 2x Microsoft or OpenAI would ever sell their competitive advantage to others. There’s infinite money to be made exploiting such a tech, but instead we see highschool homework, script gen, and demo ware that is already just a few searches away and downloadable.

LLMs are in essence copy and pasting existing work while hopping over uncomfortable copyright and attribution qualms so devs feel like ‘product managers’ and not charlatans. Is that fundamentally faster than a healthy stack overflow and non-enshittened Google? Over a product lifecycle? … ‘sometimes, kinda’ in the absence of clear obvious next-gen production feels like we’re expecting a horse with a wagon seat built in to win a Formula 1 race.

22. I'm sure there are plenty of language parsers written in Haskell in the training data. Regardless, the question isn't whether LLMs can generate code (they clearly can), it's whether agentic workflows are superior to writing code by hand.

23. I estimated that I was 1.2x when we only had tab-completion models. 1.5x would be too modest. I've done plenty of ~6-8 hour tasks in ~1-2 hours using LLMs.

24. Indeed. I just did a 4-6 month refactor + migration project in less than 3 weeks.

25. I recently used AI to help build the majority of a small project (database-driven website with search and admin capabilities) and I'd confidently say I was able to build it 3 to 5 times faster with AI. For context, I'm an experienced developer and know how to tweak the AI code when it's wonky and the AI can't be coerced into fixing its mistakes.

26. What's the link?

27. 10x probably means “substantial gain”. There is no universal unit of gain.

However, if the difference is between doing a project vs not doing it, then the gain is much more than 10x.

28. I don't know what to tell you, it's just true. I have done what was previously days of BI/SQL dredging and visualizing in 20 minutes. You can be shocked and skeptical but it doesn't make it not true.

29. Numbers don't matter if it makes you "feel" more productive.

I've started and finished way more small projects I was too lazy to start without AI. So infinitely more productive?

Though I've definitely wasted some time not liking what AI generated and started a new chat.

30. > Numbers don't matter

Yes that's already been well established.

31. No but I don't use it to generate code usually.

I gave agents a solid go and I didn't feel more productive, just became more stupid.

32. Perhaps it is a skill issue. But I don't really see the point of trying when it seems like the gains are marginal. If agent workflows really do start offering 2x+ level improvements then perhaps I'll switch over, in the meantime I won't have to suffer mental degradation from constant LLM usage.

33. I appreciate your reply. A lot of people just say how wonderful and revolutionary LLMs are, but when asked for more concrete stuff they give vague answers or even worse, berate you for being skeptical/accuse you of being a luddite.

Your list gives me a starting point and I'm sure it can even be expanded. I do use LLMs the way you suggested and find them pretty useful most of the time - in chat mode. However, when using them in "agent mode" I find them far less useful.

34. One of my favorite engineers calls AI a "wish fulfillment slot machine."

35. Mmm, I do a lot of frontend work but I find writing the frontend code myself is faster. That seems to be mostly what everyone says it's good for. I find it useful for other stuff like writing mini scripts, figuring out arguments for command line tools, reviewing code, generating dumb boilerplate code, etc. Just not for actually writing code.

36. > I'd be shocked if the developer wasn't actually less productive

I agree 10x is a very large number and it's almost certainly smaller—maybe 1.5x would be reasonable. But really? You would be shocked if it was above 1.0x? This kind of comment always strikes me as so infantilizing and rude, to suggest that all these developers are actually slower with AI, but apparently completely oblivious to it and only you know better.

37. I would never suggest that only I know better. Plenty of other people are observing the same thing, and there is also research backing it up.

Maybe shocked is the wrong term. Surprised, perhaps.

38. There are simply so many counterexamples out there of people who have developed projects in a small fraction of the time it would take manually. Whether or not AI is having a positive effect on productivity on average in the industry is a valid question, but it's a statistical one. It's ridiculous to argue that AI has a negative effect on productivity in every single individual case.

39. It's all talk and no evidence.

40. We’re seeing no external indicators of large productivity gains. Even assuming that productivity gains in large corporations are swallowed up by inefficiencies, you’d expect externally verifiable metrics to show a 2x or more increase in productivity among indie developers and small companies.

So far it’s just crickets.

41. For most of my AI uses, I already have an implementation in mind. The prompt is small enough that most of the time, the agent would get it 90% there. In a way, it's basically an advanced autocomplete.

I think this is quite nice cause it doesn't feel like code review. It's more of a: did it do it? Yes? Great. Somewhat? Good enough, I can work from there. And when it doesn't work, I just scrap that and re-prompt or implement it manually.

But I do agree with what you say. When someone uses AI without making the code their own, it's a nightmare. I've had to review some PRs where I feel like I'm prompting AI rather than an engineer. I did wonder if they simply put my reviews directly to some agent...

42. I've come to realise that not only do I hate reading stuff written by AI. I also hate reading stuff praising AI. They all say the same thing. It's just boring.

43. Same here. I wrote this comment before I saw yours: https://news.ycombinator.com/item?id=46496990

It really brings no value. I'm not learning anything new here. And the discussion around it is always the same thing.

44. Maybe it is "very" professional, so he is part of one of hundreds of teams, creating micro parts of a big system, and with such a setup he can easily hide in an ocean of very low-performing people.
In many big setups there are so-called microservices that in reality are picoservices doing the function of 1-2 methods with 1-2 tables in the db.

Either way, the setup looks nice and is one of very few that really shows how to make things work. A lot of people talk about 5-10x improvements without even showing the prompts, probably because they made some 2-model CRUD that could already be built with 20 lines of code in Django.

45. This author simultaneously admits he cannot hold the system in his head, but then also claims he’s not vibecoding. I assert that these are two conflicting positions and you cannot hold both simultaneously.

I am also doing my pattern recognition. It seems that a common pattern is people claiming “it sped me up by X!” (and then there’s no A/B test, n=1).

46. “I can reliably reproduce their coding standards, tone of voice, tactics, and processes.”

Doesn’t he mean the “AI tool” can reliably reproduce his friend’s coding practices? Hilariously ironic if so.

47. I kinda feel the same way. Don't get me wrong, I'm a developer at soul level, I absolutely love programming, but I love getting shit done even more: automating things, taking the human out of the equation and putting the computer to do it, and AI lets me do that. I work in cybersecurity as a WAF admin, my job is 100% that, but I'm also the only developer, so anything that needs to be scripted or developed I get to do. One week I created 4 different scripts with Gemini Canvas to automate some tedious work. It took me, I don't know, 3 hours? Instead of 1 or 2 weeks? Yeah, sign me up.

48. It's also trading one problem for another. When manually coding, you understand with little mental effort what you want to achieve, the nuances and constraints, how something interacts with other moving parts, and your problem is implementing the solution.

When generating a solution, you need to explain in excruciating detail the things that you just know effortlessly. It's a different kind of work, but it's still work, and it's more annoying and less rewarding than just implementing the solution yourself.

49. I have this suspicion that the people who say they have 10x productivity gains from AI might largely see improvements from a workflow change which fixes their executive dysfunction. Back in the day I never had any issue just sitting down and coding something out for 4 hours straight. So I don’t think LLMs feel quite as big for me. But I can see the feeling of offloading effort to a computer when you have trouble getting started on a sub-task being a good trick to keep your brain engaged.

I’ve personally seen LLMs be huge time savers on specific bugs, for writing tests, and writing boilerplate code. They’re huge for working in new frameworks that roughly map to one you already know. But for the nitty gritty that ends up being most of the work on a mature product where all of the easy stuff is already done they don’t provide as big of a multiplier.

50. Strong agree! Forget all those studies that say “but developers are slower” or whatever — I’m actually building way more hobby projects and having way more fun now. And work is way more fun and easier. And my node_modules folder size is dropping like crazy!

51. One thing is true: now I go to the bar with the other guys in the group, drink whatever and let Claude or Codex do the work while I supervise, then merge PR in the morning... I wish I was kidding, but for non critical projects this is now a reality

52. I work at most 3-4 hours a day, and my work is prompting Cursor. Certainly an improvement over suffering 8 hours a day, but still not quite what I'm looking for.

53. I spent probably 150-200 hours coding a money management tool in 2022.

This evening, I worked with Claude to make an AI-assisted money manager that is better than the 2022 version I so carefully crafted.

I had nothing at all this morning and now I have a full database with all my transactions and really strong reporting.

The word “developer” is about to get a lot more expansive and I think that’s cool.

55. More related to the title, I've found the same.

I was always an aggressive pixel-pusher, so web dev took me AGES.

But with shadcn + llms I'm flying through stuff, no lie, 5-20x faster than I was before.

And I don't hate it anymore.

55. > I’m easily 10x more productive with AI than without it.

So you've shipped 10x the products? Can you list them?

56. Of course it's fun. Making slop _is_ very fun. It's a low-effort, dopamine-driven way of producing things. Learning is uncomfortable. Improving things using only your braincells can be very difficult and time consuming.

57. Laziness, or job search, or parenting, or health issues, or caregiving, or something else. It's not a binary stay-current-or-you're-lazy situation, it's that the entire industry is moving to shorter timelines, smaller teams, and more technical complexity for web projects simultaneously. LLMs are a huge dopamine hit for short term gains when you're spinning plates day after day. The question is what the ecosystem will look like when everybody's been using LLMs as a stopgap for an extended period of time.

58. Web development is perhaps "fun" again if you consider PHP 4 and jQuery as "fun". A "problem" arises for those of us who prefer Ruby, Rails, and HotWire.

I'm not gonna lie, I use AI every day (in the form of Grammarly). But LLMs and so-called "agents" are less valuable to me, even if they would help me to produce more "output".

It will be interesting to me to discover the outcome of this bifurcation!

59. >>Starting a new project once felt insurmountable. Now, it feels realistic again.

Honestly, this does not give me confidence in anything else you said. If you can't spin up a new project on your own in a few minutes, you may not be equipped to deal with or debug whatever AI spins up for you.

>>When AI generates code, I know when it’s good and when it’s not. I’ve seen the good and the bad, and I can iterate from there. Even with refinement and back-and-forth prompting, I’m easily 10x more productive

Minus a baseline, it's hard to tell what this means. 10x nothing is nothing. How am I supposed to know what 1x is for you, is there a 1x site I can look at to understand what 10x would mean? My overall feeling prior to reading this was "I should hire this guy", and after reading it my overwhelming thought was "eat a dick, you sociopathic self-aggrandizing tool." Moreover, if you have skill which you feel is augmented by these tools, then you may want to lean more heavily on that skill now if you think that the tool itself makes everyone capable of writing the same amazing code you do. Because it sounds like you will be unemployed soon if not already, as a casualty of the nonsense engine you're blogging about and touting.
</comments_about_topic>

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.


commentCount

59
