Summarizer

LLM Input

llm/9db4e77f-8dd5-46da-972e-40d33f3399ef/topic-6-63438a78-0fba-4bbb-ad15-f08f070f01d0-input.json

prompt

The following is content for you to summarize. Do not respond to the comments—summarize them.

<topic>
Future of Software Engineering # Existential concerns about the devaluation of coding skills, the shift from creative building to managerial reviewing of AI output, and fears that junior developers will lose the opportunity to learn through doing.
</topic>

<comments_about_topic>
1. There have always been hundreds or thousands of companies that want software engineers but simply don't have the revenue to support them. My first dev job was at a small private company in exactly this spot. They basically paid me my salary for six months to figure out WordPress and PHP on the job, having only ever done very basic programming on my own in high school ~6 years prior.

The median dev salary across the entire US is something like $130k/yr. There are huge numbers of new or self-taught software devs in low cost of living areas of the country making $50-60k/yr.

2. Same. I am doing this as Claude knocked out two annoying yak shaving tasks I did not really want to do. Required careful review and tweaking.

Claiming that you now have 10 AI minions just wrecking your codebase sounds like showboating. I do not pity the people who will inherit those codebases later.

3. Disclaimer: not an """AI""" enthusiast. I think it takes away the joy of coding, which makes me sad.

With that out of the way, I don't think there will be "people inheriting codebases" for much longer, at least not for the vast majority of business-related software needs. People will still be useful insofar as you need someone responsible who can be sued for contract breach, failures and whatnot, but we'll see more and more agents inheriting previous agents' codebases. On the other hand, "small software" that caters to particular customized workflows can be produced entirely by LLMs.

I can totally relate to how some of us would want to be off raising goats, planting watermelons or whatever.

4. > I might spend 10 minutes doing a task with AI rather than an hour (w/o AI), but trust me - I am going to keep 50 minutes to myself, not deliver 5 more tasks

It's wild that you just outright admitted this. Seems like your employer would do best to let you go and find someone that can use tools to increase their productivity.

5. Show me the incentive, I'll show you the outcome. More than once I've had my hand slapped professionally for taking ownership of something my immediate superiors wanted to micromanage. Fine, here I was trying to take something off their plate that was in my wheelhouse, but if that's where they want to draw the line I guess I'll just give less of a shit.

If you actively deny your employees ownership, then the relationship becomes purely transactional.

It's also possible OP is just a bad employee, but I've met far more demoralized good employees than malicious bad ones over the course of my career.

6. Do you have any idea of the man-hours it took to build those large projects you are speaking of? Let's take Linux for example. Suppose for the sake of argument that Claude Code with Opus 4.5 is as smart as an average person (AGI), but with the added benefit that it can work 24/7. Suppose now I have millions of dollars to burn and am running 1000 such instances on max plans, and that I have been running these agents since the date Claude Opus 4.5 was released, prompting them to create a commercial-grade multi-platform OS of the caliber of Linux.
An estimate for the Linux kernel is 100 million man-hours of work. Divide by 1000. We expect to have a functioning OS like Linux by 2058 from these calculations.
How long has Claude been released? Two months.
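The back-of-envelope above can be sketched in a few lines (using only the comment's own figures; the result is very sensitive to the agent-hours-per-year assumption — a literal 24/7 reading gives a shorter horizon than the comment's 2058, but either way the point stands that the timescale is years, not months):

```python
# Back-of-envelope: 1000 round-the-clock agents, each as productive as
# one average developer, replicating an effort estimated at
# 100 million man-hours (the Linux kernel figure from the comment).
total_man_hours = 100_000_000
agents = 1000
hours_per_agent = total_man_hours / agents  # 100,000 hours of work each
hours_per_year = 24 * 365                   # assuming literal 24/7 operation
years = hours_per_agent / hours_per_year    # roughly a decade
print(round(years, 1))
```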

7. I'm not discounting your experience, but purely from an experiment-design standpoint, you don't have any sort of pre/post-AI control. You've spent 3 years becoming a subject-matter expert who's building software in your domain; I'm not surprised AI in its current form is helpful. A more valuable comparison would be something like: if you kept going without AI, how long would it take someone with similar domain experience, just starting their own solution, to catch up using AI?

8. I do stuff in my free time now that would have been a full time job a year ago. Accomplishing in months what would have taken years. (And doing in days what would have taken weeks.) I'm talking about actually built-out products with a decent amount of code and features, not basic prototypes. I feel like the vibe is "put up or shut up", so check out my bio for one example.

I think your logic goes wrong because you assume that more productivity implies less desire for engineers. But now engineers are maybe 2x or 5x more productive than before. So that makes them more attractive to hire than before. It's not like there was some fixed pool of work to be done and you just had to hire enough to exhaust the pool. It's like if new pickaxes were invented that let your gold miners dig 5x more gold. You'd see an explosion in gold miners, not a reduction. For another example, I spend all my free time coding now because I can do so much now. I get so much more result for the same effort, that it makes sense to put more effort in.

9. > It's not like there was some fixed pool of work to be done and you just had to hire enough to exhaust the pool.

In my opinion you are failing to consider other bottlenecks, à la the theory of constraints.

An analogy: Imagine you have a widget factory that requires 3 machines, executed in sequence, to produce one widget.

Now imagine one of those machines gets 2x-5x more efficient. What will you do? Buy more of the faster machines? Of course not! Maybe you'll scale up by buying more of the slower machines (which are now your bottleneck) so they can match the output of the faster one, but that's only if you can acquire the raw material inputs fast enough to make use of them, and also that you can sell the output fast enough to not end up with a massive unsold inventory.
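The constraint logic in this analogy can be sketched with a toy model (hypothetical rates; the point is that steady-state throughput of a serial pipeline is set by its slowest stage):

```python
# Toy theory-of-constraints model: a widget passes through three machines
# in sequence, so throughput (widgets/hour) is capped by the slowest one.
rates = {"cut": 10, "assemble": 4, "paint": 8}

def throughput(rates):
    # Serial pipeline: the slowest stage sets the pace.
    return min(rates.values())

print(throughput(rates))      # bottleneck is "assemble" at 4/hr

# Making a non-bottleneck machine 5x faster changes nothing:
rates["cut"] *= 5
print(throughput(rates))      # still 4/hr

# Only speeding up the bottleneck itself raises output:
rates["assemble"] *= 5
print(throughput(rates))      # now capped by "paint" at 8/hr
```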

Bringing this back to software engineering: there are other processes in the software development lifecycle besides writing code -- namely gathering requirements, testing with users (getting feedback), and deployment / operations. And human coordination across these processes is hard, and hard to scale with agents.

These other aspects are much harder to scale (for now, at least) with agents. This is the core reason why agentic development will lead to fewer developers -- because you just don't need as many developers to deliver the same amount of development velocity.

The same logic explains (at least in part) why US companies don't simply continue hiring more and more outsourced developers. At a certain point, more raw development velocity isn't helpful because you're limited by other constraints.

On the other hand, agentic development DOES mean a boon to solo developers, who can MUCH more easily scale just themselves. It's much easier to coordinate between the product team, the development team, the ops team, and the customer support team when all the teams are in the same person's head.

10. Right but then you expect way more productivity from those teams. I'm wondering where that is.

I find when I'm in a domain I'm not an expert in I am way more productive with the AI tools. With no knowledge of Java or Spring I was able to have AI build out a server in like 10 minutes, when it would have taken me hours to figure out the docs and deployment etc. But like, if I knew Java and Spring I could have built that same thing in 10 minutes anyways. That's not nothing, but also not generalisable to all of software development, not even close. Plus you miss out on actually learning the thing.

11. > I think your logic goes wrong because you assume that more productivity implies less desire for engineers.

Yes, this is the central fallacy. The reality is, we've been massively bottlenecked on software productivity ever since the concept of software existed. Only a tiny tiny fraction of all the software that could usefully be written has been. The limitation has always been the pool of developers that could do the work and the friction in getting those people to be able to do it.

What it is confounded by, however, is the short-term effect, which I think is absolutely drying up the market for new junior software devs. It's going to take a while for this to work through.

12. And to push this example further, if you can hire 10 developers each commanding 10 reliable junior-mid developers you have a team of 100, which is probably more than enough to build basically any software project in existence. WhatsApp was built with way less than that.

13. A lot of people either a) don’t know about the good tools or b) aren’t using them enough/properly.

There is a ton of anti-AI sentiment, and not all LLMs are equal. There is a lot of individual adoption that is yet to occur.

I know at least two startups that are one person or two people that are punching way above their weight due to this force multiplier. I don’t think it’s industry-wide yet, but it will be relatively soon.

Check back in on your assessment in a year.

14. Exactly my opinion. I'm pretty pragmatic and open-minded, though seasoned enough that I don't stay on the bleeding edge. I became a convert in October, and I think the most recent Sonnet/Opus models truly changed the calculus of "viable/usable", so that we have now crossed into the age of AI.

We are going to see the rest of the industry come along kicking and screaming over the next calendar year, and that's when the ball is going to start truly rolling.

15. > Ensure code is written in such a way that it's easy to understand for LLMs

Over the summer last year, I had the AI (Gemini Pro 2.5) write base libraries from scratch that are easy for it to write code against. Now GPro3 can one-shot (with, at most, a single debug loop at the REPL) 100% of the normal code I need developed (back office/business-type code).

Huge productivity booster. There are a few things that are very easy for humans to do but that AI struggles with; by removing them, the AI has been just fantastic to work with.

16. Agree. People are stuck applying the "agent" = "employee" analogy and think they are more productive by having a team/company of agents. Unless you've perfectly spec'ed and detailed multiple projects up front, the speed of a single agent shouldn't be the bottleneck.

17. My impression is that people who are exploring coordinated multi-agent-coding systems are working towards replacing full teams, not augmenting individuals. "Meaningful supervising role" becomes "automated quality and process control"; "generate requirements quickly" -> we already do this for large human software teams.

If that's the goal, then we shouldn't interpret the current experiment as the destination.

18. Reviewing PRs should be for junior engineers, architectural changes, brand new code, or broken tests. You should not review every PR; if you do, you're only doing it out of habit, not because it's necessary.

PRs come originally from the idea that there's an outsider trying to merge code into somebody's open source project, and the Benevolent Dictator wants to make sure it's done right. If you work on a corporate SWEng team, this is a completely different paradigm. You should trust your team members to write good-enough code, as long as conventions are followed, linters used, acceptance tests pass, etc.

19. I am also skeptical about the need for such a large number of PRs. Do those open because previous PRs didn't accomplish their goals?

It's frustrating because, being part of a small team, I absolutely fucking hate it when any LLM product writes or refactors thousands of lines of code. It's genuinely infuriating because now I am fully reliant on it to make any changes, even really simple ones. Just seems like a new version of vendor lock-in to me.

20. Yeah, but there is a difference between at least one person at some point in time having understood the code (or the specific part of it), and no one ever having done so. Also, there are different levels. Wildfly's code, for example, is utterly incomprehensible, because the flow jumps up and down huge inheritance chains to random points all the time; some Java Enterprise people are terrible about this. Anyway, the average for widely used tools is way better than that. So it's definitely possible to make things worse, and blindly trusting AI is one possible way to reach those new lows. It would be good to prevent that before it's too late, rather than praising it unreservedly and even throwing out one of the (broken, but better-than-nothing) safeguards. Especially since code review is obviously dead with this amount of generated code per week. (The situation wasn't great there before either.) So it's a two-in-one bad situation.

21. OK, so you have the unbearable pain of using a separate terminal app to use the magic thingie that does your programming for you on prompt, and which didn't exist merely 2 years ago.

https://www.youtube.com/watch?v=kBLkX2VaQs4

22. > Claude Code has been around for almost a year and is being built by an entire team, yet doesn't seem to have benefited from this approach. The program is becoming buggier and less reliable over time, and development speed seems indistinguishable from anything else.

Shhh, this is not what you’re supposed to look at.

Look! Bazillion more agents! Gorrilion more agents! Productivity! Fire those lazy code monkeys, buy our product! Make me riiiich.

23. I feel like it's time for me to hang up this career. Prompting is boring, and doing it 5 times at once is just annoying multitasking. I know I'm mostly in it for the money, but at least there used to be a feeling of accomplishment sometimes. Now it's like, whose accomplishment is it?

Don't give in to the facade just yet.

This is the creator of a product saying how good it is.

If you've worked anywhere professionally you know how every place has its problems, where people just lie constantly about things?

Yeah.

Keep at it and see where things go.

I'm also a dev, a bit overwhelmed by all of this talk. At my job I've tried quite a few things, and I'm still mostly just using Copilot for autocomplete and very small tasks that I review thoroughly; everything else I do manually.

If this is indeed the future, I also don't wanna be a part of it and will switch to another career, but all this talk seems to come only from the people who actually built these things.

25. Or try to find a job where you can work how you like to work. With these things it's always "get more done ! MORE ! MORE !". But not all jobs are like this.

26. You found meaning in the work vs. the outcome. You can find meaning in the outcome with a new form of work.

27. Agreed, the author basically says that coding is not required anymore, the job is reviewing code. Do engineers not actually want to build things themselves anymore? Where is the joy and pride in the craft? Are we just supposed to maximize productivity at the expense of our life's experience? Are we any different than machines at that point?

28. I feel like it’s not talked about enough that the ultimate irony of software engineering is that, as an industry, it’s aiming to make itself obsolete as much as possible. I struggle to think of any other industry that, completely on their own accord, has actively pushed to put themselves out of work to such a degree.

29. Prompting, in my experience, is boring and/or frustrating. Why anyone would want to do more of it without MASSIVE financial incentives is unthinkable. No composer or writer would ever want to prompt a "work".

30. I’ve found that experienced devs use agentic coding in a more “hands-on” way than beginners and pure vibe-coders.

Vibecoders are the best because they push the models in humorous and unexpected ways.

Junior devs are like “I automated the deploy process via an agent and this markdown file”

Seasoned devs will spend more time writing the prompt for a bug fix, or lazily paste the error and then make the 1-line change themselves.

The current crop of LLMs are more powerful than any of these use cases, and it’s exciting to see experienced devs start to figure that out (I’m not stanning Gas Town[0], but it’s a glimpse of the potential).

[0] https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

31. Partially related: I really dislike the vibe of Gas Town, both the post and the tool, I really hope this isn't what the future looks like. It just feels disappointing.

32. Warm take for sure, but I feel that LLMs and agents have made me a worse programmer as a whole. I am enjoying the reduced mental strain, and I do hobbies like sketching/art while the LLM is running. But it definitely isn't making me any faster.

I'm at the point of considering another job, but I just fear that my skills have deteriorated to the point that I can't pass any (manual) coding assessments anymore.

This feels like a desperate "look at me!" post, which is the exact opposite of Andrej Karpathy's recent tweet[0] about feeling left behind as a programmer, as covered on Hacker News[1].

I guess I would want to see how sustainable this 5-parallel-AI effort is, and whether there are demonstrably positive outcomes. There are plenty of "I one-shotted this" examples of something that already (mostly) existed, which are very impressive in their own right, but I haven't seen a lot of truly novel creations.

[0] https://x.com/karpathy/status/2004607146781278521

[1] https://news.ycombinator.com/item?id=46395714

34. I'm sure this works for people and I'm happy for them but this sounds like absolute hell to me.

Just take out everything that I like about SWE and then just leave me to do the stuff I hate.

The number of people holding strong opinions on LLMs who openly admit they have not tried the state-of-the-art tools is so high on Hacker News right now that it's refreshing to get actual updates from the tools' creators.

I read a comment yesterday that said something like "many people tried LLMs early on, it was kind of janky and so they gave up, thinking LLMs are bad". They were probably right at the time, but the tech _has_ improved since then, while those opinions have not changed much.

So yes, Claude Code with Sonnet/Opus 4.5 is another step change that you should try out. For $20/month you can run Claude Code in the terminal and regular Claude in the web app.
</comments_about_topic>

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.


commentCount

35
