
Economic Impact on Software Jobs

Existential anxiety regarding the obsolescence of mid-level engineers, the potential "hollowing out" of the middle class, and the shift toward one-person unicorn teams.

← Back to Opus 4.5 is not the normal AI agent experience that I have had thus far

88 comments tagged with this topic

View on HN · Topics
Have you ever worried that by programming in this way, you are methodically giving Anthropic all the information it needs to copy your product? If there is any real value in what you are doing, what is to stop Anthropic or OpenAI or whomever from essentially one-shotting Zed? What happens when the model providers 10x their costs and also use the information you've so enthusiastically given them to clone your product and use the money that you paid them to squash you?
View on HN · Topics
The article is arguing that it will basically replace devs. Do you think it can replace you basically one-shotting features/bugs in Zed? And also - doesn’t that make Zed (and other editors) pointless?
View on HN · Topics
> Do you think it can replace you basically one-shotting features/bugs in Zed?

Nobody is one-shotting anything nontrivial in Zed's code base, with Opus 4.5 or any other model. What about a future model? Literally nobody knows. Forecasts about AI capabilities have had horrendously low accuracy in both directions - e.g. most people underestimated what LLMs would be capable of today, and almost everyone who thought AI would at least be where it is today...instead overestimated and predicted we'd have AGI or even superintelligence by now. I see zero signs of that forecasting accuracy improving. In aggregate, we are atrocious at it. The only safe bet is that hardware will be faster and cheaper (because the most reliable trend in the history of computing has been that hardware gets faster and cheaper), which will naturally affect the software running on it.

> And also - doesn’t that make Zed (and other editors) pointless?

It means there's now demand for supporting use cases that didn't exist until recently, which comes with the territory of building a product for technologists! :)
View on HN · Topics
Thanx. More of a "faster keyboard" so far then? And yeah - if I had a crystal ball, I would be on my private island instead of hanging on HN :)
View on HN · Topics
Thanks for posting this. It's a nice reminder that despite all the noise from hype-mongers and skeptics in the past few years, most of us here are just trying to figure this all out with an open mind and are ready to change our opinions when the facts change. And a lot of people in the industry that I respect on HN or elsewhere have changed their minds about this stuff in the last year, having previously been quite justifiably skeptical.

We're not in 2023 anymore. If you were someone saying at the start of 2025 "this is a flash in the pan and a bunch of hype, it's not going to fundamentally change how we write code", that was still a reasonable belief to hold back then. At the start of 2026 that position is basically untenable: it's just burying your head in the sand and wishing for AI to go away. If you're someone who still holds it, you really, really need to download Claude Code, set it to Opus, and start trying it with an open mind: I don't know what else to tell you.

So now the question has shifted from whether this is going to transform our profession (it is) to how exactly it's going to play out. I personally don't think we will be replacing human engineers anytime soon ("coders", maybe!), but I'm prepared to change my mind on that too if the facts change. We'll see.

I was a fellow mind-changer, although it was back around the first half of last year, when Claude Code was good enough to do things for me in a mature codebase under supervision. It clearly still had a long way to go, but it was at that tipping point from "not really useful" to "useful". But Opus 4.5 is something different - I don't feel I have to keep pulling it back on track in quite the way I used to with Sonnet 3.7, 4, or even Sonnet 4.5.

For the record, I still think we're in a bubble. AI companies are overvalued. But that's a separate question from whether this is going to change the software development profession.
View on HN · Topics
The AI bubble is kind of like the dot-com bubble in that it's a revolutionary technology that will certainly be a huge part of the future, but it's still overhyped (i.e. people are investing without regard for logic).
View on HN · Topics
We were enjoying cheap second hand rack mount servers, RAM, hard drives, printers, office chairs and so on for a decade after the original dot com crash. Every company that went out of business liquidated their good shit for pennies. I'm hoping after AI comes back down to earth there will be a new glut of cheap second hand GPUs and RAM to get snapped up.
View on HN · Topics
Right. And same for railways, which had a huge bubble early on. Over-hyped on the short time horizon. Long term, they were transformative in the end, although most of the early companies and early investors didn’t reap the eventual profits.
View on HN · Topics
But the dot-com bubble wasn't overhyped in retrospect. It was underhyped.
View on HN · Topics
At the time it was overhyped because just by adding .com to your company's name you could increase your valuation regardless of whether or not you had anything to do with the internet. Is that not stupid? I think my comparison is apt; being a bubble and a truly society-altering technology are not mutually exclusive, and by virtue of it being a bubble, it is overhyped.
View on HN · Topics
There was definitely a lot of stupid stuff happening. IMO the clearest accurate way to put it is that it was overhyped for the short term (hence the crazy high valuations for obvious bullshit), and underhyped for the long term (in the sense that we didn't really foresee how broadly and deeply it would change the world). Of course, there's more nuance to it, because some people had wild long-term predictions too. But I think the overall, mainstream vibe was to underappreciate how big a deal it was.
View on HN · Topics
This is kind of why I'm not really scared of losing my job. While Claude is amazing at writing code, it still requires human operators. And even experienced human operators are bad at operating this machinery. Tell your average joe - the one who thinks they can create software without engineers - what "tools-in-a-loop" means, and they'll make the same face they made when you tried explaining iterators to them before LLMs. Explain to them how a type system, E2E tests, or integration tests help the agent, and suddenly they have to learn all the things they would have been required to learn to write the code on their own.
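For readers who haven't seen the phrase before, "tools-in-a-loop" is roughly the pattern below. This is a toy sketch, not any vendor's actual API: `fake_model` and the `TOOLS` table are hypothetical stand-ins for a real LLM call and real tools (test runners, file editors, etc.); only the loop structure is the point.

```python
# Toy "tools-in-a-loop" agent: the model proposes a tool call, the harness
# executes it and feeds the result back, until the model emits a final answer.

def fake_model(messages):
    # Hypothetical stand-in for a real LLM API call. It scripts two turns:
    # first ask for the test suite, then conclude once results are in.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "run_tests", "args": {}}
    return {"answer": "All tests pass; change looks safe."}

TOOLS = {
    # A real harness would shell out here; this stub returns a canned result.
    "run_tests": lambda **_: "4 passed, 0 failed",
}

def agent_loop(task, max_turns=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = fake_model(messages)
        if "answer" in reply:          # model decided it is finished
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

print(agent_loop("Refactor the parser and verify nothing broke."))
```

The commenter's point lives in the feedback edge: a type system or test suite is what makes the tool results informative enough for the loop to self-correct.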
View on HN · Topics
It's true that some people will just continually move the goalposts because they are invested in their beliefs. But that doesn't mean that the skepticism around certain claims isn't relevant. Nobody serious is disputing that LLMs can generate working code. They dispute claims like "Agentic workflows will replace software developers in the short to medium term", or "Agentic workflows lead to 2-100x improvements in productivity across the board". This is what people are looking for in terms of evidence, and there just isn't any.

Thus far, we do have evidence that AI (at least in OSS) produces a 19% decrease in productivity [0]. We also have evidence that it harms our cognitive abilities [1]. Anecdotally, I have found myself lazily reaching for LLM assistance when encountering a difficult problem instead of thinking deeply about the problem. Anecdotally, I also struggle to be more productive using AI-centric agentic workflows in my areas of expertise.

We want evidence that "vibe engineering" is actually more productive across the entire lifespan of a software project. We want evidence that it produces better outcomes. Nobody has yet shown that. It's just people claiming that because they vibe coded some trivial project, all of software development can benefit from this approach. Recently a principal engineer at Google claimed that Claude Code wrote their team's entire year's worth of work in a single afternoon. They later walked that claim back, but most do not.

I'm more than happy to be convinced, but it's becoming extremely tiring to hear the same claims being parroted without evidence and then to get called a luddite when you question them. It's also tiring when you push them on it and they blame it on the model you use, and then the agent, and then the way you handle context, and then the prompts, and then "skill issue". Meanwhile all they have to show is some slop that could be hand coded in a couple of hours by someone familiar with the domain.

I use AI, and I was pretty bullish on it for the last two years. But the combination of it simply not living up to expectations, the constant barrage of what feels like a stealth marketing campaign parroting the same thing over and over (the new model is way better, unlike the other times we said that), the amount of absolute slop code that seems to continue to increase, and companies like Microsoft producing worse and worse software as they shoehorn AI into every single product (Office was renamed to Copilot 365) has made me very sensitive to it, much in the same way I was very sensitive to the claims being made by certain VC-backed webdev companies about their product + framework in the last few years. I'm not even going to bring up the economic, social, and environmental issues because I don't think they're relevant, but they do contribute to my annoyance with this stuff.

[0] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
[1] https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling...
View on HN · Topics
If you're a developer at the dawn of the AI revolution, there is absolutely a gun to your head.
View on HN · Topics
I feel similar. I'm not against the idea that maybe LLMs have gotten so much better... but I've been told this probably 10 times in the last few years while working with AI daily. The funny part about rapidly changing industries is that, despite the FOMO, there's honestly not any reward for keeping up unless you want to be a consultant. Otherwise, wait and see what sticks. If this summer people are still citing Opus 4.5 as a game-changing moment and have solid, repeatable workflows, then I'll happily change up my workflow. Someone could walk into the LLM space today and wouldn't be significantly at a loss for not having paid attention to anything that had happened in the last 4 years, other than learning what has stuck since then.
View on HN · Topics
I've lived through multiple incredibly rapid changes in tech throughout my career, and the lesson always learned was that there is a lot of wasted energy in keeping up. Two big examples:

- The period from early MVC JavaScript frontends (Backbone.js etc.) through the great React/Angular wars. I completely stepped out of the webdev space during that time period.
- The rapid expansion of deep learning frameworks, where I did try to keep up (shipped some Lua Torch packages and made minor contributions to Pylearn2).

In the first case, missing 5 years of front-end wars had zero impact. After not doing webdev work at all for 5 years, I was tasked with shipping a React app. It took me a week to catch up, and everything was deployed in roughly the same time as it would have been by someone who had spent years keeping up with the changes.

In the second case, where I did keep up with many of the developing deep learning frameworks, it didn't really confer any advantage. Coworkers who started with PyTorch fresh out of school were just as proficient, if not more so, at building models. Spending energy keeping up offered no value other than feeling "current" at the time.

Can you give me a counterexample where keeping up with a rapidly changing, unstable area has conferred a benefit to you? Most of FOMO is really just fear. Again, unless you're trying to sell yourself specifically as a consultant on the bleeding edge, there's no reason to keep up with all these changes (other than finding it fun).
View on HN · Topics
You moved out of webdev for 5 years, not everybody else had that luxury. I'm sure it was beneficial to those people to keep up with webdev technologies.
View on HN · Topics
If everything changes every month, then stuff you learn next month will be obsolete in two months. This is a response to people saying "adapt or be left behind". There's so much thrashing that if you're not interested in the SOTA, you can just wait for everything to calm down and pick it up then.
View on HN · Topics
I realize your experience has been frustrating. I hope you see that every generation of model and harness is converting more hold-outs. We're still a few years from hard diminishing returns assuming capital keeps flowing (and that's without any major new architectures which are likely) so you should be able to see how this is going to play out. It's in your interest to deal with your frustration and figure out how you can leverage the new tools to stay relevant (to the degree that you want to). Regarding the context window, Claude needs thinking turned up for long context accuracy, it's quite forgetful without thinking.
View on HN · Topics
Personally I'm sympathetic to people who don't want to have to use AI, but I dislike it when they attack my use of AI as a skill issue. I'm quite certain the workplace is going to punish people who don't leverage AI though, and I'm trying to be helpful.
View on HN · Topics
> but I dislike it when they attack my use of AI as a skill issue.

No one attacked your use of AI. I explained my own experience with the "Claude Opus 4.5 is next tier" claim. You barged in, ignored everything I said, and attacked my skills.

> the workplace is going to punish people who don't leverage AI though, and I'm trying to be helpful.

So what exactly is helpful in your comments?
View on HN · Topics
The only thing I disagreed with in your post is your objectively incorrect statement regarding Claude's context behavior. Other than that I'm just trying to encourage you to make preparations for something that I don't think you're taking seriously enough yet. No need to get all worked up, it'll only reflect on you.
View on HN · Topics
Note how nothing in your comment addresses anything I said, except the last sentence, which basically confirms what I said. This perfectly illustrates the discourse around AI. As for the snide and patronizing "it's in your interest to stay relevant":

1. I use these tools daily. That's why I don't subscribe to willful wide-eyed gullibility. I know exactly what these tools can and cannot do. The vast majority of "AI skeptics" are the same.

2. In a few years, when the world is awash in barely working, incomprehensible AI slop, my skills will be in great demand. Not because I'm an amazing developer (I'm not), but because I have experience separating the wheat from the chaff.
View on HN · Topics
The snide and patronizing is your projection. It kinda makes me sad when the discourse is so poisoned that I can't even encourage someone to protect their own future from something that's obviously coming (technical merits aside, purely based on social dynamics). It seems the subject of AI is emotionally charged for you, so I expect friendly/rational discourse is going to be a challenge. I'd say something nice but since you're primed to see me being patronizing... Fuck you? That what you were expecting?
View on HN · Topics
> The snide and patronizing is your projection.

It's not me who decided to barge in, assume their opponent doesn't use something or doesn't want to use something, and offer unsolicited advice.

> It kinda makes me sad when the discourse is so poisoned that I can't even encourage someone to protect their own future from something that's obviously coming

See. Again. You're so in love with your "wisdom" that you can't even see what you sound like: snide, patronizing, condescending. And completely missing the whole point of what was written. You are literally the person who poisons the discourse.

Me: "here are the issues I still experience with what people claim is a 'next tier frontier model'"

You: "it's in your interests to figure out how to leverage new tools to stay relevant in the future"

Me: ...what the hell are you talking about? I'm using these tools daily. Do you have anything constructive to add to the discourse?

> so I expect friendly/rational discourse is going to be a challenge.

It's only a challenge to you because you keep being in love with your voice and your voice only. Do you have anything to contribute to the actual rational discourse, or are you going to attack my character?

> I'd say something nice but since you're primed to see me being patronizing... Fuck you?

Ah. The famous friendly/rational discourse of "they attack my use of AI" (no one attacked you), "why don't you invest in learning tools to stay relevant in the future" (I literally use these tools daily, do you have anything useful to say?), and "fuck you" (well, same to you).

> That what you were expecting?

What I was expecting is responses to what I wrote, not you riding in on a high horse.
View on HN · Topics
You were the one complaining about how the tools aren't giving you the results you expected. If you're using these tools daily and having a hard time, either you're working on something very different from the bulk of people using the tools and your problems are legitimate, or you aren't and it's a skill issue. If you want to take politeness as being patronizing, I'm happy to stop bothering. My guess is you're not a special snowflake, and you need to "get good" or you're going to end up on unemployment complaining about how unfair life is. I'd have sympathy but you don't seem like a pleasant human being to interact with, so have fun!
View on HN · Topics
I teach at a university, and spend plenty of time programming for research and for fun. Like many others, I spent some time over the holidays trying to push the current generation of Cursor, Claude Code, and Codex as far as I could. (They're all very good.) I had an idea for something that I wanted, and in five scattered hours, I got it good enough to use. I'm thinking about it in a few different ways:

1. I estimate I could have done it without AI with 2 weeks of full-time effort. (Full-time defined as >> 40 hours / week.)

2. I have too many other things to do that are purportedly more important than programming. I really can't dedicate two weeks full-time to a "nice to have" project. So, without AI, I wouldn't have done it at all.

3. I could hire someone to do it for me. At the university, those are students. From experience with lots of advising, a top-tier undergraduate student could have achieved the same thing had they worked full tilt for a semester (before LLMs). This of course assumes that I'm meeting with them every week.
View on HN · Topics
I think we're entering a world where programmers as such won't really exist (except perhaps in certain niches). Being able to program (and read code, in particular) will probably remain useful, though diminished in value. What will matter more is your ability to actually create things, using whatever tools are necessary and available, and have them actually be useful. Which, in a way, is the same as it ever was. There's just less indirection involved now.
View on HN · Topics
It will have to quintuple or more to make business sense for Anthropic. Sure, still cheaper than a full-time developer, but don't expect it to stay at $200 for long. And then, when you explain to your boss how amazing it is and how easily and quickly it can do all this work, that's when your boss starts asking the real question: what am I paying you for?
View on HN · Topics
A programmer, by US standards, probably costs $8000 per month. If you can get 30% more value out of that programmer (trust me, it's WAY more than 30%), you've gained $2400 of value. If you pay $200, $500, or $1000 for that, it's still a net positive - and that's ignoring the salary range of an actual senior. LLMs do not result in bosses firing people; they result in more projects / faster-completed projects, which in turn means more $$$ for a company.
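The back-of-envelope math above can be written out explicitly. The numbers are the commenter's assumptions, not data:

```python
# Break-even sketch for a coding-assistant subscription, using the
# comment's assumed figures (not measured values).
monthly_salary = 8000      # assumed fully-loaded US developer cost per month
productivity_gain = 0.30   # assumed uplift from the tool
tool_cost = 200            # assumed monthly subscription price

extra_value = monthly_salary * productivity_gain   # value gained per month
net = extra_value - tool_cost                      # gain after paying for the tool

print(extra_value)  # 2400.0
print(net)          # 2200.0
```

On these assumptions the tool stays net positive until its price approaches the full $2400 of extra value, which is why the comment argues even a 5-10x price hike still pencils out.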
View on HN · Topics
Cheaper than hiring another developer, probably. My experience: for a few dollars I was able to extensively refactor a Python codebase in half a day. This otherwise would have taken multiple days of very tedious work.
View on HN · Topics
And that's what the C-suite wants to know. Prepare yourself to be replaced in the not so distant future. Hope you have a good "nest" to support yourself when you're inevitably fired.
View on HN · Topics
Homey, we're going to be replacing you devs that can't stand to use LLMs lol
View on HN · Topics
> Prepare yourself to be replaced in the not so distant future.

That ignores that this same developer now has access to a tool that makes him a team. Going independent was always an issue because being a full-stack dev is hard. With LLMs, you have an entire team behind you for making graphics, code, documents, etc... YOU become the manager. We will probably see a lot more small teams / single devs making bigger projects, until they grow.

The companies that think they can fire devs are the same companies that are going to go too far and burn bridges. Do not forget that a lot of companies are founded by devs leaving a company and starting out on their own, taking clients with them! I did that years ago, and it worked for a while, but eventually the math did not work out, because one guy can only do so much. And when you start hiring, your costs balloon.

But with LLMs... now you're a one-man team. Hiring a second person is no longer hiring a person to make some graphics or do more coding. You're hiring another team. This is what people do not realize: they look too much upon this as the established order, ignoring what those fired devs can now do!
View on HN · Topics
This sounds nice, except for the fact that almost everyone else can do this, too. Or at least try to, resulting in a fast race to the bottom. Do you really want to be a middle manager to a bunch of text boxes, churning out slop, while they drive up our power bills and slowly terraform the planet?
View on HN · Topics
The same way that having motorized farming equipment was a race to the bottom for farmers? Perhaps. Turned out to be a good outcome for most involved. Just like farmers who couldn't cope with the additional leverage their equipment provided them, devs who can't leverage this technology will have to "go to the cities".
View on HN · Topics
Please do read up on how farmers are doing in this race to the bottom (it hasn't been pretty). Mega-farms are a thing because small farms simply can't compete. Small farmers have gone broke. The parent comment is trying to highlight this. If LLMs turn out the way the C-suite hopes, let me tell you, you will be in a world of pain. Most of you won't be using LLMs to create your own businesses.
View on HN · Topics
I'm not saying they can all do it now... but I don't think it's much of a stretch that they can learn it quickly and cheaply.
View on HN · Topics
Well, probably OP won't be affected, because management is very pleased with him and his output - why would they fire him? Hire someone who might have better output than him for 10% more money, or someone who might have the same output for 25% less pay? You think any manager in their right mind would take risks like that? I think the real consequence is that they'll probably be so pleased with how productive the team is becoming that they will not hire new people, or will fire the ones who aren't keeping up with the times. It's like saying "wow, our factory just produced 50% more cars this year, time to shut down half the factory to reduce costs!"
View on HN · Topics
> You think any manager in their right mind would take risks like that?

You really underestimate the stupidity of your average manager. Two of our top performers left because they were underpaid, and the manager (in charge of the comp) never even tried to retain them.
View on HN · Topics
I bet they weren't as valuable as you think. This is a common issue with certain high performing line delivery employees (particularly those with technical skills, programmers, lawyers, accountants, etc), they always think they are carrying the whole team/company on their shoulders. It almost never turns out to be the case. The machine will keep grinding.
View on HN · Topics
That's one kind of stupidity. Actually firing the golden goose is one step further
View on HN · Topics
> Yeah, prepare for the future. Well excuse the shit out of my goddamn French, but being comfy for years and suddenly facing literal doom of my profession in a year wasn't on my bingo card. And what do you even mean by "prepare"? Shit out a couple of mil out of my ass and invest asap?
View on HN · Topics
>And what do you even mean by "prepare"? Not the person you're responding to but... if you think it's a horse -> car change (and, to stretch the metaphor, if you think you're in the business of building stables) then preparation means train in another profession. If you think it's a hand tools -> power tools change, learn how to use the new tools so you don't get left behind. My opinion is it's a hand -> power tools change, and that LLMs give me the power to solve more problems for clients, and do it faster and more predictably than a client trying to achieve the same with an LLM. I hope I'm right :-)
View on HN · Topics
Why do you suppose that these tools will conveniently stop improving at some point that increases your productivity but are still too much for your clients to use for themselves?
View on HN · Topics
Power tools give way to robotics, though, so it seems small-minded to think so small. Have you been following the latest trends, though? New models come out all the time, so you can't have this tool-brand mindset. Keep studying and you'll get there.
View on HN · Topics
> Most software engineers are seriously sleeping on how good LLM agents are right now, especially something like Claude Code.

Nobody is sleeping. I'm using LLMs daily to help me with simple coding tasks. But really, where is the hurry? At this point, not a few weeks go by without the next best thing since sliced bread coming out. Why would I bother "learning" (and there's really nothing to learn here) some tool/workflow that is already outdated by the time it comes out?

> 2026 is going to be a wake-up call

Do you honestly think a developer not using AI won't be able to adapt to an LLM workflow in, say, 2028 or 2029? It has to be 2026 or... what, exactly? There is literally no hurry.

You're using the equivalent of the first portable CD player in the 80s: it was huge, clunky, had hiccups, and had a huge battery attached to it. It was shiny, though, for those who find new things shiny. Others are waiting for a portable CD player that is slim, that buffers, that works fine. And you're saying that people won't be able to learn how to put a CD in a slim CD player because they didn't use a clunky one first.
View on HN · Topics
"But really where is the hurry?" It just depends on why you're programming. For many of us not learning and using up to date products leads to a disadvantage relative to our competition. I personally would very much rather go back to a world without AI, but we're forced to adapt. I didn't like when pagers/cell phones came out either, but it became clear very quickly not having one put me at a disadvantage at work.
View on HN · Topics
If anyone is excited about, and has experience with this kind of stuff, please DM. I have a role open for setting up these kinds of tools and workflows.
View on HN · Topics
They are sleeping on it because there is absolutely no incentive to use it. When needed, it can be picked up in a day. Otherwise, they are not paid based on tickets solved, etc. If the incentives were properly aligned, everyone would already be using it.
View on HN · Topics
I'm at the point where I say fuck it, let them sleep. The tech industry just went through an insane hiring craze and is now thinning out. This will help separate the wheat from the chaff. I don't know why any company would want to hire "tech" people who are terrified of tech and completely obstinate when it comes to utilizing it. All the people I see downplaying it take a half-assed approach at using it, then disparage it when it's not completely perfect.

I started tinkering with LLMs in 2022. First use case: speak in natural English to the LLM, give it a JSON structure, and have it decipher the natural language and fill in that JSON structure (a vacation planning app, so you talk to it about where/how you want to vacation and it creates the structured data in the app). Sometimes I'd use it for minor coding fixes (copy and paste a block into ChatGPT, fix errors, or maybe just ideation). This was all personal project stuff.

At my job we got LLM access in mid/late 2023. Not crazy useful, but still helpful. We got Claude Code in 2024. These days I only have an IDE open so I can make quick changes (like bumping up a config parameter, changing a config bool, etc.). I write almost ZERO code now. I usually have 3+ Claude Code sessions open. On my personal projects I'm using Gemini + Codex primarily (since I have a Google account and a $20/month ChatGPT account). When I get throttled on those, I go to Claude and pay per token. I'll often rip through new features, projects, and ideas with one agent, then have another agent come through and clean things up, look for code smells, etc. I don't allow the agents full unfettered control, but I'd say 70%+ of the time I just blindly accept their changes. If there are problems, I can catch them on the MR/PR.

I agree about the low-hanging fruit, and I'm constantly shocked at the sheer amount of FUD around LLMs. I want to generalize - I feel like it's just the mid/junior-level devs who speak poorly about it - but there are definitely senior/staff-level people I see (rarely, mind you) who also don't like LLMs. I do feel like the online sentiment is slowly starting to change, though. One thing I've noticed is that anonymous posts are more likely to downplay LLMs. But if I go on LinkedIn and look at actually good engineers, I see them praising LLMs. Someone speaking about how powerful the LLMs are - working on sophisticated projects at startups or FAANG. Someone with FUD when it comes to LLMs - a web dev out of Alabama. I could go on and on, but I'm just ranting/venting a little.

I guess I can end this by saying that in my professional/personal life, 9/10 of the best top-level engineers I know are jumping on LLMs any chance they get. Only 1/10 talks about AI slop or bullshit like that.
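The "natural language in, fixed JSON structure out" workflow described above can be sketched in a few lines. This is a hypothetical illustration: `call_llm` is a stub standing in for a real model call (the vacation-planner field names are invented); the interesting part is the harness, which prompts with a schema and then whitelists keys so a chatty model can't inject extras.

```python
import json

# Invented example schema for a vacation-planning app; null means "unknown".
SCHEMA = {"destination": None, "budget_usd": None, "days": None}

def call_llm(prompt):
    # Hypothetical stub: a real implementation would send `prompt` to a model
    # asking for JSON matching SCHEMA. Here we return a canned response.
    return '{"destination": "Lisbon", "budget_usd": 1500, "days": 5}'

def extract(utterance):
    prompt = (
        "Fill in this JSON from the user's message; use null for unknown fields:\n"
        f"{json.dumps(SCHEMA)}\nMessage: {utterance}"
    )
    data = json.loads(call_llm(prompt))          # fails loudly on non-JSON output
    return {k: data.get(k) for k in SCHEMA}      # keep only the expected keys

print(extract("I want 5 days in Lisbon for about $1500"))
```

In production you would also validate types and retry on `json.JSONDecodeError`, but even this minimal shape is what "structured data in the app" amounts to.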
View on HN · Topics
Not entirely disagreeing with your point, but I think they've mostly been forced to pivot recently for their own sakes; they will never say it, though. As much as they may seem eager, the most public people tend to be better at outside communication and at knowing what they should say in public to enjoy more opportunities, remain employed, or, for the top engineers, still seem relevant in the communities they are a part of. It's less about money and more about respect there, I think. The "sudden switch" since Opus 4.5 - where many who were saying just a few months ago "I enjoy actual coding" are now praising LLMs - isn't a one-off occurrence. I do think underneath it is somewhat motivated by fear; not for the job, however, but for relevance, i.e. remaining relevant to discussions, tech talks, new opportunities, etc.
View on HN · Topics
If that is true, then all the commentary around software people keeping their jobs thanks to "taste" and other nice words is just that: commentary. In the end, the higher-level stuff still needs someone to learn it (e.g. learning ASX2 architecture, knowing what tech to work with), but it requires, IMO, significantly less practice than coding, which in itself was a gate. The skill morphs into being a tech expert rather than a coding expert. I'm not sure what this means for the future of SWEs yet. I don't see higher levels of staff in big businesses bothering to do this, and at some scale I don't see founders still wanting to manage all of these agents and processes (they've got better things to do at higher levels). But I do see the barrier of learning to code gone, meaning it probably becomes just like any other job.
View on HN · Topics
We just need to tax the hell out of the AI companies (assuming they are ever profitable) since all their gains are built on plundering the collective wisdom of humanity.
View on HN · Topics
I don’t think waiting for profitability makes sense. They can be massively disruptive without much profit as long as they spend enough money.
View on HN · Topics
Each of these years we’ve had a claim that it’s about to replace all engineers. By your logic, does it mean that engineers will never get replaced?
View on HN · Topics
I know what you are talking about, but there is more to life than just product-market fit. Hardly any of us are working on Postgres, Photoshop, Blender, etc., but it's not just cope to wish we were. It's good to think about the needs of business and the needs of society separately. Yes, the thing needs users, or no one is benefiting. But it also needs to do good for those users, and ultimately, at the highest caliber, craftsmanship starts to matter again. There are legitimate reasons for the startup ecosystem to focus first and primarily on getting users/customers. I'm not arguing against that. What I am arguing is: why does the industry need to be dominated by startups in terms of the bulk of the products (not the bulk of the users)? It raises the question of how much societally-meaningful programming is waiting to be done. I'm hoping for a world where more end users code (vibe or otherwise) and solve their own problems with their own software. I think that will make for a smaller, more elite software industry that is more focused on infrastructure than last-mile value capture. The question is how to fund the infrastructure. I don't know, except for the most elite projects, which is not good enough for the industry (even this hypothetical smaller one) as a whole.
View on HN · Topics
> There are legitimate reasons for the startup ecosystem to focus first and primarily on getting users/customers. I'm not arguing against that. What I am arguing is: why does the industry need to be dominated by startups in terms of the bulk of the products (not the bulk of the users)? It raises the question of how much societally-meaningful programming is waiting to be done.

You slipped in "societally-meaningful", and I don't know what it means, and I don't want to debate the merits/demerits of socialism/capitalism. However, I think lots of software needs to be written because, in my estimation, with AI/LLM/ML it'll generate value. And then you have lots of software that needs to be rewritten as firms/technologies die and new firms/technologies are born.
View on HN · Topics
I didn't mean to do some snide anticapitalism. Making new Postgreses and Blenders is really hard. I don't think the startup ecosystem does a very good job, but I don't assume central planning would do much better either. (The method I have the most confidence in is some sort of mixed system with non-profit, state-planned, and startup software development all at once.) Markets are a tool, a means to an end. I think they're very good; I'm a big fan! But they are not an excuse not to think about the outcome we want. I'm confident that the outcome I don't want is one where most software developers are trying to find demand for their work, pivoting, etc. It's very "pushing a string" or "cart before the horse". I want more "pull", where the users/beneficiaries of software are better able to dictate, or create themselves, what they want, rather than being helpless until a pivoting engineer finds it for them. Basically, startup culture has combined theories of exogenous growth from technology change with a baseline assumption that most people are and will remain hopelessly computer illiterate, into an ideology that assumes the best software is always "surprising", a paradigm shift, etc. Startups that make libraries/tools for other software developers are fortunately a good step in undermining these "the customer is an idiot and the product will be better than they expect" assumptions. That gives me hope we'll reach a healthier mix of push and pull. Wild successes are always disruptive, but that shouldn't mean the only success is wild, or that "acting disruptive before wild success" ("manifesting" paradigm shifts!) is always the best way to get there.
View on HN · Topics
Same anecdote for me (except I have +/- 40 years of experience). I consider myself a pretty good dev for non-web dev (GPUs, assembly, optimisation, ...), and my conclusion is the same as yours: impressive and scary. If the idea of what you want to do somehow exists on the web, in text or in code, then Claude most likely has it. And its ability to understand my own codebases is just crazy (at my age, memory is declining, and having Claude to help is just wow). Of course it fails sometimes, of course it needs direction, but the thing it produces is really good.
View on HN · Topics
The scary part is that the LLM may have been trained on all the open source code ever produced - which is far beyond human comprehension - and with ever-growing capability (bigger context windows, more training), my gut feeling is that it will exceed human capability in programming pretty soon. Considering 2025 was the groundbreaking year for agents, I can't stop imagining what will happen as this iterates over the next couple of years. I think it will evolve to be like chess engines that consistently beat the top chess players in the world!
View on HN · Topics
I'm seeing this as well. Not huge codebases, but not tiny: a 4-year-old startup. I'm new there, and it would have been impossible for me to deliver any value this soon otherwise. 12 years of experience; this thing is definitely amazing. Combined with a human, it can be phenomenal. It also helped me a ton with lots of external tools, with understanding what the data/marketing teams are doing, and even with providing pretty crucial insights to our leadership that Gemini noticed. I wouldn't try to completely automate humans out of the loop just yet, but this tech is surely going to downsize team numbers (and, at the same time, allow many new startups to come to life with little capital that might eventually grow and hire people - so it's unclear how this will affect jobs).
View on HN · Topics
This is the thing that will change the open source and small/medium SaaS world a lot. Why use a 3rd-party dependency that might have features you don't need, when you can write a hyper-specific solution in a day with an LLM and control the full codebase? Or why pay €€€ for a SaaS every month when you can replicate the relevant bits yourself?
View on HN · Topics
It seems to me these days that any code I want to write tries to solve problems LLMs already excel at. Thankfully my job is perhaps just 10% coding, and I hope people like you still have some coding tasks that cannot be easily solved by LLMs. We should not exaggerate the capabilities of LLMs, sure, but let's also not play "don't look up".
View on HN · Topics
- I cloned a project from GitHub and made some minor modifications.
- I used AI-assisted programming to create a project.

Even if the content is identical, or if the AI is smart enough to replicate the project by itself, the latter can be included on a CV.
View on HN · Topics
I think I would prefer the former if I were reviewing a CV. It at least tells me they understood the code well enough to know where to make their minor tweaks. (I've spent hours reading through a repo to know where to insert/comment out a line to suit my needs.) The second tells me nothing.
View on HN · Topics
Do people really see a CV, read "computer mommy made me a program", and think it's impressive?
View on HN · Topics
A CV for the disappearing job market, as you shovel money into an oligarchy.
View on HN · Topics
Objectively, we are talking about systems that have gone from being cute toys to outmatching most juniors using only rigid and slow batch training cycles. As soon as models have persistent memory for their own try/fail/succeed attempts, and can directly modify what's currently called their training data in real time, they're going to develop very, very quickly. We may even be underestimating how quickly this will happen. We're also underestimating how much more powerful they become if you give them analysis and documentation tasks referencing high quality software design principles before giving them code to write. This is very much 1.0 tech. It's already scary smart compared to the median industry skill level. The 2.0 version is going to be something else entirely.
View on HN · Topics
Honestly the scary part is that we don’t really even need one more Opus. If all we had for the rest of our lives was Opus 4.5, the software engineering world would still radically change. But there’s no sign of them slowing down.
View on HN · Topics
If they were good enough, you could rent them to put together closed-source stuff you hide behind a paywall; or maybe the AI owners would also own the paywall and rent you the software instead. The second that becomes possible, it will happen.
View on HN · Topics
Up until now, no business has been built on tools and technology that no one understands. I expect that will continue. Given that, I expect that, even if AI is writing all of the code, we will still need people around who understand it. If AI can create and operate your entire business, your moat is nil. So, you not hiring software engineers does not matter, because you do not have a business.
View on HN · Topics
> Up until now, no business has been built on tools and technology that no one understands. I expect that will continue. Big claims here. Did brewers and bakers up to the middle ages understand fermentation and how yeasts work?
View on HN · Topics
Does the corner bakery need a moat to be a business? How many people understand the underlying operating system their code runs on? Can even read assembly or C? Even before LLMs, there were plenty of copy-paste JS bootcamp grads that helped people build software businesses.
View on HN · Topics
> Does the corner bakery need a moat to be a business?

Yes, actually. It's hard to open a competing bakery due to location availability, permitting, capex, and the difficulty of converting customers. On top of that, food establishments generally run on next to no margin, due to competition, despite all of that working in their favor. Now imagine what the competitive landscape for that bakery would look like if all of that friction for new competitors disappeared. Margin would tend toward zero.
View on HN · Topics
With no margins and no paid employees, who is going to have the money to buy the bread?
View on HN · Topics
> If all an engineer did all day was build apps from scratch, with no expectation that others may come along and extend, build on top of, or depend on, then sure, Opus 4.5 could replace them. Why do they need to be replaced? Programmers are in the perfect place to use AI coding tools productively. It makes them more valuable.
View on HN · Topics
Because we’re expensive and companies would love to get rid of us
View on HN · Topics
It is massively cheaper than an overseas engineer. A cheap engineer can pump out maybe 1000 lines of low-quality code in an hour - roughly 10k tokens per hour for $50, so best case $5 per 1000 tokens. LLMs charge around $5 per million tokens. Even if that price is subsidized 100x, it's still an order of magnitude cheaper than an overseas engineer. Not to mention speed: an LLM will spit out 1000 lines in seconds, not hours.
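The back-of-envelope comparison above can be checked in a few lines. All figures are the comment's own assumptions (rough rates, not measured data):

```python
# Assumptions taken from the comment above, not measured data:
#   - a cheap engineer writes ~1000 lines (~10k tokens) per hour at ~$50/hr
#   - an LLM charges ~$5 per million output tokens
ENGINEER_RATE_USD_PER_HOUR = 50
ENGINEER_TOKENS_PER_HOUR = 10_000
LLM_USD_PER_MILLION_TOKENS = 5

# Cost per token for each: $0.005 vs $0.000005
engineer_cost_per_token = ENGINEER_RATE_USD_PER_HOUR / ENGINEER_TOKENS_PER_HOUR
llm_cost_per_token = LLM_USD_PER_MILLION_TOKENS / 1_000_000

ratio = engineer_cost_per_token / llm_cost_per_token
print(f"Engineer is ~{ratio:.0f}x more expensive per token")   # ~1000x
# Even if the LLM price were subsidized 100x, a 10x gap remains.
print(f"With a 100x subsidy factored in: ~{ratio / 100:.0f}x")
```

Under these assumptions the raw gap is three orders of magnitude, which is why the 100x-subsidy caveat still leaves the LLM 10x cheaper.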
View on HN · Topics
I trust my offshore engineers way more than the slop I get from the "AI"s. My team makes my life a lot easier, because I know they know what they are doing. The LLMs, not so much.
View on HN · Topics
Adding capacity to software engineering through LLMs is like adding lanes to a highway — all the new capacity will be utilized. By getting the LLM to keep changes minimal I’m able to keep quality high while increasing velocity to the point where productivity is limited by my review bandwidth. I do not fear competition from junior engineers or non-technical people wielding poorly-guided LLMs for sustained development. Nor for prototyping or one offs, for that matter — I’m confident about knowing what to ask for from the LLM and how to ask.
View on HN · Topics
No, that has certainly been my experience, but what is going to be the forcing function that pushes a company back to hiring after it decides it needs fewer engineers?
View on HN · Topics
When an LLM can rewrite it in 24 hours and fill in the missing parts in minutes, that argument is hard to defend. I can vibe-code what a dev shop would charge 500k to build, and I can solo it in 1-2 weeks. This is the reality today. The code will pass quality checks; it doesn't need to be perfect, it doesn't need to be clever, it just needs to be. It's not difficult to see this, right? If an LLM can write English, it can write Chinese or Python. Then it can run itself, review itself, and fix itself. The cat is out of the bag; what it will do to the economy... I don't see anything positive for regular people. "Write some code" has turned into "prompt some LLM". My phone can outplay the best chess player in the world - are you telling me you think that whatever unbound model Anthropic has sitting in their data center can't out-code you?
View on HN · Topics
> let LLMs do whatever behind the scenes to hit the specs

Assuming for the sake of argument that's completely true, then what happens to "competitive advantage" in this scenario? It gets me thinking: if anyone can vibe from a spec, what's stopping company A (or even user A) from telling an LLM agent "duplicate every aspect of this service in Python and deploy it to my AWS account xyz"? In that scenario, why even have companies?
View on HN · Topics
These are businesses with extra-large capital requirements. You ain't replicating them, because you don't have the money, and they can easily strangle you with their money as you start out. Software is different: you need very, very little to start - historically just your own skills and time. The latter two may see some changes with LLMs.
View on HN · Topics
I think andrekandre is right in this hypothetical. Who'd pay for a brand-new Photoshop with a couple of new features and improvements if an LLM-cloned Photoshop-from-three-months-ago is free? The first few iterations of this could be massively consumer-friendly for anything without serious cloud infra costs. Cheap clones all around - like generic drugs, but without the cartel-like control of manufacturing. Business after that would be dramatically different, though. Differentiating yourself from willing-to-do-it-for-near-zero-margin competitors to produce something new that brings in money starts to get very hard. Can you provide better customer support? That could be hard; everyone's gonna have a pretty high-baseline LLM support agent already... and hiring real people instead could dramatically increase the price difference you're trying to justify... Similarly for marketing or outreach, etc.: how are you going to cut through the AI-agent-generated copycat spam that's gonna be pounding everyone when everyone and their dog has a clone of popular software and services? Photoshop-type things are probably a really good candidate for disruption like that, because to a large extent every feature is independent. The noise-reduction tool doesn't need API or SDK deps on the layer-opacity tool, for instance. If all your features are LLM balls of shit, that doesn't necessarily reduce your ability to add new ones next to them, unlike in a more relational-database-based web app with cross-table/model dependencies, etc. And in this "try out any new idea cheaply and throw crap against the wall and see what sticks" world, "product managers" and "idea people" etc. are all pretty fucked. Some of the infinite monkeys are going to periodically hit and gain a temporary advantage, but good luck finding someone to pay you to be a "product visionary" in a world where any feature can be rolled out and tested in the market by a random dev in hours or days.
View on HN · Topics
It might scale. So far I'm not convinced, but let's look at what's fundamentally happening and why humans > agents > LLMs.

At its heart, programming is a constraint satisfaction problem. The more constraints (requirements, syntax, standards, etc.) you have, the harder it is to solve them all simultaneously. New projects with few contributors have fewer constraints, so the process of "any change" is simpler. Now, undeniably: 1) agents have improved the ability to solve constraints by iterating, e.g. generate, test, modify, etc., over raw LLM output; 2) there is an upper bound (context size, model capability) on solving simultaneous constraints; 3) most people have a better ability to do this than agents (including Claude Code using Opus 4.5).

So, if you're seeing good results from agents, you probably have a smaller set of constraints than other people. Similarly, if you're getting bad results, you can probably improve them by relaxing some of the constraints (consistent UI, number of contributors, requirements, standards, security requirements, splitting code into well-defined packages). This will make both agents and humans more productive.

The open question is: will models continue to improve enough to approach or exceed human-level ability at this? And are humans willing to relax the constraints enough for that to be plausible? I would say that currently, people clamoring about the end of human developers are cluelessly deceived by the "appearance of complexity", which does not match the "reality of constraints" in larger applications. Opus 4.5 cannot do the work of a human on codebases I've worked on. Hell, talented humans struggle to work on some of them. …but that doesn't mean it doesn't work. Just that, right now, the constraint set it can solve is not large enough to be useful in those situations. …and increasingly we see low-quality software where people care only about speed of delivery; again, lowering the bar in terms of requirements. So… you know. Watch this space.

I'm not counting on having a dev job in 10 years. If I do, it might be making a pile of barely working garbage. …but I have one now, and anyone who thinks that this year people will be largely replaced by AI is probably poorly informed and has misunderstood the capabilities of these models. There's only so low you can go in terms of quality.
View on HN · Topics
Can't do that, else the KPIs won't show that AI tools reduced the amount of coding work by xx%.