Summarizer

Workforce Displacement and Automation

Fears and anecdotes regarding job security, including a "Staff SWE" preferring AI to coworkers and contractors losing bids to smaller, AI-equipped teams. Discussions cover the automation of "bullshit jobs," the potential for a "winner take all" economy, and management incentives to cut labor costs.


The dialogue reveals a deepening divide between those who view AI as a revolutionary replacement for human labor and skeptics who see it as a sophisticated tool prone to generating "meaningless fluff." While some senior developers provocatively claim they would rather lose a coworker than their AI assistants, others argue that corporate leadership is prioritizing short-term cost-cutting over the potential for massive, AI-driven growth. This shift is already manifesting in the market, as lean, AI-augmented teams begin to outbid larger, traditional organizations by automating complex prototyping and administrative tasks. Ultimately, the consensus suggests a looming "winner-take-all" economy where the automation of "bullshit jobs" may inadvertently strip away human oversight, leaving professionals to navigate a future of hyper-efficiency or potential obsolescence.

48 comments tagged with this topic

View on HN · Topics
"the old economy had changed forever and was dead." Ehem - what is the difference compared to now? Weren't programmers supposed to be obsolete 6 months ago, and nobody would work, so we need UBI? However, your point that if everybody thinks there is a bubble, there is none, is valid. Ironically, your whole post undermines this point. And you are not alone in your analysis. General bubble wisdom is not as settled as one might think. Plus, Alan Greenspan's famous "irrational exuberance" remark was in 1996, and AFAIK by 1999 everybody knew there was a bubble, but it only burst in 2000. On top of that, I have seen overlaid plots of stock prices now and before the dot-com crash suggesting there are still 1-2 years of increases to go.
View on HN · Topics
> Ehem - what is the difference compared to now? Weren't programmers supposed to be obsolete 6 months ago, and nobody would work, so we need UBI? You're applying an arbitrary time constraint to the realization of AI's promise in order to rubbish it. This is a logical mistake common among critics: not yet, so never. It doesn't seem as if there is a near-term limit to the tech's development. Until that changes, the potential for job wipeouts and societal upheaval is real, whether in 5 or 50 years.
View on HN · Topics
> I've gotten a lot of value out of reading the views of experienced engineers; overall they like the tech, but they do not think it is a sentient alien that will delete our jobs. I normally see things the same way you do; however, a conversation with a podiatrist yesterday gave me food for thought. His belief is that certain medical roles will disappear as they become redundant. In his case he mentioned radiology, and he presented his case thus: a consultant gets a report plus X-ray from the radiologist. They read the report and confirm what they're seeing against the images; they won't take the report blindly. What changes is that machines have been learning to interpret the images and can use an LLM to generate the report. These reports tend not to miss things but will over-report issues. Since a consultant will verify the report for themselves before operating, they no longer need the radiologist: if the machine reports a non-existent tumour, they'll see there's no tumour.
View on HN · Topics
> someone who’s shipped entire new frontend feature sets, while also managing a team. I’ve used LLM to prototype these features rapidly and tear down the barrier to entry on a lot of simple problems that are historically too big to be a single-dev item, and clear out the backlog of “nice to haves” that compete with the real meat and bread of my business. This prototyping and “good enough” development has been massively impactful in my small org Has any senior React dev code-reviewed your work? I would be very interested to see what they have to say about the quality of your code. It's a bit like using LLMs to medically self-diagnose and claiming it works because you are healthy. Ironically enough, it seems the only workforce AIs will be shrinking is devs themselves. I guess in 2025, everyone can finally code.
View on HN · Topics
I was at a podiatrist yesterday who explained that what he's trying to do is to "train" an LLM agent on the articles and research papers he's published to create a chatbot that can provide answers to the most common questions more quickly than his reception team can. He's also using it to speed up writing his reports to send to patients. Longer term, he was also quite optimistic on its ability to cut out roles like radiologists, instead having a software program interpret the images and write a report to send to a consultant. Since the consultant already checks the report against any images, the AI being more sensitive to potential issues is a positive thing: giving him the power to discard erroneous results rather than potentially miss something more malign.
View on HN · Topics
I think we are at the stage of the "AI bubble" that is equivalent to saying: it is 1997, and 18% of U.S. households have internet access. Obviously the internet is not working out, or 90%+ of households would already have access if it was going to be as big a deal as some claim. I work at a place that is doing nothing like this, and it seems obvious to me we are going to get put out of business in the long run. This is just adding a power law on top of a power law. Winner take all. What I currently do will be done by software engineers and agents in 10 years or less. Gemini is already much smarter than I am. I am going to end up at a factory or Walmart if I can get in. The "AI bubble" is a mass delusion of people in denial of this reality. There is no bubble; the market has just priced all this forward, as it should. There is a domino effect of automation that hasn't happened yet because your company still has to interface with stupid companies like mine that are betting on the hand loom. Just wait for us to bleed out, and then most people will never get hired for white-collar work again. It amuses me when someone asks who is going to want the factory jobs in the US if we reshore production. Me and all the other very average people who get displaced out of white-collar work and don't want to be homeless, that's who. "More valuable" work is just 2026 managerial-class speak for "placeholder until the agent can take over the task".
View on HN · Topics
The “double checking” is a step to make sure there’s someone low-level to blame. Everyone knows the “double-checking” in most of these systems will be cursory at best for most double-checkers. It’s a miserable job to do much of, and with AI it’s a lot of what a person would be doing, so it’ll be half-assed; people would go batshit crazy otherwise. On the off chance it’s not done for that reason, productivity requirements will be increased until you must half-ass it.
View on HN · Topics
The denial on this topic is genuinely surreal. I've knocked out entire features in a single prompt that took me days in the past. I guess I should be happy that so many of my colleagues are willing to remove themselves from the competitive job pool with these kinds of attitudes.
View on HN · Topics
A previous company I worked for in San Francisco was very anti-remote, but they suddenly announced on LinkedIn that they are OK with remote engineers. It seems it’s still a workers’ market, at least in SF. If AI could do the job, or even just reduce head count, I don’t think that would be the case.
View on HN · Topics
I think a practical measure still useful right now, which does capture a lot of the "non-performance" capabilities of an employee, is as follows: "Why has my job not been outsourced yet, since that is far cheaper?" Those are probably the same reasons why AI won't take your job this year. Raw coding metrics are a very small part of being a cog in a company. This is not me saying it will never happen, just me saying that this focus on coding performance kind of misses the forest for the trees.
View on HN · Topics
The adoption of AI tools for software development will probably not result in sudden layoffs but rather in harder-to-measure changes, like smaller teams being able to tackle significantly more ambitious projects than before. I suspect that another kind of impact is already happening in organisations where AI adoption is uneven: suddenly some employees appear to have a lot more leisure time while apparently keeping the same productivity as before.
View on HN · Topics
The response to the Sal Khan op-ed resonated with me, along with other parts of this article. Something I’ve been digging more into is some of the figures around projected job losses from AI. I think I even posted a simulation paper last week. After posting that, I came across numerous papers which critique Frey & Osborne’s approach; they are some of the forefathers of the AI job-loss figures we see bandied around commonly these days. One such paper is here, but I can dig out others: https://melbourneinstitute.unimelb.edu.au/__data/assets/pdf_... It has made me very cautious around bold statements on AI - and I was already at the cautious end.
View on HN · Topics
Job losses aren’t directly tied to productivity, in the short term it’s all about expectations. Many companies are laying people off and then trying to get staff back when it doesn’t work. How much of this is hype and how much is sustained is difficult to determine right now.
View on HN · Topics
It never made sense to blame AI in the first place for tech layoffs. You have a new tool that you think can supercharge your employees, make them ~10x productive, be leveraged to disrupt all sorts of industries, and have the workforce best suited to learn and use these tools to their full potential. You think the value of labor may soon collapse, but there are piles of money to be made before that happens. If you truly believed that, you would be spinning up new projects and offshoots, as this is a serious arms race with a ton of potential upside (not just in developing AI, but in leveraging it to build things cheaper). Allegedly every dollar you spend on an engineer is potentially worth 10x(?) what it was a couple years ago. Meaning your profit per engineer could soar, but tech companies decided they don't want more profit? AI is mostly solved and the value of labor has already collapsed? Or AI is a nice band-aid to prop up a smaller group of engineers while we weather the current economic/political environment, and most CXOs don't believe there are piles of money to be had by leveraging AI now or in the near future.
View on HN · Topics
I’ve had this same thought, although less well-articulated: AI is supposedly going to obviate the need for white collar workers, and the best all the CEOs can come up with is the exact current status quo minus the white collar workers?
View on HN · Topics
> Allegedly every dollar you spent on an engineer is potentially worth 10x(?) what it was a couple years ago. Meaning your profit per engineer could soar, but tech companies decided they don't want more profit? Exactly, so many of these claims are complete nonsense. I'm supposed to believe that boards/investors would be fine with companies doing massive layoffs to maintain flat/minuscule growth, when they could keep or expand their current staffing and massively expand their market share and profits with all this increased productivity? It's ridiculous. If this stuff had truly increased productivity at the levels claimed we would see firms pouring money into technical staff to capitalize on this newfound leverage.
View on HN · Topics
I really don’t agree with the author here. Perplexity has, for me, largely replaced Cal Newport’s job (read other journalists work and synthesize celebrity and pundit takes on topic X). I think the take that Claude isn’t literally a human so agents failed is silly and a sign of motivated reasoning. Business processes are going to lag the cutting edge by years in any conditions and by generations if there is no market pressure. But Codex isn’t capable of doing a substantial portion of what I would have had to pay a freelancer/consultant to do? Any LLM can’t replace a writer for a content mill? Nonsense. Newport needs to open his eyes and think harder about how a journalist can deliver value in the emerging market.
View on HN · Topics
I think the issue is that everybody assumes the economy operates under some kind of "free market" conditions: limited by available labor but with unlimited potential demand. In that situation, AI could indeed cause massive unemployment. But this is perhaps not the case. By pessimistic estimates, half of all people work in bs jobs that have no real value to society, and every capitalist is focused on rent extraction now. If the economy can operate under such conditions, it doesn't really need more productivity growth; it is already demand-limited.
View on HN · Topics
This article seems based on a poorly defined premise. What does "joining the workforce" actually mean? There are plenty of jobs that have already been pretty much replaced by AI: certain forms of journalism, low-end Photoshop work, logo generation, copywriting. What does the OP need to see in order to believe that AI has "joined the workforce"?
View on HN · Topics
TikTok, Youtube, news, blogs, … are getting flooded with AI generated content; I'd call that a pretty substantial "change in output". I think the mistake here is expecting that AI just makes workers in older jobs faster, when the reality is, more often than not, that it changes the nature of the task itself. Whenever AI reaches the "good enough" point, it doesn't do so in a way that nicely aligns with human abilities; quite the opposite: it might be worse at performing a task, but be able to perform it 1000x faster. That allows you to do things that weren't previously possible, but it also means that professionals might not want to rely on AI for the old tasks. A professional translator isn't going to switch over to using AI, since the quality isn't there yet, but somebody like Amazon could offer an "OCR & translate all the books" service, and AI would be good enough for it, since it could handle all the books that nobody has the time and money to translate manually. Which in turn will eventually put the professional translator out of a job when it gets better than good enough. We aren't quite there yet, but we're getting pretty close. In 2025 a lot of AI went from "useless, but promising" to "good enough".
View on HN · Topics
And just because people are throwing money at an AI company doesn't mean they have, or will ever have, a marketable product. The #1 product of nearly every AI company is hope: hope that one day they will replace the need to pay real employees. Hope like that allows a company to cut costs and fund dividends ... in the short term. The long term is some other person's problem. (I'll change my mind the day Bill Gates trusts MS Copilot with his personal banking details.)
View on HN · Topics
I've seen organizations where 300 of 500 people could effectively be replaced by AI, just by having some of the remaining 200 orchestrate and manage automation workflows that are trivially within the capabilities of current frontier models. There are a whole lot of bullshit jobs, and that work will get increasingly and opaquely automated by AI. You won't see jobs go away unless or until organizations deliberately set out to reduce staff. People will use AI throughout the course of their days to get a couple of "hours" of tasks done in a few minutes, here and there, throughout the week. I've already seen reports and projects and writing that clearly come from AI in my own workplace. Right now, very few people know how to recognize and assess the difference between human and AI output, and even fewer how to calibrate work assignments accordingly. Spreadsheet AIs are fantastic, reports and charting have just hit their stride, and a whole lot of people are going to appear to be very productive without putting a whole lot of effort into it. And then one day, when sufficiently knowledgeable and aware people make it into management, all sorts of jobs are going to go quietly away, until everything is automated, because it doesn't make sense to pay a human 6 figures for what an AI can do for 3 figures a year. I'd love to see every manager in the world start charting the Pareto curves for their workplaces, alongside actual hours worked per employee: work output is going to be very wonky, and the lazy, clever, and ambitious people are all going to be using AI very heavily. Similar to this guy: https://news.ycombinator.com/item?id=11850241 https://www.reddit.com/r/BestofRedditorUpdates/comments/tm8m... Part of the problem is that people don't know how to measure work effectively to begin with, let alone in the context of AI chatbots that can effectively do better work than a significant portion of the adult population of the planet.
The teams that fully embrace it, use the tools openly and transparently, and are able to effectively contrast good and poor use of the tools, will take off.
View on HN · Topics
It seems like we are using AI to automate the unimportant parts of jobs that we shouldn’t have been doing anyway. Things like endless status reports or emails. But from what I’ve seen it just makes that work output even less meaningful—who wants to read AI generated 10 pages that could have been two bullet points? And it doesn’t actually improve productivity because that was never the bottleneck of those jobs anyway. If anything, having some easy rote work is a nice way to break up the pace.
View on HN · Topics
Employee has a few bullet-points of updates, they feed it through an LLM to fluff it out into an email to their manager, and then the manager puts the received email through an LLM to summarize it down to a few bullet points... Probably making some mistakes. There are all these things in writing we used as signals for intelligence, attention to detail, engagement, willingness to accept feedback, etc... but they're now easy to counterfeit at scale. Hopefully everyone realizes what's going on and cuts out the middleman.
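The round trip described above can be made concrete with a toy sketch. This is purely illustrative: the stub functions stand in for real LLM calls, and the filler string and function names are invented, not any actual API. Because the stubs are exact inverses here, the bullets survive; real models are not inverses, which is where the mistakes creep in:

```python
# Toy model of the fluff/summarize round trip: employee bullets -> padded
# email -> manager's re-summarized bullets. Stubs stand in for LLM calls.

FILLER = "As per my ongoing efforts this reporting period, "
TAIL = ", which I am pleased to report."

def fluff(bullets):
    """Expand terse bullet points into email-style prose (stub 'LLM')."""
    return "\n".join(FILLER + b + TAIL for b in bullets)

def summarize(email):
    """Compress the padded email back down to bullet points (stub 'LLM')."""
    return [line.removeprefix(FILLER).removesuffix(TAIL)
            for line in email.splitlines()]

updates = ["fixed login bug", "migrated CI to new runners"]
email = fluff(updates)        # employee -> manager
recovered = summarize(email)  # manager's summarizer
assert recovered == updates   # lossless only because both stubs share a template
```

With real models, each hop can drop or invent details, so the recovered bullets need not match the originals; the sketch just shows how much machinery the cycle adds around two bullet points.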
View on HN · Topics
> because it doesn't make sense to pay a human 6 figures what an AI can do for 3 figures in a year. Humans have one thing over "AI": you can't blame and fire "AI" when it inevitably goes wrong.
View on HN · Topics
> I've seen organizations where 300 of 500 people could effectively be replaced by AI, just by having some of the remaining 200 orchestrate and manage automation workflows that are trivially within the capabilities of current frontier models Curious, what industries? And what capabilities do LLMs offer to automate these positions that previous technologies did not? 'Bullshit jobs' and the potential to automate them are very real, but I think many of them could have been automated long before LLMs, and I don't think the introduction of LLMs is going to remove the bottleneck that prevents jobs like these from being automated.
View on HN · Topics
What do you think is the bottleneck?
View on HN · Topics
The percentage of jobs that are actually bullshit as opposed to the percentage of jobs the person making the claim thinks are bullshit merely because they are not that person's own job. Which is, of course, conveniently never a bullshit job but a Very Important One.
View on HN · Topics
AI doing a bullshit job isn't a productivity increase, though; it's at best a cost cut. It would be an even bigger cost cut to remove the bullshit job entirely.
View on HN · Topics
Pretty ironic that he complains about Khan citing someone who told him AI agents are capable of replacing 80% of call center employees, right after quoting Gary Marcus of all people, claiming LLMs will never live up to the hype. If you want to focus on what AI agents are actually capable of today, the last person I'd pay any attention to is Marcus, who has been wrong about nearly everything related to AI for years, and does nothing but double down.
View on HN · Topics
I'm a staff level SWE at a company that you've all heard of (not a flex, just providing context). If my manager said to me tomorrow: "I have to either get rid of one of your coworkers or your use of AI tools, which is it?" I would, without any hesitation, ask that he fire one of my coworkers. Gemini / Claude is way more useful to me than any particular coworker. And now I'm preparing for my post-software career because that coworker is going to be me in a few years. Obviously I hope that I'm wrong, but I don't think I am.
View on HN · Topics
Is that a useful thought experiment? Claude benefits you as an individual more than a coworker, but I find it hard to believe your use of Claude is more of a value add to the business than an additional coworker, especially since that coworker will also have access to Claude. In the past we also just raised the floor on productivity; do you think this will be different?
View on HN · Topics
I get the point you are making, but the hypothetical question from your manager doesn't make sense to me. It's obviously true that any particular coworker would be less useful to you than an AI agent, since their goal is to fulfill their own obligations to the rest of the company, whereas the singular goal of the AI tool is to help you. Until these AI tools can completely replace a developer on their own, the decision to continue employing human developers or to pay for AI tools will not be mutually exclusive.
View on HN · Topics
Sweet, then your fired coworker goes: “I will do the same work for 80% of thw09j9m’s salary”.
View on HN · Topics
For your answer to be correct for your employer, the added productivity from your use of LLMs must be at least as much as the productivity from whichever coworker you're having fired. No study I've seen claims much above a 20% increase in productivity, so either a) your productivity without LLMs was ~5x that of your coworkers, or b) you're making a mistake in your analysis (likely some combination of thinking about it from your perspective instead of your employer's and overestimating how helpful LLMs are to you).
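The ~5x figure above follows from simple break-even arithmetic. A minimal sketch, assuming (as the comment does) a roughly 20% LLM productivity boost; the function name is invented for illustration:

```python
# If firing the coworker is break-even for the employer, the LLM's boost
# must equal the coworker's entire output:
#   you * (1 + boost) >= you + coworker  =>  coworker <= boost * you
# So your output must be at least (1 / boost) times the coworker's.

def required_ratio(llm_boost: float) -> float:
    """Minimum ratio of your output to the fired coworker's output
    for (you + LLM) to match (you + coworker without LLM)."""
    return 1.0 / llm_boost

assert required_ratio(0.20) == 5.0  # 20% boost: you'd need 5x a coworker's output
assert required_ratio(0.50) == 2.0  # even a 50% boost still demands 2x
```

The higher the claimed boost, the weaker the required ratio, which is why the argument hinges on how large the real productivity gain actually is.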
View on HN · Topics
It makes him (presumably) 20% more effective, which is more than his coworker makes him. Overall effectiveness of the team is not being considered, but that's why his manager isn't asking him :)
View on HN · Topics
That fits, except later on they say > And now I'm preparing for my post-software career because that coworker is going to be me in a few years. Which implies they expect their manager (or someone higher up in the company) will agree with them, presumably when considering overall effectiveness of the team.
View on HN · Topics
"I have to either get rid of one of your coworkers or your laptop, which is it?"
View on HN · Topics
But isn't living in a stable society, where everyone can find employment, achieve some form of financial security, and not be ravaged by endless rounds of layoffs, more desirable than having net productive co-workers?
View on HN · Topics
I’ll make sure to pour one out in memory of all the lamplighters, the stable hands, night soil collectors, and coopers that can no longer find employment these days. These arguments were had 150 years ago with the advent of the railroad, with electricity, with factories and textiles. Even if you don’t have net-productive coworkers, if there’s a more productive way to do things, you’ll go out of business and be supplanted. Short of absolutely tyrannical top-down control, which would make everyone as a whole objectively poorer, how would this ever be prevented?
View on HN · Topics
The difference is that back then we were talking a few jobs here and there. Now we are talking about the majority of work being automated, from accountancy to zoo keeping, and very little in the way of new jobs coming in to replace them. By the way stable hands and night soil collectors are still around. Just a bit harder to find. We used to have a septic tank that had to be emptied by workmen every so often. Pretty much the same.
View on HN · Topics
You're forgetting that corporations have only one responsibility & it is to make profits for their shareholders.
View on HN · Topics
Whereas a government's responsibility is to ensure peace and prosperity for as many of its citizens as possible. These things will be at odds when increased profits for companies no longer coincides with increased employment.
View on HN · Topics
What's most useful to you is not necessarily most useful to the business. The bar for critical thinking to get staff at this company I've surely heard of must not be very high.
View on HN · Topics
Why would a company pocket the savings of less labor when they could reinvest the productivity gains of AI in more labor, shifting employees to higher-level engineering tasks?
View on HN · Topics
Because shareholder value is more important than productivity to leadership. Thank Jack Welch.
View on HN · Topics
I'm there with you. At the govt contracting company I work for, we lost a contract we had held for ten years. Our team was 10 to 15 employees, and we lost the contract to a company that is now doing the work with 5 employees and AI. My company said we are now going to be bidding with smaller teams and promoting our use of AI. One example of the company promoting its use of AI is a prototype built with ChatGPT and AntiGravity: someone took a demo video of a govt agency app off of YouTube and fed the video to ChatGPT, which spit out all the requirements for the ten-page application; he then fed those requirements to AntiGravity, and boom, it replicated the working app/prototype in 15 minutes. Previously it would have taken a team of 3 to 5 a week or more to complete such a prototype.
View on HN · Topics
You would probably have the same answer if your boss said: "I have to get rid of one of your co-workers or your use of editing tools," i.e., all editors. You either get rid of your co-worker or go back to using punch cards. You would probably get rid of your co-worker and keep Vim/Emacs/VS Code/Zed/JetBrains or whatever editor you use. All your example tells us is that AI tools are valuable tools.