Summarizer

Corporate Hype vs. Utility

Cynicism toward executive predictions (Altman, Hinton), which many view as efforts to pump stock prices or attract investment. Users contrast "corporate puffery" and "vaporware" with the practical, often mundane utility of AI in specific B2B workflows like insurance claim processing or data extraction.

← Back to Why didn't AI “join the workforce” in 2025?

Discussions surrounding AI reflect a deep-seated cynicism toward "corporate puffery," with many viewing the grandiose predictions of tech executives as self-serving maneuvers to inflate stock prices and attract investment. While high-level promises of AGI and autonomous agents are often dismissed as "snake oil" or "vaporware," some users point to tangible, albeit mundane, successes in B2B sectors like insurance where AI streamlines data extraction and email drafting. This creates a sharp divide between those who see a speculative bubble driven by FOMO and practitioners who find value in the technology as a specialized tool for optimizing digital workflows. Ultimately, there is a growing call to abandon "vibes-based" hype in favor of a sober assessment of AI’s current capabilities and its real-world limitations.

56 comments tagged with this topic

View on HN · Topics
I know a lot of people with access to Claude Code and the like will say, 'No, it sure seems to reason to me!' Great. But most (?) of the businesses out there aren't paying for the big-boy models. I know of an F100 that got snookered into a deal with GPT-4 for 5 years, max of 40 responses per session, max of 10 sessions of memory, no backend integration. Those folks rightly think that AI is a bad idea.
View on HN · Topics
> practically everyone thinks we are in a bubble

Very untrue; the economy doesn't happen in online forum echo chambers but out there. Every major company invests in AI however it can, driven by classic FOMO. This is how movers and decision-makers think. No CEO thinks: this will crash, so let's invest in it massively and spread our company finances more thinly for when the SHTF moment comes.
View on HN · Topics
Sorry, but that was not my point. My point was to refute the thesis (in the comment I am replying to) that nobody was making grand claims about AI, in contrast to the grand claims made about the internet pre-dot-com. Obviously, in both cases grand claims were/are being made.
View on HN · Topics
> So, this is how I’m thinking about AI in 2026. Enough of the predictions. I’m done reacting to hypotheticals propped up by vibes.

A lot of the predictions come from interviews and presentations with top tech executives. Their job is to increase the perceived value of their product, not to offer an objective assessment. I've gotten a lot of value out of reading the views of experienced engineers; overall they like the tech, but they do not think it is a sentient alien that will delete our jobs. I have also gotten a lot of value out of Cembalest's recent "Eye on the Market", which looks at the economic side of this AI push.
View on HN · Topics
In the gap between reality and executive speak around LLMs, I’m wondering about motives. Getting executives, junior devs, HR, and middle management hooked on an advice-and-document-template machine owned and operated by your corporation would seemingly have a huge upside for an entity like Microsoft. Their infatuation might be more about how profitable such arrangements would be than about any meaningful productivity improvement for developers. Like the ways BizTalk, Dynamics, and SharePoint attempt to capture business processes onto a pay-for-play MS stack, and all benefit when being pitched to non-technical customers, Copilot provides an ever-evolving, sycophantic, exec-centred channel to push and entangle it all as MS sees fit. Having every part of your business divulge, in real time through saveable chats, your strategy, tooling, and processes to MS servers and Azure services is itself a pretty stunning arrangement. Imagine those same services directly selling busy customers entangling integrations, or trendy Azure services, through freewheeling MCP-like glue, all inline in that customer's own business processes. It sounds like tech-exec nirvana: automated, self-directed sales. They don’t need job-deleting sentience to make the share price go up and rationalize this LLM push. They are far more aware of the limitations than we…
View on HN · Topics
> They are far more aware of the limitations than we…

Every individual only cares about their paycheck and promotion. They will happily ignore their knowledge of the limitations if it means squeezing out an extra resume bullet point, paycheck, or promotion, even if it causes the company to go bankrupt down the line (by that time they would've jumped ship somewhere else anyway).
View on HN · Topics
I've seen this sort of thing a few times: "Yes, I'm sure AI can do that other job that's not mine over there." Now, maybe foot doctors work closer to radiologists than I'm aware of. But the radiologists I've talked to aren't impressed with the work AI has managed to do in their field. Apparently there are one or two incredibly easy tasks it can sort of do, but they comprise a very small part of the job of an actual radiologist.
View on HN · Topics
I'm afraid I don't have the details. I was reading about certain lung issues the AI was doing a good job on and thought, "oh well that's it for radiology." But the radiologist chimed in with, "yeah that's the easiest thing we do and the rates are still not acceptable, meanwhile we keep trying to get it to do anything harder and the success rates are completely unworkable."
View on HN · Topics
AI luminary and computer scientist Geoffrey Hinton predicted in 2016 that AI would be able to do all of the things radiologists can do within five years. We're still not even close. He was full of shit, and now, almost 10 years later, he's changed his prediction while still pretending he was right, by moving the goalposts. His new prediction is that radiologists will use AI to be more efficient and accurate, half suggesting he meant that all along. He didn't. He was simply bullshitting, bluffing, making an educated wish. This is the nonsense we're living through: predictions, guesses, promises that cannot possibly be fulfilled and which will inevitably change to something far less ambitious and with much longer timelines, and everyone will shrug it off as if we weren't being misled by a bunch of fraudsters.
View on HN · Topics
> Their job is to increase the perceived value of their product

I don't agree. Your job cannot be "lie to the customer." They may see this as the easy way to get more money and justify their comfy position, but it is not their job.
View on HN · Topics
Elon proved that "corporate puffery" is more valuable than any product you make or could make. The job of CEOs now is to generate science fiction fantasies and sell those to the public.
View on HN · Topics
I overstated. To clarify, I think a major purpose of those public interviews/conferences is to increase the perceived value of the product. Entrepreneurship, essentially. Taking chances, speculating, thinking about possibilities, etc.
View on HN · Topics
No one said anything about lying. Their job is to make the company successful. Part of success is raising funds and boosting the share price. That is their job, and how do you imagine they can do that? By sounding glum and down about the company's prospects? Don't make me laugh. Even if the company is literally haemorrhaging cash and has less than a week of runway left, senior executives are often so far up their own asses and surrounded by yes-men that they honestly believe they can turn things around. It's often not about willfully lying; it's just delusional belief and faith in something that is very unlikely. (Last-minute turnarounds and DSA do exist, but like lottery players, seeing the very few people who do win and mimicking them does not make you a winner, most of the time.)
View on HN · Topics
Snake oil salesmen will predict that "this will be the year when the snake oil you buy from us will cure all ailments". Nothing new under the sun.
View on HN · Topics
Interestingly enough, my understanding is that some snakes in Asia can be used to produce an oil that helps with joint problems. American snakes weren't useful for this. So something that was sort of useful in a niche application was co-opted by people who didn't know how to make it work and then ultra-hyped. The parallels are spot on.
View on HN · Topics
And publications can expect more readers for breathless hype articles than for sober analyses.
View on HN · Topics
I appreciate the effort, and that's a nice-looking project. That's similar to the gains I've gotten as well with greenfield projects (I use Codex too!). However, it's not as grandiose as the "Canadian girlfriend" category of posts.
View on HN · Topics
I work in insurance - regulated, human-capital heavy, etc. Three examples for you:

- Our policy agent extracts all coverage limits and policy details into a data ontology. This saves 10-20 minutes per policy, and it is more accurate and consistent than our humans.
- Our email-drafting agent will pull all relevant context on an account whenever an email comes in. It will draft a reply or an email to someone else based on context and workflow. Over half of our emails are now sent without meaningfully modifying the draft, up from 20% two months ago. Hundreds of hours are saved per week, now spent on more valuable work for clients.
- Our certificates agent will note when a certificate of insurance is requested over email and automatically handle the necessary checks and follow-up options or resolution. It will likely save us around $500k this year.

We also now increasingly share prototypes as a way to discuss ideas. Because the cost to vibe-code something illustrative is very low, it's often much higher fidelity to have the conversation with something visual than with a written document.
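An extraction step like the policy agent described above can be sketched roughly as follows. This is a minimal illustration, not the commenter's actual system; the schema, field names, and the stubbed-out model call are all assumptions:

```python
import json
from dataclasses import dataclass


@dataclass
class PolicyCoverage:
    """Target schema for one extracted coverage line (illustrative only)."""
    coverage_type: str
    limit_usd: int
    deductible_usd: int


def call_model(policy_text: str) -> str:
    """Stand-in for an LLM call. A real system would send the policy text
    with a prompt asking for JSON matching the PolicyCoverage schema."""
    return json.dumps([
        {"coverage_type": "general_liability",
         "limit_usd": 1_000_000,
         "deductible_usd": 5_000},
    ])


def extract_coverages(policy_text: str) -> list[PolicyCoverage]:
    rows = json.loads(call_model(policy_text))
    # Constructing the dataclass fails fast on missing or unexpected fields,
    # which is where LLM extraction output most often drifts.
    return [PolicyCoverage(**row) for row in rows]
```

The point of the schema step is that downstream code consumes typed records rather than free text, which is what makes the output auditable against the source policy.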
View on HN · Topics
Thanks for that. It's a really interesting data point. My takeaway, which I'd already felt, and which I imagine anyone dealing with insurance would share, is that the industry is wildly outdated. Which I guess offers a lot of low-hanging fruit where AI could be useful. Other than the email drafting, it really seems like all of that should have been handled by normal software decades ago.
View on HN · Topics
> So, errors can clearly happen, but they happen less often than they used to.

If you take the comment at face value. I'm sorry, but I've been around this industry long enough to be sceptical of self-serving statements like these.

> "draft" clearly implies a human will double-check.

I'm even more sceptical of that working in practice.
View on HN · Topics
That sounds a lot like "LLMs are finally powerful enough technology to overcome our paper/PDF-based business". Solving problems that frankly had no business existing in 2020.
View on HN · Topics
Lately, it seems like all the blogs have shifted away from talking about productivity and are now talking about how much they "enjoy" working with LLMs. If firing up old coal plants and skyrocketing RAM prices and $5000 consumer GPUs and violating millions of developers' copyrights and occasionally coaxing someone into killing themselves is the cost of Brian From Middle Management getting to Enjoy Programming Again instead of having to blame his kids for not having any time on the weekends, I guess we have no choice but to oblige him his little treat.
View on HN · Topics
It’s the honeymoon period with crack all over again. Everyone feels great until their teeth start falling out.
View on HN · Topics
I don't see how AI can bring about 10%+ annual economic growth, let alone infinite abundance, without somehow crossing the bit-to-atom interface. Without a breakthrough in general-purpose robotics - which feels decades away - agents will just be confined to optimizing B2B SaaS. Human utility is rooted in the physical environment. I find digital abundance incredibly uninspiring.
View on HN · Topics
> If superhuman intelligence is solved it'll be in the form of building a more healthy society (or, if you like, a society that can outcompete other societies).

Maybe so, but the point I'm trying to make is that this needs to look nothing like sci-fi ASI fantasies - or rather, it won't look and feel like that before we get the humanoid AI robots that the GP mentioned. You can have humans or human institutions using more or less specialized tools that together enable the system to act much more intelligently. There doesn't need to be a single system that individually behaves like a god - that's a misconception that comes from believing that intelligence is something like a computational soul, where if you just have more of it you'll eventually end up with a demigod.
View on HN · Topics
> But for now, I want to emphasize a broader point: I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.

> So, this is how I’m thinking about AI in 2026. Enough of the predictions. I’m done reacting to hypotheticals propped up by vibes. The impacts of the technologies that already exist are already more than enough to concern us for now…

SPOT ON; let us all take inspiration. "The impacts of the technologies that already exist are already more than enough to concern us for now"!
View on HN · Topics
Codex and the like took off because there existed a "validator" for their work: a collection of pre-existing non-LLM software - compilers, linters, code analyzers, etc. The second factor is the very limited and well-defined grammar of programming languages. Under such constraints it was much easier to build a text generator that validates itself using external tools in a loop, until the generated stream makes sense. The other "successful" industry being disrupted is the one where there is no need to validate output, because errors are OK or irrelevant: text not containing much factual data, like fiction or business lingo or spam. Or pictures, where it doesn't matter what color a specific pixel is; a rough match will do just fine. But outside of those two options, not many other industries can use an imprecise word or media generator at scale. Circular writing and parsing of business emails with no substance? Sure. Not much else.
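The generate-then-validate loop this comment describes can be sketched like this. It's a toy illustration; the stand-in generator and the choice of Python's own compiler as the validator are assumptions, not how any particular product works:

```python
def generate_candidate(attempt: int) -> str:
    """Stand-in for an LLM call. The first attempt is deliberately broken
    so the loop exercises the validator; a real system would feed the
    error message back into the model's next prompt."""
    if attempt == 0:
        return "def add(a, b)\n    return a + b\n"   # missing colon
    return "def add(a, b):\n    return a + b\n"


def validate(source: str) -> tuple[bool, str]:
    """Check a candidate with a non-LLM tool - here CPython's own compiler,
    playing the role of the external compiler/linter."""
    try:
        compile(source, "<generated>", "exec")
        return True, ""
    except SyntaxError as exc:
        return False, str(exc)


def generate_until_valid(max_attempts: int = 5) -> str:
    for attempt in range(max_attempts):
        candidate = generate_candidate(attempt)
        ok, error = validate(candidate)
        if ok:
            return candidate
        # In a real loop, `error` would be appended to the model prompt here.
    raise RuntimeError("no valid candidate within the attempt budget")
```

The loop terminates only because an objective external check exists, which is exactly the comment's point about why prose and images, with no such validator, are a different story.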
View on HN · Topics
> I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.

So well put. LLMs are useful for a great many things. It's just that being the best new product of recent years, maybe even a decade-defining one, doesn't cut it. It has to be the century-defining, world-ending, FOMO-inducing, massive thing that puts Skynet to shame and justifies trillion-dollar investments. It's either AI joining the workforce soon, or Nvidia and OpenAI aren't that valuable. I guess that manages to maximize shareholder value and make AI feel like a disappointment.
View on HN · Topics
The response to the Sal Khan op-ed resonated with me, along with other parts of this article. Something I've been digging into is the figures around proposed job losses from AI. I think I even posted a simulation paper last week. After posting that, I came across numerous papers which critique the approach of Frey & Osborne, who are among the forefathers of the AI job-loss figures we commonly see bandied around these days. One such paper is here, but I can dig out others: https://melbourneinstitute.unimelb.edu.au/__data/assets/pdf_... It has made me very cautious about bold statements on AI - and I was already at the cautious end.
View on HN · Topics
Job losses aren’t directly tied to productivity; in the short term it's all about expectations. Many companies are laying people off and then trying to get staff back when it doesn't work. How much of this is hype and how much is sustained is difficult to determine right now.
View on HN · Topics
It never made sense to blame AI in the first place for tech layoffs. You have a new tool that you think can supercharge your employees, make them ~10x as productive, and be leveraged to disrupt all sorts of industries - and you have the workforce best suited to learn and use these tools to their full potential. You think the value of labor may soon collapse, but there are piles of money to be made before that happens. If you truly believed that, you would be spinning up new projects and offshoots, since this is a serious arms race with a ton of potential upside (not just in developing AI, but in leveraging it to build things cheaper). Allegedly every dollar you spend on an engineer is potentially worth 10x(?) what it was a couple of years ago. Meaning your profit per engineer could soar - but tech companies decided they don't want more profit? AI is mostly solved and the value of labor has already collapsed? Or AI is a nice band-aid to prop up a smaller group of engineers while we weather the current economic/political environment, and most CXOs don't believe there are piles of money to be had by leveraging AI now or in the near future.
View on HN · Topics
We still sandbox, quarantine and restrict them though, because they can't really behave as agents, but they're effective in limited contexts. Like the way waymo cars kind of drive on a track I guess? Still very useful, but not the agents that were being sold, really. Edit: should we call them "special agents"? ;-)
View on HN · Topics
Terrible productivity loss vs. signing up for a hosted Wordpress site.
View on HN · Topics
It probably has one that the web form is already using, but if agentic AI requires specialized APIs, it's going to be a while before reality meets the hype.
View on HN · Topics
It was from Altman's blog:

> We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies...

"Materially change the output of companies" seems fairly well defined, and it didn't happen in most cases. I guess some kicked out more slop, but I don't think that's what he meant.
View on HN · Topics
Agentic AI companies are doing millions in revenue. Just because agents haven’t spread to the entire economy yet doesn’t mean they are not useful for relatively complex tasks.
View on HN · Topics
And just because people are throwing money at an AI company doesn't mean it has, or will ever have, a marketable product. The #1 product of nearly every AI company is hope - hope that one day it will replace the need to pay real employees. Hope like that allows a company to cut costs and fund dividends... in the short term. The long term is some other person's problem. (I'll change my mind the day Bill Gates trusts MS Copilot with his personal banking details.)
View on HN · Topics
When did Hacker News become laggard-adopter/consumer news? Cal is a consumer of AI - an interesting article for this community, but not of this community. I thought Hacker News was for builders and innovators: people who see the potential of a technology for solving problems big and small, who go tinker and build and explore with it, and who sometimes eventually change the world (hopefully for the better) - instead of sitting on the sidelines grumbling that some particular tech hasn't yet changed the world or met some particular hype. It's incredibly naive to think AI isn't making a real difference already (even without/before replacing labor en masse). Actually try to explore the impact a bit. It's not AGI, but it doesn't have to be to transform. It's everywhere and will do nothing but accelerate. Even better, be part of proving Cal wrong for 2026.
View on HN · Topics
And we enter the Trough of Discontent...
View on HN · Topics
Everyone excited about AI agents doesn't have to evaluate the actual output they produce. Very few people do, so neither Altman, nor the many CEOs industry-wide, nor Engineering Managers, Software Engineers, or "Forward Deployed Engineers" have to actually inspect anything; their demos show good-looking output. It's just the people in support roles who have to say, "wait a minute, this is very inconsistent," all while everyone is doing their best not to get replaced. It's clanker discrimination mixed with clanker incompetence.
View on HN · Topics
I predict all house cats will be replaced by robots by 2027. People just do not realise how great an effect AI and robotics will have on home pet ownership. However, as the CEO of the publicly listed company "Robot-cats-that-are-totally-like-awesome-and-are-gonna-like-totally-be-as-lovable-as-real-ones Inc.", I am seeing this change from a front-row seat. We plan to be at the helm of the new pet-robot future, which is not furry and cute, but cold and boring. Sources? What? But you're just a journalist, you're not supposed to challenge what I say, I'm a CEO! No, I'm not just using the media to create artificial hype to pull in investors and make money on bullshit that is never gonna work! How can you say that! It's a real thing, trust me bro!
View on HN · Topics
> But for now, I want to emphasize a broader point: I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities. yes, 100% I think that way too often, discussions of the current state of tech get derailed by talking about predictions of future improvements. hypothetical thought experiment: I set a New Year's resolution for myself of drinking less alcohol. on New Year's Eve, I get pulled over for driving drunk. the officer wants to give me a sobriety test. I respond that I have projected my alcohol consumption will have decreased 80% YoY by Q2 2026. the officer is going to smile and nod...and then insist on giving me the sobriety test. compare this with a non-hypothetical anecdote: I was talking with a friend about the environmental impacts of AI, and mentioned the methane turbines in Memphis [0] that are being used to power Elon Musk's MechaHitler slash CSAM generator. the friend says "oh, but they're working on building nuclear power plants for AI datacenters". and that's technically true...but it misses the broader point. if someone lives downwind of that data center, and they have a kid who develops asthma, you can try to tell them "oh in 5 years it'll be nuclear powered". and your prediction might be correct...but their kid still has asthma. 0: https://time.com/7308925/elon-musk-memphis-ai-data-center/
View on HN · Topics
Once again, more evidence mounts that AI is massively overhyped and limited in usefulness, and once again we will see people making grandiose claims (without evidence, of course) and predictions that will inevitably fall flat. We are, of course, perpetually just 3-6 months away from when everything changes. I think Carmack is right: LLMs are not the route to AGI.
View on HN · Topics
Pretty ironic that he complains about Khan citing someone who told him AI agents are capable of replacing 80% of call-center employees, right after quoting Gary Marcus, of all people, claiming LLMs will never live up to the hype. If you want to focus on what AI agents are actually capable of today, the last person I'd pay any attention to is Marcus, who has been wrong about nearly everything related to AI for years and does nothing but double down.
View on HN · Topics
What has he been wrong about? He was way ahead in predicting the scaling limitations and LLMs not making it to AGI.
View on HN · Topics
What scaling limitations? Gemini 3 shows us it's not over yet, and its little brother Flash is a hyper-sparse, 1T-parameter model (AIUI) that is both fast and good. I agree with GP; Marcus has not been an accurate or significant voice, and I couldn't care less what he has to say about AI. He's not a practitioner anymore in my mind.
View on HN · Topics
Well, clearly LLMs are not AGI, and all claims of them being "AGI" have been a pump-and-dump scam. So he got that dead right for years.
View on HN · Topics
Who has said that "LLMs are AGI"?
View on HN · Topics
Probably any sales and marketing department of a company with an "AI" product (based on an LLM) which is presented as having AGI-like capabilities. I doubt the parent poster was referring to anyone phrasing it in those literal terms. Kind of like how "some people claim flavored water can cure cancer" doesn't mean that's the literal pitch being given for the snake oil.
View on HN · Topics
That's complete bullshit, and you know it. The big labs are saying that AGI will be here soon, not that it's here now. Please prove me wrong.
View on HN · Topics
"We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies." We know how to build it and it will be entering the workforce in 2025. Well, we're in 2026 now and we don't have it in the workforce or anywhere else because they haven't built it because they don't really know how to build it because they're hucksters selling vaporware built on dead end technologies they cannot admit to.
View on HN · Topics
There needs to be a companion to Betteridge’s law that addresses AI-related headlines with “because since the beginning of time the field of artificial intelligence over-promises and under-delivers.”
View on HN · Topics
Cal Newport looked in the wrong places. He has no visibility into the usage of ChatGPT to do homework. The collapse of Chegg should tell you, with no other public information, that if 30% of students were already cheating somehow, somewhat weakly, they are now doing super-powered cheating, and surely more than 30% of students at this stage. It's also kind of stupid to hand-wave away programming. Programmers are where all the early adopters of software are. He's merely conflating an adoption curve with capabilities. Programmers, I'm sure, were also the first to use Google and smartphones. "It doesn't work for me" is missing the critical word "yet" at the end. And really, is it saying much that forecasting is hard when the metric is "years until Cal Newport's arbitrary criteria for what agents and adoption mean meet some threshold that exists only inside Cal Newport's head"? There are 700M weekly actives for ChatGPT. It has joined the workforce! It just isn't being paid the salaries.
View on HN · Topics
Read it again. He criticizes the hype built around 2025 as the Year X for agents. Many were thinking "we'll carry PCs in our pockets" when Windows Mobile-powered devices came out. Many predicted 2003 as the Year X for what we now call smartphones. No, it was 2008, with the iPhone launch.
View on HN · Topics
They're simply bluffing, and you called them on it. Thanks for your service. Too many people think they can just bullshit and bluff their way along and need to be taken down a peg, or for repeat offenders, shunned and ostracized.
View on HN · Topics
They're just bluffing. It's bullshitting they get away with everywhere else so they think it's acceptable here.