Summarizer

LLMs replacing Stack Overflow

← Back to Total monthly number of StackOverflow questions over time

The decline of Stack Overflow is frequently described as a "double whammy" where a history of toxic, pedantic moderation finally collided with the frictionless, judgment-free utility of LLMs. While many developers celebrate escaping the site’s "snarky gatekeeping" in favor of AI’s instant and context-specific answers, others mourn the loss of a public commons that offered "battle-scarred" human wisdom and nuanced expert debate. There is a growing concern that this shift toward private AI interactions creates a potential "knowledge hellscape" where future models may stagnate as the original sources of human innovation and collaborative documentation are cannibalized. Ultimately, the sentiment reflects a bittersweet transition from a community-curated knowledge base to an individualized, agent-driven workflow that prioritizes immediate solutions over the long-term sharing of human experience.

161 comments tagged with this topic

View on HN · Topics
The moderation definitely got kind of nasty in the last 5 years or so. To the point where you would feel unwelcome for asking a question you had already researched, and felt was perfectly sound to ask. However, that didn't stop millions of people from asking questions every day; it just felt kinda shitty to those of us who spent more time answering, when we actually needed to ask one on a topic we were lacking in. (Speaking as someone who never moderated). My feeling was always that the super mods were people who had too much time on their hands... and the site would've been better without them (speaking in the past tense, now). But I don't think that's what killed it. LLMs scraping all its content and recycling it into bite-sized Gemini or GPT answers - that's what killed it.
View on HN · Topics
Thinking they didn't keep up with the times or that they should've made changes is perfectly fine. It's the vitriol in some of the comments here I really can't stand. As for me, I also don't answer much anymore. But I'm not sure if it's due to the community or frankly because most of the low-hanging fruit is gone. Still sometimes visit, though. Even for things an LLM can answer, because finding it on SO takes me 2 seconds, but waiting for the LLM to write a novella about the wrong thing often takes longer.
View on HN · Topics
Right? It's a perfect example of the problem. In college, I worked tech support. My approach was to treat users as people. To see all questions as legitimate, and any knowledge differential on my part as a) the whole point of tech support, and b) an opportunity to help. But there were some people who used any differential in knowledge or power as an opportunity to feel superior. And often, to act that way. To think of users as a problem and an interruption, even though they were the only reason we were getting paid. I've been refusing to contribute to SO for so long that I can't even remember the details. But I still recall the feeling I got from their dismissive jackassery. Having their content ripped off by LLMs is the final blow, but they have richly earned their fate.
View on HN · Topics
> The fundamental value proposition of SO is getting an answer to a question I read an interview once with one of the founders of SO. They said the main value stackoverflow provided wasn't to the person who asked the question. It was for the person who googled it later and found the answer. This is why all the moderation pushes toward deleting duplicates of questions, and having a single accepted answer. They were primarily trying to make google searches more effective for the broader internet. Not provide a service for the question-asker or answerer. Sad now though, since LLMs have eaten this pie.
View on HN · Topics
> This is why all the moderation pushes toward deleting duplicates of questions, and having a single accepted answer. Having duplicates of the question is precisely why people use LLMs instead of StackOverflow. The majority of all users lack the vocabulary to properly articulate their problems using the jargon of mathematicians and programmers. Prior to LLMs, my use case for StackOverflow was something like this: 30 minutes trying (and failing) to use the right search terms to articulate the problem (remember, there was no contextual understanding, so if you used a word with two meanings and one of those meanings was more popular, you’d have to omit it using the exclusion operator). 30 minutes reading through the threads I found (half of which will have been closed or answered by users who ignored some condition presented by the OP). 5 minutes on implementation. 2 minutes pounding my head on my desk because it shouldn’t have been that hard. With an LLM, if the problem has been documented at any point in the last 20 years, I can probably solve it using my initial prompt even as a layman. When you’d actually find an answer on StackOverflow, it was often only because you finally found a different way of phrasing your search so that a relevant result came up. Half the time the OP would describe the exact problem you were having only for the thread to be closed by moderators as a duplicate of another question that lacked one of your conditions.
View on HN · Topics
LLMs also search Google for answers. Hence the knowledge may not be lost, even for those who only supervise machines that write code.
View on HN · Topics
> Sad now though, since LLMs have eaten this pie. By regenerating an answer on command and never caring about the redundancy, yeah. The DRY advocate within me weeps.
View on HN · Topics
Sad? No. A good LLM is vastly better than SO ever was. An LLM won't close your question for being off-topic in the opinion of some people but not others. It won't flame you for failing to phrase your question optimally, or argue about exactly which site it should have been posted on. It won't "close as duplicate" because a vaguely-similar question was asked 10 years ago in a completely-different context (and never really got a great answer back then). Moreover, the LLM has access to all instances of similar problems, while a human can only read one SO page at a time. The question of what will replace SO in future models, though, is a valid one. People don't realize what a massive advantage Google has over everyone else in that regard. So many site owners go out of their way to try to block OpenAI's crawlers, while simultaneously trying to attract Google's.
View on HN · Topics
Thinking from first principles, a large part of the content on Stack Overflow comes from the practical experience and battle scars of developers sharing them with others and cross-curating approaches. Privacy concerns notwithstanding, one could argue that having LLMs with us every step of the way (coding agents, debugging, devops tools, etc.) will make them a shared interlocutor, with vast swaths of experiential knowledge collected and redistributed at an even larger scale than SO and forum-style platforms allow for. It does remove the human touch, so it's quite a different dynamic, and the amount of data to collect is staggering and challenging from a legal point of view, but I suspect a lot of the knowledge used to train LLMs in the next ten years will come from large-scale telemetry and millions of hours of RL self-play where LLMs learn to scale and debug code from fizzbuzz to Facebook- and Twitter-like distributed systems.
View on HN · Topics
> Privacy concerns notwithstanding, one could argue having LLMs with us every step of the way - coding agents, debugging, devops tools etc. That might work until an LLM encounters a question it's programmed to regard as suspicious for whatever reason. I recently wanted to exercise an SMTP server I've been configuring, and wanted to do it by an expect script, which I don't do regularly. Instead of digging through the docs, I asked Google's Gemini (whatever's the current free version) to write a bare bones script for an SMTP conversation. It flatly refused. The explanation was along the lines "it could be used for spamming, so I can't do that, Dave." I understand the motivation, and can even sympathize a bit, but what are the options for someone who has a legitimate need for an answer? I know how to get one by other means; what's the end game when it's LLMs all the way down? I certainly don't wish to live in such a world.
View on HN · Topics
I don't know how others use LLMs, but once I find the answer to something I'm stuck on, I do not tell the LLM that it's fixed. This was a problem in forums as well, but I think even fewer people are going to give that feedback to a chatbot.
View on HN · Topics
Am I the only one that sees this as a hellscape? No longer interacting with your peers but an LLM instead? The knowledge centralized via telemetry and spying on every user's every interaction, and only available through an enshittified subscription to a model that's been trained on this stolen data?
View on HN · Topics
I actively hated interacting with the power users on SO, and I feel nothing about an LLM, so it's a definite improvement in QoL for me.
View on HN · Topics
Asking questions on SO was an exercise in frustration, not "interacting with peers". I've never once had a productive interaction there, everything I've ever asked was either closed for dumb reasons or not answered at all. The library of past answers was more useful, but fell off hard for more recent tech, I assume because people all were having the same frustrations as I was and just stopped going there to ask anything. I have plenty of real peers I interact with, I do not need that noise when I just need a quick answer to a technical question. LLMs are fantastic for this use case.
View on HN · Topics
It's funny, because I had a similar question but wanted to be able to materialize a view in Microsoft SQL Server, and ChatGPT went around in circles suggesting invalid solutions. There were about 4 possibilities that I had tried before going to ChatGPT, it went through all 4, then when the fourth one failed it gave me the first one again.
View on HN · Topics
You can't use the free chat client for questions like that in my experience. Almost guaranteed to waste your time. Try the big-3 thinking models (ChatGPT 5.2 Pro, Gemini 3 Pro, and Claude Opus 4.5).
View on HN · Topics
The "human touch" on StackOverflow?! I'll take the "robot touch," thanks very much.
View on HN · Topics
> It's just that those goals (i.e. "we want people to be able to search for information and find high-quality answers to well-scoped, clear questions that a reasonably broad audience can be interested in, and avoid duplicating effort") don't align with those of the average person asking a question (i.e. "I want my code to work"). This explains the graph in question: Stackoverflow's goals were misaligned to humans. Pretty ironic that AI bots goals are more aligned :-/
View on HN · Topics
The UX sounds better than Stack Overflow.
View on HN · Topics
The part where you don't talk to anyone else, just a robot intermediary which is simulating the way humans talk, is part of UX. Sounds like pretty horrifying UX.
View on HN · Topics
One UX experience that was clearly replaced by other services and spaces before the widespread use of AI doesn’t sound very compelling to me. Be more creative than AI.
View on HN · Topics
As long as software is properly documented, and documentation is published in LLM-friendly formats, LLMs may be able to answer most of the beyond basic questions even when docs don't explicitly cover a particular scenario. Take an API for searching products, one for getting product details, and then an API for deleting a product. The documentation does not need to cover the detailed scenario of "How to delete a product" where the first step is to search, the second step is to get the details (get the ID), and the third step is to delete. The LLM is capable of answering the question "how to delete the product 'product name'". To some degree, many of the questions on SO were beyond basic, but still possible for a human to answer if only they read documentation. LLMs just happen to be capable of reading A LOT of documentation a LOT faster, and then coming up with an answer A LOT faster.
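The composition described above is mechanical enough to sketch. Here's roughly what such an LLM-inferred answer boils down to, against a hypothetical API client (all method and field names are illustrative, not from any real product API):

```python
def delete_product_by_name(api, name):
    """Compose three separately documented endpoints into one workflow:
    search for the product, resolve its details (the ID), then delete it."""
    matches = api.search_products(query=name)      # step 1: search
    if not matches:
        raise LookupError(f"no product matching {name!r}")
    details = api.get_product(matches[0]["id"])    # step 2: get details / ID
    api.delete_product(details["id"])              # step 3: delete
    return details["id"]
```

The point is that none of the three endpoint docs needs to describe this combined scenario; stitching them together is exactly the inference step the LLM supplies.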
View on HN · Topics
There was, obviously, only one main reason: LLMs. Anything else makes no sense. Even if the moderation was "horrible" (which sounds to me like a horrible exaggeration), there was nothing which came close to being as good as SO. There was no replacement. People will use the best available platform, even if you insist on describing it as "horrible". It was not horrible compared to the alternatives, web forums like Reddit and HN, which are poorly optimized for answering questions.
View on HN · Topics
Look at the data - it had already been on the downslide for years before LLMs became a meaningful alternative. AI was the killing blow, but there were undoubtedly other factors.
View on HN · Topics
The earlier decline was much slower; the exponential decline that followed can only have been caused by LLMs.
View on HN · Topics
> I disagree with most comments that the brusque moderation is the cause of SO's problems, though it certainly didn't help. By the time my generation was ready to start using SO, the gatekeeping was so severe that we never began asking questions. Look at the graph. The number of questions was in decline before 2020. It was already doomed because it lost the plot and killed any valuable culture. LLMs were a welcome replacement for something that was not fun to use. LLMs are an unwelcome replacement for many other things that are a joy to engage with.
View on HN · Topics
> I disagree with most comments that the brusque moderation is the cause of SO's problems, though it certainly didn't help. SO has had poor moderation from the beginning. Overwhelmingly, people consider the moderation poor because they expect to be able to come to the site and ask things that are well outside of the site's mission. (It's also common to attribute community actions to "moderators" who in reality have historically done hardly any of it; the site simply didn't scale like that. There have been tens of millions of questions, versus a couple dozen moderators.) The kinds of questions that people are getting quick, accurate answers for from an LLM are, overwhelmingly, the sort of thing that SO never wanted. Generally because they are specific to the person asking: either that person's issue won't be relevant to other people, or the work hasn't been done to make it recognizable by others. And then of course you have the duplicates. You would not believe the logic some people put forward to insist that their questions are not duplicates; that they wouldn't be able, in other words, to get a suitable answer (note: the purpose is to answer a question, not solve a problem) from the existing Q&A. It is as though people think they are being insulted when they are immediately given a link to where they can get the necessary answer, by volunteers. I agree that Reddit played a big role in this. But not just by answering questions; by forming a place where people who objected to the SO content model could congregate. Insulting other users is and always has been against the Stack Overflow Code of Conduct. The large majority of insults, in my experience, come from new users who are upset at being politely asked to follow procedures or told that they aren't actually allowed to use the site the way they're trying to.
There have been many duplicate threads on the meta site about why community members (with enough reputation) are permitted to cast close votes on questions without commenting on what is wrong. The consensus: close reasons are usually fairly obvious; there is an established process for people to come to the meta site to ask for more detailed reasoning; and comments aren't anonymous, so it makes oneself a target.
View on HN · Topics
>> Question can be marked as duplicate without an answer. > No, they literally cannot. You missed that people repeatedly closed questions as duplicates when they were not duplicates. So the question had an answer, just to a different, mildly related question. LLMs have their problems, but they gaslight me in, say, 3% of cases, not 60% of cases like SO mods.
View on HN · Topics
This doesn't mean that it's over for SO. It just means we'll probably trend towards quality over quantity. Measuring SO's success by the number of questions asked is like measuring code quality by lines of code. Eventually SO would trend down simply through advancements in search technology helping users find existing answers rather than asking new ones. It just so happened that AI advances made it even better (in terms of not needing to ask redundant questions).
View on HN · Topics
Too bad stack overflow didn't high-quality-LLM itself early. I assume it had the computer-related brainpower. With respect to the "moderation is the cause" thing... although I also don't buy moderation as the cause, I wonder if any sort of friction from the "primary source of data" can cause acceleration. For example, when I'm doing an internet search for the definition of a word like buggywhip, some search results from the "primary source" show: > buggy whip, n. meanings, etymology and more | Oxford English Dictionary > Factsheet What does the noun buggy whip mean? There is one meaning in OED's entry for the noun buggy whip. See 'Meaning & use' for definition, usage, and quotation evidence. These are non-answers designed to keep their traffic. But the AI answer is... the answer. If SO had had some clear AI answer + references early on, I think that would have kept people on their site.
View on HN · Topics
The newer questions that LLMs can't answer will be answered in forums - either SO, reddit, or elsewhere. There will be a much higher percentage of relevant content with far fewer new pages regurgitating questions about solved problems. So the LLMs will be able to keep up.
View on HN · Topics
I think the interesting thing here for those of us who use open source frameworks is that we can ask the LLM to look at the source to find the answer (eg. Pytorch or Phoenix in my case). For closed source libraries I do not know.
View on HN · Topics
> will we find new, better ways to find answers to technical questions? I honestly don't think they need to. As we've seen so far, for most jobs in this world, answers that sound correct are good enough. Is chasing more accuracy a good use of resources if your audience can't tell the difference anyway?
View on HN · Topics
> SO was by far the leading source of high quality answers to technical questions We will arrive at most answers by talking to an LLM. Many of us have an idea about what we want. We relied on SO for some details/quirks/gotchas. Example of a common SO question: how to do x in a library or language or platform? Maybe post on the GitHub for that lib. Or forums... there are quirky systems like Salesforce or Workday which have robust forums, where the forums are still much more effective than LLMs.
View on HN · Topics
> I disagree with most comments that the brusque moderation is the cause of SO's problems Questions asked on SO that got downvoted by the heavy-handed moderation would have been answered by LLMs without any of the flak whatsoever. Those who downvoted others' questions on SO for not being good enough must be asking a lot of such not-good-enough questions to an LLM today. Sure, the SO system worked, but it was user hostile and I'm glad we all don't have to deal with it anymore.
View on HN · Topics
I spent the last 14 days chasing an issue with a Spark transform. Gemini and Claude were exceptionally good at giving me answers that looked perfectly reasonable: none of them worked, and they were almost always completely off track. Eventually I tried something else and found a question on stackoverflow, luckily with an answer. That was the game changer, and eventually I was able to find the right doc on the Spark (actually Iceberg) website that gave me the final fix. This is to say that LLMs might be more friendly. But losing SO means that we're getting an idiot friendly guy with a lot of credible but wrong answers in place of a grumpy and possibly toxic guy which, however, actually answered our questions. Not sure why anyone thinks this is a good thing.
View on HN · Topics
What I always appreciate about SO is the dialogue between commenters. LLMs give one answer, or bullet points around a theme, or just dump a load of code in your IDE. SO gives a debate, in which the finer points of an issue are thrashed out, with the best answers (by and large) floating to the top. SO, at its best, is numerous highly-experienced and intelligent humans trying to demonstrate how clever they are. A bit like HN, you learn from watching the back and forth. I don't think this is something that LLMs can ever replicate. They don't have the egos and they certainly don't have the experience. Whatever people's gripes about the site, I learned a hell of a lot from it. I still find solutions there, and think a world without it would be worse.
View on HN · Topics
The fundamental difference between asking on SO and asking an LLM is that SO is a public forum, and an LLM will be communicated with in private. This has a lot of implications, most of which surround the ability for people to review and correct bad information.
View on HN · Topics
The other major benefit of SO being a public forum is that once a question was wrestled with and eventually answered, other engineers could stumble upon and benefit from it. With SO being replaced by LLMs, engineers are asking LLMs the same questions over and over, likely getting a wide range of different answers (some correct and others not) while also being an incredible waste of resources.
View on HN · Topics
Surely the fundamental difference is one asks actual humans who know what's right vs statistical models that are right by accident.
View on HN · Topics
Providing context to ask a Stack Overflow question was time-consuming. In the time it takes to properly format and ask a question on Stack Overflow, an engineer can iterate through multiple bad LLM responses and eventually get to the right one. The stats tell the uncomfortable truth. LLMs are a better overall experience than Stack Overflow, even after accounting for inaccurate answers from the LLM. Don't forget, human answers on Stack Overflow were also often wrong or delayed by hours or days. I think we're romanticizing the quality of the average human response on Stack Overflow.
View on HN · Topics
That's only because of LLMs consuming pre-existing discussions on SO. They aren't creating novel solutions.
View on HN · Topics
What I'm appreciating here is the quality of the _best_ human responses on SO. There are always a number of ways to solve a problem. A good SO response gives both a path forward, and an explanation why, in the context of other possible options, this is the way to do things. LLMs do not automatically think of performance, maintainability, edge cases etc. when providing a response, in no small part because they do not think. An LLM will write you a regex HTML parser.[0] The stats look bleak for SO. Perhaps there's a better "experience" with LLMs, but my point is that this is to our detriment as a community. [0]: He comes, https://stackoverflow.com/questions/1732348/regex-match-open...
View on HN · Topics
> I don't think this is something that LLMs can ever replicate. They don't have the egos and they certainly don't have the experience. Interesting question - the result is just words, so surely an LLM can simulate an ego. Feed it the Linux kernel mailing list? Isn't back and forth exactly what the new MoE thinking models attempt to simulate? And if they don't have the experience, isn't that just a question of tokens?
View on HN · Topics
SO was somewhere people put their hard won experience into words, that an LLM could train on. That won't be happening anymore, neither on SO or elsewhere. So all this hard won experience, from actually doing real work, will be inaccessible to the LLMs. For modern technologies and problems I suspect it will be a notably worse experience when using an LLM than working with older technologies. It's already true for example, when using the Godot game engine instead of Unity. LLMs constantly confuse what you're trying to do with Unity problems, offer Unity based code solutions etc.
View on HN · Topics
> Isn’t back and forth exactly what the new MoE thinking models attempt to simulate? I think the name "Mixture of Experts" might be one of the most misleading labels in our industry. No, that is not at all what MoE models do. Think of it rather like, instead of having one giant black box, we now have multiple smaller opaque boxes of various colors, and somehow (we don't really know how) we're able to tell if your question is "yellow" or "purple" and send it to the matching opaque box to get an answer. The result is that we're able to use fewer resources to solve any given question (by activating smaller boxes instead of the original huge one). The problem is we don't know in advance which questions are of which color: it's not like one "expert" knows CSS and the other knows car engines. It's just more floating point black magic, so "How do I center a div" and "what's the difference between a V6 and V12" are both "yellow" questions sent to the same box/expert, while "How do I vertically center a div" is a red question, and "what's the most powerful between a V6 and V12" is a green question which activates a completely different set of weights.
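To make the "colored boxes" concrete, here's a toy sketch of MoE routing in plain Python (everything here is illustrative, not any real model's code): a learned gate scores every expert on opaque features of the input, and only the top-k highest-scoring experts' weights actually run.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def moe_forward(x, gate_vecs, experts, top_k=1):
    """Route an input vector through a toy mixture-of-experts layer.

    gate_vecs[i] scores expert i; only the top_k highest-scoring experts
    are activated, and their outputs are blended by softmax weight.
    Which expert fires is learned floating-point routing, not a
    human-legible topic assignment.
    """
    logits = [dot(g, x) for g in gate_vecs]
    top = sorted(range(len(logits)), key=lambda i: logits[i])[-top_k:]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    probs = [e / total for e in exps]
    out = [0.0] * len(experts[0])                  # output dimension
    for p, i in zip(probs, top):
        y = [dot(row, x) for row in experts[i]]    # run only the chosen expert
        out = [o + p * yi for o, yi in zip(out, y)]
    return out, top
```

The resource saving is visible in the loop: with top_k=1 and, say, 8 experts, only one eighth of the expert weights are ever touched for a given input, which is the whole point of the architecture.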
View on HN · Topics
You can ask an LLM to provide multiple approaches to solutions and explore the pros and cons of each, then you can drill down and elaborate on particular ones. It works very well.
View on HN · Topics
It's flat wrong to suggest SO had the right answer all the time, and in fact in my experience for trickier work it was often wrong or missing entirely. LLMs have a better hit rate with me.
View on HN · Topics
Yes, it does answer your question, when the site lets it go through. Note that "answers your question" does not mean "solving your problem". Sometimes the answer to a question is "this is infeasible because XYZ" and that's good feedback to get to help you re-evaluate a problem. Many LLMs still struggle with this and would rather give a wrong answer than a negative one. That said, the "why don't you use X" response is practically a stereotype for a reason. So it's certainly not always useful feedback. If people could introspect and think "can 'because my job doesn't allow me to install Z' be a valid response to this", we'd be in a true Utopia.
View on HN · Topics
Interpreting that claim as "SO users always, 100% of the time answer questions correctly" is uncharitable to the point of being unreasonable. Most people would interpret the claim as concisely expressing that you get better accuracy from grumpy SO users than friendly LLMs.
View on HN · Topics
For the record I was interpreting that as LLMs are useless (which may have been just as uncharitable), which I categorically deny. I would say they're about just as useful without wading through the mire that SO was.
View on HN · Topics
I'm hoping increasingly we'll see agents helping with this sort of issue. I would like an agent that would do things like pull the Spark repo into the working area and consult the source code / cross-reference against what you're trying to do. One technique I've used successfully is to do this 'manually' to ensure Codex/Claude Code can grep around the libraries I'm using.
View on HN · Topics
You still get the same thing though? That grumpy guy is using an LLM and debugging with it. Solves the problem. The AI provider fine-tunes their model with this. You now have his input baked into its response. How do you think these things work? It's either direct human input it's remembering or an RL environment made by a human to solve the problem you are working on. Nothing in it is "made up"; it's just a resolution problem which will only get better over time.
View on HN · Topics
Because what you're describing is the exception. Almost always with LLMs I get a better solution, or a helpful pointer in the direction of a solution, and I get it much faster. I honestly don't understand how anyone could prefer Google/SO, and in fact the numbers show that they don't. You're in an extreme minority.
View on HN · Topics
> But losing SO means that we're getting an idiot friendly guy with a lot of credible but wrong answers in place of a grumpy and possibly toxic guy which, however, actually answered our questions. Which by the way is incredibly ironic to read on the internet after like fifteen years of annoying people left and right about toxic this and toxic that. Extreme example: Linus Torvalds used to be notoriously toxic. Would you still defend your position if the “grumpy” guy answered in Linus’ style?
View on HN · Topics
> Would you still defend your position if the “grumpy” guy answered in Linus’ style? If they answered correctly, yes. My point is that providing _actual knowledge_ is by itself so much more valuable compared to _simulated knowledge_, in particular when that simulated knowledge is hyper realistic and wrong.
View on HN · Topics
Not a big surprise once LLMs came along: stack overflow developed some pretty unpleasant traits over time. Everything from legitimate questions being closed for no good reason (or being labeled a duplicate even though they often weren’t), out of date answers that never get updated as tech changes, to a generally toxic and condescending culture amongst the top answerers. For all their flaws, LLMs are so much better.
View on HN · Topics
Agreed. I personally stopped contributing to StackOverflow before LLMs, because of the toxic moderation. Now with LLMs, I can't remember the last time I visited StackOverflow.
View on HN · Topics
This has been my experience. My initial (most popular) questions (and I asked almost twice as many questions as I gave answers) were pretty basic, but they started getting a lot more difficult as time went on, and they went unanswered almost always (I often ended up answering my own question, after I figured it out on my own). I was pretty pissed at this, because the things I encountered were the types of things that people who ship encounter; not academic exercises. Tells me that, for all the bluster, a lot of folks on there don't ship. LLMs may sometimes give pretty sloppy answers, but they are almost always ship-relevant.
View on HN · Topics
Yeah, I think this is the real answer. I still pop into SO when learning a new language or when I run into new simple questions (in my case, how to connect and test a local server). But when you're beyond the weeds, SO is at best an oasis in the desert. Half the time a mirage; nice when it does help out, but rare either way. I don't use LLMs either. But the next generation might feel differently, and those trends mean there are no new users coming in.
View on HN · Topics
Maybe there's a key idea for something to replace StackOverflow as a human tech Q&A forum: Having a system which somehow incentivizes asking and answering these sorts of challenging and novel questions. These are the questions which will not easily be answered using LLMs, as they require more thought and research.
View on HN · Topics
> the more experienced you become, the less useful it is This is the killer feature of LLMs - you will not become more experienced.
View on HN · Topics
Gen 0: expertsexchange.com, later experts-exchange.com (1996) Gen 1: stackoverflow.com (2008) Gen 2: chatgpt.com (2022, sort of)
View on HN · Topics
None of those worked as programming tools. I really miss Google Answers though, with the bounties. Random example: http://answers.google.com/answers/threadview/id/762357.html It's remarkable how similar in style the answers are to what we all know from e.g. chatgpt.
View on HN · Topics
Which is why LLMs are so much more useful than SO, and likely always will be. LLMs do this even. Like, trying to write my own queue from scratch, I ask an LLM for feedback, and I think it's Gemini that often tells me Python's deque is better. Duh! That's not the point. So I've gotten into the habit of prefacing a lot of my prompts with "this is just for practice" or things of that nature. It actually gets annoying, but it's 1,000x more annoying finding a question on SO that is exactly what you want to know but it's closed and the replies are like "this isn't the correct way to do this" or "what you actually want to do is Y".
View on HN · Topics
Stack Overflow would still have a vibrant community if it weren't for the toxicity. Imagine a non-toxic Stack Overflow replacement that operated as an LLM + wiki (CC-licensed) with a community to curate it. That seems like the sublime optimal solution, combining both AI and expertise. Use LLMs to get public-facing answers, and let the community fix things up. No over-moderation for "duplicates" or other SO heavy-handed moderation memes.

Someone could ask a question, and an LLM could take a first stab at an answer. The author could correct it or ask further questions, and then the community could fill in when it goes off the rails or can't answer. You would be able to see which questions were too long-tail or difficult for the AI to answer, and humans could jump in to patch things up. This could be gamified with points.

This would serve as fantastic LLM training material for local LLMs. The authors of the site could put in a clause saying that "training is allowed as long as you publish your weights + model". Someone please build this.

Edit: Removed the first sentence, "LLMs did not kill Stack Overflow.", as suggested. Perhaps that wasn't entirely accurate, and the rest of the argument stands better on its own legs.
View on HN · Topics
Hardly.

- A huge number of developers will want to use such a tool. Many of them are already using AI in a "single player" mode.
- 80% of the answers will be correct when one-shot for questions of moderate difficulty.
- The long tail of "corrector" / "wiki gardening" / pedantic types will fix the errors, especially if you gamify it.

Just because someone doesn't like AI doesn't mean the majority share the same opinion. AI products are the fastest-growing products in history. ChatGPT has over a billion MAUs. It's effectively won over all of humanity.

I'm not some vibe coder. I've been programming since the '90s, including on extremely critical multi-billion-dollar daily transaction volume infra, yet I absolutely love AI. The models have lots of flaws and shortcomings, but they're incredibly useful and growing in capability and scope -- I'll stand up and serve as your counter-example.
View on HN · Topics
I just think you could save a lot of money and energy doing all this while skipping the LLM part. What is supposed to be gained? The moment/act of actually generating lines of code or ideas, whether human or not, is a much smaller piece of the pie relative to ongoing correction, curation, etc. (like you indicate). Focusing on it, and saying it intrinsically must/should come from the LLM, mistakes both the intrinsically ephemeral utility of the LLMs and the arguably eternal nature of the wiki. As a sibling says, it turns it into work vs. the healthy sharing of ideas. The whole pitch here feels like putting gold flakes on your pizza: expensive, and it would not be missed if it wasn't there. Just to say, I'm maybe not as experienced and wise, I guess, but this definitely sounds terrible to me. But whatever floats your boat!
View on HN · Topics
> Someone could ask a question, an LLM could take a first stab at an answer. The author could correct it or ask further questions, and then the community could fill in when it goes off the rails or can't answer. Isn't this how Quora is supposed to operate?
View on HN · Topics
Oh yeah. My favorite feature of LLMs is that the only dumb question is the one I don't ask. I guess someone could train an LLM to be spiteful and nasty, but that would only be for entertainment.
View on HN · Topics
That depends on what you mean by "came along". If you mean "once everyone came around to the idea that LLMs were going to be good at this thing", then sure, but it was not long ago that the majority of people around here were very skeptical of the idea that LLMs would ever be any good at coding.
View on HN · Topics
What you're describing is the field completely changing over three years; that's nothing, as windows for everyone to change their minds go. LLMs were not productized in a meaningful way before ChatGPT in 2022 (companies had sufficiently strong LLMs, but RLHF didn't exist to make them "PR-safe"). Then we basically just had to wait for LLM companies to copy Perplexity and add search engines everywhere (RAG already existed, but I guess it was not realistic to RAG the whole internet), and they became useful enough to replace Stack Overflow.
View on HN · Topics
I don't think this is true. People were skeptical of AGI / better-than-human coding, which is not the same thing. As a matter of fact, I think searching docs was one of the first major uses of LLMs, before code.
View on HN · Topics
That's because there has been rapid improvement by LLMs. Their tendency to bullshit is still an issue, but if one maintains a healthy skepticism and uses a bit of logic it can be managed. The problematic uses are where they are used without any real supervision. Enabling human learning is a natural strength for LLMs and works fine since learning tends to be multifaceted and the information received tends to be put to a test as a part of the process.
View on HN · Topics
All true, but I still find myself asking questions there after an LLM gave wrong answers and wasted my time.
View on HN · Topics
How can we be sure that LLMs won't start giving stale answers?
View on HN · Topics
They will, but model updates and competition help solve the problem. If people find that Claude consistently gives better/more relevant answers than GPT, for example, people will choose the better model. The worst thing with Q&A sites isn't that they don't work; it's that there are no alternatives to Stack Overflow. Some of the most upvoted answers on Stack Overflow prove that it can work well in many cases, but too bad most other times it doesn't.
View on HN · Topics
> For all their flaws, LLMs are so much better

But LLMs get their answers from Stack Overflow and similar places being used as the source material. As those start getting outdated because of the lack of activity, LLMs won't have the source material to answer questions properly.
View on HN · Topics
I regularly use Claude and friends where I ask it to use the web to look at specific GitHub repos or documentation to ask about current versions of things. The “LLMs just get their info from stack overflow” trope from the GPT-3 days is long dead - they’re pretty good at getting info that is very up to date by using tools to access the web. In some cases I just upload bits and pieces from a library along with my question if it’s particularly obscure or something home grown, and they do quite well with that too. Yes, they do get it wrong sometimes - just like stack overflow did too.
View on HN · Topics
Now they can read the documentation and code in the repo directly and answer based on that.
View on HN · Topics
Yep, LLMs are perfect for the "quick but annoying to answer 500 times" questions about writing a short script, or configuring something, or using the right combination of command-line parameters. Quicker than searching the entirety of Google results, and none of the attitude.
View on HN · Topics
Indeed. StackOverflow was by far the most unpleasant website that I have regularly interacted with. Sometimes, just seeing how users were treated there (even in Q&A threads that I wasn’t involved in at all) disturbed me so much it was actually interfering with my work. I’m so, so glad that I can now just ask an AI to get the same (or better) answers, without having to wade through the barely restrained hate on that site.
View on HN · Topics
Right. I often end up on Stack Exchange when researching various engineering-related topics, and I'm always blown away by how incredibly toxic the threads are. We get small glimpses of that on HN, but it was absolutely out of control on Stack Exchange. At the same time, I think there was another factor: at some point, the corpus of answered questions has grown to a point where you no longer needed to ask, because by default, Google would get you to the answer page. LLMs were just a cherry on top.
View on HN · Topics
There is an obvious acceleration of the downwards trend at the time ChatGPT got popular. AI is clearly a part of this, but not the only thing that affects SO activity.
View on HN · Topics
I wonder if we can attribute some $billion of the investment in LLMs directly to the toxicity on StackOverflow.
View on HN · Topics
It is sort of because of AI - it provided a way of escaping StackOverflow's toxicity!
View on HN · Topics
Could view it as push/pull dynamics: pushed away by toxicity, pulled to good answers from AI.
View on HN · Topics
I once published a method for finding the closest distance between an ellipse and a point on SO: https://stackoverflow.com/questions/22959698/distance-from-g...

I consider it the most beautiful piece of code I've ever written, and perhaps my one minor contribution to human knowledge. It uses a method I invented, is just a few lines, and converges in very few iterations. People used to reach out to me all the time with uses they had found for it; it was cited in a PhD and apparently lives in some collision plugin for Unity. Haven't heard from anyone in a long time.

It's also my test question for LLMs, and I've yet to see my solution regurgitated. Instead they generate some variant of Newton's method; ChatGPT 5.2 gave me an LM implementation and acknowledged that Newton's method is unstable (it is, which is why I went down the rabbit hole in the first place).

Today I don't know where I would publish such a gem. It's not something I'd bother writing up in a paper, and SO was the obvious place where people who wanted an answer to this question would look. Now there is no central repository; instead everyone individually summons the ghosts of those passed, in loneliness.
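For comparison, the generic approach LLMs tend to reach for, a Newton iteration on the parametric angle, looks roughly like the sketch below. To be clear, this is an illustration of the "variant of Newton's method" mentioned above, not the method from the linked answer, and the function name and starting guess are my own choices:

```python
import math

def ellipse_point_distance(a, b, px, py, iters=20):
    """Distance from point (px, py) to the ellipse x^2/a^2 + y^2/b^2 = 1.

    Newton's method on the parametric angle t, minimizing
    f(t) = (a*cos(t) - px)^2 + (b*sin(t) - py)^2.
    This is the textbook approach the comment calls unstable,
    NOT the poster's published iteration.
    """
    px, py = abs(px), abs(py)        # symmetry: solve in the first quadrant
    t = math.atan2(a * py, b * px)   # heuristic starting guess
    for _ in range(iters):
        c, s = math.cos(t), math.sin(t)
        dx, dy = a * c - px, b * s - py
        f1 = -a * s * dx + b * c * dy                                  # f'(t)/2
        f2 = a * a * s * s - a * c * dx + b * b * c * c - b * s * dy   # f''(t)/2
        if f2 == 0:
            break
        step = f1 / f2
        t -= step                    # unguarded Newton step
        if abs(step) < 1e-12:
            break
    return math.hypot(a * math.cos(t) - px, b * math.sin(t) - py)
```

Note the lack of safeguards: with an unlucky starting point (e.g. near the center or the evolute) the unguarded step can overshoot, which is exactly the instability described above.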
View on HN · Topics
StackOverflow is famously obnoxious about questions badly asked, badly categorized, duplicated… It's actually a topic on which StackOverflow would benefit from AI A LOT. Imagine StackOverflow rebrands itself as the place where you ask the LLM and it benefits the world, correctly rephrasing the question behind the scenes and creating public records for it.
View on HN · Topics
The company tried this. It fell through immediately. So they went away, and came back with a much improved version. It also fell through immediately. Turns out, this idea is just bad: LLMs can't rephrase questions accurately, when those questions are novel, which is precisely the case that Stack Overflow needs. For the pedantic: there were actually three attempts, all of which failed. The question title generator was positively received ( https://meta.stackexchange.com/q/388492/308065 ), but ultimately removed ( https://meta.stackoverflow.com/q/424638/5223757 ) because it didn't work properly, and interfered with curation. The question formatting assistant failed obviously and catastrophically ( https://meta.stackoverflow.com/a/425167/5223757 ). The new question assistant failed in much the same ways ( https://meta.stackoverflow.com/a/432638/5223757 ), despite over a year of improvements, but was pushed through anyway.
View on HN · Topics
Yeah, I suspect that a lot of the decline represented in the OP's graph (starting around early 2020) is actually Discord, and that LLMs weren't much of a factor until ChatGPT (GPT-3.5) launched in late 2022. LLMs have definitely accelerated Stack Overflow's demise though. No question about that. Also makes me wonder if Discord has a licensing deal with any of the large LLM players. If they don't, then I can't imagine that will last for long. It will eventually just become too lucrative for them to say no, if it hasn't already.
View on HN · Topics
I believe the community has seen the benefit of forums like SO and we won’t let the idea go stale. I also believe the current state of SO is not sustainable with the old guard flagging any question and response you post there. The idea can/should/might be re-invented in an LLM context and we’re one good interface away from getting there. That’s at least my hope.
View on HN · Topics
I had a similar beautiful experience where an experienced programmer answered one of my elementary JavaScript typing questions when I was just starting to learn programming. He didn't need to, but he gave the most comprehensive answer possible, attacking the question from various angles. He taught me the value of deeply understanding the theoretical and historical aspects of computing, to understand why some parts of programming exist the way they do. I'm still thankful. If this were repeated today, an LLM would have given a surface-level answer, or worse yet would've done the thinking for me, obviating the question in the first place. I wrote a blog post about my experience at https://nmn.gl/blog/ai-and-learning
View on HN · Topics
Had a similar experience. Asked a question about a new language feature in Java 8 (parallel streams), and one of the language designers (Goetz) answered my question about the intention of how to use it. An LLM couldn't have done the same. Someone would have to ask the question and someone answer it for indexing by the LLM. If we all just ask questions in closed chats, lots of new questions will go unanswered, as those with the knowledge have simply not been asked to write the answers down anywhere.
View on HN · Topics
You can prompt the LLM to not just give you the answer. Possibly even ask it to consider the problem from different angles but that may not be helpful when you don't know what you don't know.
View on HN · Topics
Has anyone tried building a modern Stack Overflow that's actually designed for AI-first developers? The core idea: question gets asked → immediately shows answers from 3 different AI models. Users get instant value. Then humans show up to verify, break it down, or add production context. But flip the reputation system: instead of reputation for answers, you get it for catching what's wrong or verifying what works. "This breaks with X" or "verified in production" becomes the valuable contribution. Keep federation in mind from day one (did:web, did:plc) so it's not another closed platform. Stack Overflow's magic was making experts feel needed. They still do—just differently now.
View on HN · Topics
Why disallow AI input? Is it that poor? Surely it isn't.
View on HN · Topics
What's the point of AI on a site like that? Wouldn't you just ask an LLM directly if you were fine with AI answers?
View on HN · Topics
I and many others already go the extra mile to ask multiple LLMs hard questions, to get a diversity of AI opinions to then internalize and cross-check myself. There are apps that have built up a nicely sized user base on just this small convenience of getting two answers at once. REF: https://lmarena.ai/ https://techcrunch.com/2025/05/21/lm-arena-the-organization-...

All the major AI companies of course do not want to give you the answers from other AIs, so this service needs to be a third party. But beyond that, there are hard/niche questions where the AIs are often wrong and humans also have a hard time getting it right, but with a larger discussion and multiple minds chewing on the problem, one can often get to a more correct answer by process of elimination.

I encountered this recently in a niche non-US insurance project, and I basically coded together the above as an internal tool: AI suggestions + human collaboration to find the best answer. Of course, in this case everyone is getting paid to spend time with this thing, so it's more like an AI-first Stack Overflow Internal. I have no evidence that a public version would do well when people don't get paid to comment and rate.
View on HN · Topics
I think this could be really cool, but the tricky thing would be knowing when to use it instead of just asking the question directly to whichever AI. It’s hard to know that you’ll benefit from the extra context and some human input unless you already have a pretty good idea about the topic.
View on HN · Topics
AI is generally set up to return the "best" answer, defined as the most common answer, not the most correct, efficient, or effective one, unless the underlying data leans that way. It's why AI-based web search isn't behaving like Google-based search. People clicking on the best results really was a signal to Google about what solution was being sought. Generally, I don't know that LLMs are covering this type of feedback loop.
View on HN · Topics
Models are NOT search engines. Even if LLMs were trained on the answer, that doesn't mean they'll ever recommend it. Regardless of how accurate it may be. LLMs are black box next token predictors and that's part of the issue.
View on HN · Topics
Why did SO decide to do that to us? To not invest in AI and then, IIRC, claim ownership of our contributions. I sometimes go back to answers I gave, even to questions I answered myself.
View on HN · Topics
The graph is scary, but I think it's conflating two things:

1. Newbies asking badly written basic questions, barely allowed to stay, answered by hungry users trying to farm points, and never re-read again. This used to be the vast majority of SO questions by number.

2. Experienced users facing a novel problem, asking questions that will be the primary search result for years to come.

It's #1 that's being cannibalized by LLMs, and I think that's good for users. But #2 really has nowhere else to go; ChatGPT won't help you when all you have is a confusing error message caused by the confluence of three different bugs between your code, the platform, and an outdated dependency. And LLMs will need training data for the new tools and bugs that are coming out.
View on HN · Topics
I’m going to argue the opposite. LLMs are fantastic at answering well-posed questions. They are like chess machines evaluating a tonne of scenarios. But they aren’t that good at guessing what you actually have on your mind. So if you are a novice, you have to be very careful about framing your questions. Sometimes it’s just easier to ask a human to point you in the right direction. But SO, despite being human, has always been awful to novices.

On the other hand, if you are experienced, it’s really not that difficult to get what you need from an LLM, and unlike on SO, you don’t need to worry about offending an overly sensitive user or a moderator. LLMs never get angry at you; they never complain about incorrect formatting or being too lax in your wording. They have infinite patience for you.

This is why SO is destined to be reduced to a database of well-structured questions and answers that will gradually become more and more irrelevant as time goes by.
View on HN · Topics
Yes, LLMs are great at answering questions, but providing reasonable answers is another matter. Can you really not think of anything that hasn't already been asked and isn't in any documentation anywhere? I can only assume you haven't been doing this very long. Fairly recently I was confronted with a Postgres problem; LLMs had no idea, it wasn't in the manual, it needed someone with years of experience. I took it to IRC and someone actually helped me figure it out. Until "AI" gets to the point where it has run software for years and gained experience, or can figure out everything just by reading the source code of something like Postgres, it won't be useful for stuff that hasn't been asked before.
View on HN · Topics
Not sure. As software becomes a commodity I can see the "old school" like tech slowing down (e.g. programming languages, frameworks frontend and backend, etc). The need for a better programming language is less now since LLM's are the ones writing code anyway more so these days - the pain isn't felt necessarily by the writer of the code to be more concise/expressive. The ones that do come out will probably have more specific communities for them (e.g. AI)
View on HN · Topics
The obvious culprits here are the LLMs, but I do wonder whether GitHub's social features, despite their flaws, have given developers fewer reasons to ask questions on SO. Speaking from experience, every time I hit a wall with my projects, I would instinctively visit the project's repo first and check the issues / discussions page. More often than not, I was able to find someone with an adjacent problem and get close enough to a solution just by looking at the resolution. If that all failed, I would fall back to asking questions on the discussion forum first before even considering a visit to SO.
View on HN · Topics
As someone that spent a fair bit of time answering questions on StackOverflow, what stood out years ago was how much the same thing would be asked every day. Countless duplicates. That has all but ceased with LLMs taking all that volume. Honestly, I don't think that's a huge loss for the knowledge base. The other thing I've noticed lately is a strong push to get non-programming questions off StackOverflow, and on to other sites like SuperUser, ServerFault, DevOps, etc. Unfortunately, what's left is so small I don't think there's enough to sustain a community. Without questions to answer, contributors providing the answers disappear, leaving the few questions there often unanswered.
View on HN · Topics
This is horrifying. Given that when I need a question answered I usually refer to S.O., but more recently have taken suggestions from LLM models that were obviously trained on S.O. data... and given that all other web results for "how do you change the scroll behavior on..." or "SCSS for media query on..." lead to a hundred fake websites with pages generated by LLMs from old answers. Destroying S.O. as a question/answer source leaves only the LLMs to answer questions. That's why it's horrific.
View on HN · Topics
I do use Claude a lot, but I still regularly ask questions on https://bioinformatics.stackexchange.com/ . It's often just too niche, LLMs hallucinate stuff like an entire non-existent benchmarking feature in Snakemake, or can't explain how I should get transcriptome aligners to give me correct quantifications for a transcript. I guess it's too niche. And as a lonely Bioinformatician it can be nice to get confirmation from other bioinformaticians. Looking back at my Stack Exchange/Stack Overflow (never really got the difference) history, my earlier, more general programming questions from when I just started are all no-brainers for any LLM.
View on HN · Topics
Interestingly, stagnation started around 2014 (in the number of questions asked no longer rising,) and a visible decline started in 2020 [1]: two years before ChatGPT launched! It’s an interesting question if the decline would have happened regardless of LLMs, just slower? [1] An annotated visualization of the same data I did: https://blog.pragmaticengineer.com/are-llms-making-stackover...
View on HN · Topics
The decline is not surprising. I am sure AI is replacing Stack Overflow for a lot of people. And my experience with asking questions was pretty bad: I asked a few very specific questions about some deep detail in Windows, and every time I got only smug comments about my stupid question, or the question got rejected outright. That, while a ton of beginner questions were approved. Definitely not a very inviting club. I found I got better responses on Reddit.
View on HN · Topics
This is a huge loss. In the past people asked questions of real people who gave answers rooted in real use. And all this was documented and available for future learning. There was also a beautiful human element to think that some other human cared about the problem. Now people ask questions of LLMs. They churn out answers from the void, sometimes correct but not rooted in real life use and thought. The answers are then lost to the world. The learning is not shared. LLMs have been feeding on all this human interaction and simultaneously destroying it.
View on HN · Topics
SO has lost against LLMs because it has insistently positioned itself as a knowledge base rather than a community. The harsh moderation, strict content policing, forbidden socialization, lack of follow mechanics, etc. have all collectively contributed to it. They basically made a bet because they wanted to be the full antithesis of ad-ridden, garbage-looking forums: pure information, zero tolerance for humanity, sterile-looking design. They achieved that goal, but in the end, they dug their own grave too.

LLMs didn't admonish us to write our questions better, or scold us simply because we asked for an opinion. They didn't flag or remove our posts with no advance notice. They didn't forbid saying hello or thanks; they welcomed it. They didn't complain when we asked something that had been asked many times. They didn't prevent us from deleting our own content.

Oh yeah, no wonder nobody bothers with SO anymore. It's a good lesson for the future.
View on HN · Topics
IMO people underestimate the value of heavy moderation. But moderation heavy or light, good or bad, why wait hours for an answer when an LLM gives it in seconds?
View on HN · Topics
Do I read that correctly — it is close to zero today?! I used to think SO culture was killing it but it really may have been AI after all.
View on HN · Topics
Still a couple thousand away from zero. But yeah, the double whammy of toxic culture and LLMs did the trick. The decline already set in well before good-enough LLMs were available. I wonder how Reddit compares, though it's of course a pretty different use case there.
View on HN · Topics
It can be both. Push and pull factors work better together than either does individually.
View on HN · Topics
LLMs caused this decline. Stop denying that. You don't have to defend LLMs from any perceived blame. This is not a bad thing.

The steep decline in the early months of 2023 actually started with the release of ChatGPT on 2022-11-30, and its gradually widening availability to (and awareness of) the public from that date. The plot clearly shows that cliff. The gentle decline since 2016 does not invalidate this. Were it not for LLMs, the site's post rate would now probably be at around 5000 posts/day, not 300.

LLMs are to "blame" for eating all the trivial questions that would have gotten some nearly copy-pasted answer from some eager reputation-point collector, or been closed as a duplicate, which nets nobody any rep.

Stack Overflow is not a site for socializing. Do not mistake it for reddit. The "karma" does not mean "I hate you"; it means "you haven't put the absolute minimum conceivable amount of effort into your question". This includes at least googling the question before you ask. If you haven't done that, you can't expect to impose on the free time of others.

SO has a learning curve. The site expects more from you than just to show up and start yapping. That is its nature. It is "different" because it must be. All other places don't have this expectation of quality. That is its value proposition.
View on HN · Topics
Here’s how SO could still be useful in the LLM era: a user asks a question, and an LLM provides an immediate answer/reply on the forum. But real people can still jump into the conversation to add additional insights and correct mistakes. If you’re a user asking a duplicate question, it’ll just direct you to the good conversation that already happened. A symbiosis of immediate, usually-good-enough LLM answers PLUS human-generated content that dives deeper and provides reassurances of correctness.
View on HN · Topics
Those popups were a big contributor for me to stop using SO. I stopped updating my uBlock origin rules when LLMs became good enough. I am now using the free Kimi K2 model via Groq over CLI, which is much faster.
View on HN · Topics
SO peaked long, long before LLMs came along. My personal experience is that GitHub issues took over. You can clearly see the introduction of ChatGPT in late 2022. That was the final nail in the coffin. I am still really glad that Stack Overflow saved us from experts-exchange.com - or “the hyphen site” as it is sometimes referred to.
View on HN · Topics
AI is a vampire. Coming to your corner of the world, to suck your economic blood, eventually. It’s hard to ignore the accelerated decline that started in late 2022/early 2023.
View on HN · Topics
One thing you won’t get from an LLM is genuine research. I once answered a 550-point question by researching the source code of vim to see how the poster’s question could be resolved. [0]

[0] https://stackoverflow.com/questions/619423/backup-restore-th...
View on HN · Topics
LLMs absolutely body-slammed SO, but anyone who was an active contributor knows the company was screwing over existing moderators for years before this. The writing was on the wall.
View on HN · Topics
> If by "body-slammed" you mean "trained on SO user data while violating the terms of the CC BY-SA license", then sure. You know that's not what they meant, but why bring up the license here? If they were over the top compliant, attributing every SO answer under every chat, and licensing the LLM output as CC BY-SA, I think we'd still have seen the same shift. > In the best case scenario, LLMs might give you the same content you were able to find on SO. In the common scenario, they'll hallucinate an answer and waste your time. Best case it gives you the same level of content, but more customized, and faster. SO being wrong and wasting your time is also common.
View on HN · Topics
Are we in the age of all CS problems being solved and everything being invented? Even if so, do LLMs incorporate all that knowledge? A lot of my knowledge in CS came from books and lectures; LLMs can shine in that area by scraping all those sources. However, SO was less about academic knowledge and more about experience sharing. You won't find recipes for complex problems in books, e.g. how to catch what part of my program corrupts memory for variable 'a' in gdb. LLMs know the correct answer to this question because someone shared their experience, including on SO. Are we OK with stopping this process of sharing from one human to another?
View on HN · Topics
It is indeed a shame. If you are doing anything remotely new and novel, which is essential if you want to make a difference in an increasingly competitive field, LLMs confidently leave you with non-working solutions, or sometimes worse, they set you on the wrong path. I had similar worries in the past about indexable forums being replaced by Discord servers. The current situation is even worse.
View on HN · Topics
Good riddance. There were some OK answers there, but also many bad or obsolete answers (leading to scrolling down to find the low-ranked answer that sort of worked), and the moderator toxicity was just another showcase of human failure on top of that. It selected for assholes because they thought they had a captive, eternally renewing audience that did not have any alternative. And that resulted in the chilling effect of people not asking questions because they didn't want to run the moderation gauntlet, so the site's usefulness went even further down. It's still much less useful for recent tech than it is for ancient questions about parsing HTML with regex and that sort of thing. LLMs are simply better in every way, provided they are trained on decent documents. And if I want them to insult me too, just for that SO nostalgia, I can just ask them to do that and they will oblige. Looking forward to forgetting that site ever existed; my brain's health will improve.
View on HN · Topics
People are still asking questions; they're just no longer doing it on the public internet. Google, Anthropic, OpenAI, etc. get to see and use them.
View on HN · Topics
Man after reading some of the comments and looking at the graph I have learned a lesson. I went to SO all the time to find answers to questions, but I never participated. I mean they made it hard, but given the amount of benefit I gained I should've overcome that friction. If I and people like me had, maybe we could have diluted the moderation drama that others talk about (and that I, as a greedy user, never saw). Now it's a crap-shoot with an LLM instead of being able to peruse great answers from different perspectives to common problems and building out my own solution.
View on HN · Topics
I have a SO profile and I both contributed and used the site for some time. I use the site from time to time to research something. I know a lot more about software than 15 years ago. I used to ask questions and answer questions a lot, but after I matured I have no time and whatever I earn is not worth my time. So perhaps the content would grow in size and quality if they rewarded users with something besides XP. I don't use AI for research so far. I use AI to implement components that fit my architecture and often tests of components.
View on HN · Topics
Obviously LLMs ate Stack Overflow, but perhaps developers could have kept it alive for much longer if they wanted to. LLMs provide answers, but only humans provide human contact. And that last part is where SO failed, by allowing a few people to power trip over the rest of us. Kind of like Reddit does at times, but harder. I'm not sad.
View on HN · Topics
Now imagine what happens when a new programming language comes along. When we have a question, we will no longer be able to Google it and find answers to it on Stack Overflow. We will ask the LLMs. They will work it out. From that moment, the LLM we used has the knowledge for solving this particular problem. Over time, this produces huge moat for the largest providers. I believe it is one of the subtler reasons why the AI race is so fierce.
View on HN · Topics
Ideally, you'd train them on the core documentation of the language or tool itself. Hopefully, LLMs lead to more thorough documentation at the start of a new language, framework, or tool. Perhaps to the point of the documentation being specifically tailored to read well for the LLM that will parse and internalize it. Most of what StackOverflow was was just a regurgitation of knowledge that people could acquire from documentation or research papers. It obviously became easier to ask on SO than dig through documentation. LLMs (in theory) should be able to do that digging for you at lightning speed. What ended up happening was people would turn to the internet and Stack Overflow to get a quick answer and string those answers together to develop a solution, never reading or internalizing documentation. I was definitely guilty of this many times. I think in the long run it's probably good that Stack Overflow dies.
View on HN · Topics
I still would like to get other humans' experiences and perspectives when it comes to solving some problems; I hope SO doesn't go away entirely. With LLMs, at least in my experience, they'll answer your question as best they can, just as you asked it. But they won't go the extra step to make assumptions based on what they think you're trying to do and make recommendations. Humans do that, and sometimes it isn't constructive at all, like "just use a different OS", but other times it could be "I don't know how to solve that, but I've had better luck with this other library/tool".
View on HN · Topics
Interesting timing. I just analyzed TabNews (Brazilian dev community) and ~50% of 2025 posts mention AI/LLMs. The shift is real. The 2014 peak is telling. That's before LLMs, before the worst toxicity complaints. Feels like natural saturation, most common questions were already answered. My bet, LLMs accelerated the decline but didn't cause it. They just made finding those existing answers frictionless.
View on HN · Topics
It's unfortunate that SO hasn't found a way to leverage LLMs. Lots of questions benefit from some initial search, which is hard enough that moderators likely felt frustrated with actual duplicates, or close-enough duplicates; LLMs seem able to assist with that. However, I hope we don't lose the rare gem answers that SO also had, those expert responses that share not just a programming solution but deeper insight.
View on HN · Topics
I think SO is leveraging LLMs implicitly. Like, I'll always ask an LLM first; that's the easiest option. And I'll only come to SO if the LLM fails to answer.
View on HN · Topics
I am surprised at the amount of hate for Stack Overflow here. As a developer I can't think of a single website that has helped me as much over the last ten years. It has had a huge benefit for the development community, and I for one will mourn its loss. I do wonder where answers will come from in the future. As others have noted in this thread, documentation is often missing, or incorrect. SO collected the experiences of actual users solving real problems. Will AI share experiences in a similar way? In principle it could, and in practice I think it will need to. The shared knowledge of SO made all developers more productive. In an AI coded future there will need to be a way for new knowledge to be shared.
View on HN · Topics
There's no doubt that generally LLMs are better. In addition SO had its issues. That being said I can't help but worry about losing humans asking questions and humans answering questions. The sentimentality aside, if humans aren't posing questions and if humans aren't recommending answers, what are the models going to use?
View on HN · Topics
Wonder if this is a good proxy for '# of Google Searches'. Or perhaps a forward indicator (sign of things to come), since LLMs are adopted by the tech-savvy first, then the general public a little later, so Stack Overflow was among the first casualties.
View on HN · Topics
For me, my usage of SO started declining as LLMs rose. Occasionally I still end up there, usually because a chat response referenced a SO thread. I was willing to put up with the toxicity as long as the site still had technical value for me. But still, machines leave me wanting. Where do people go to ask real humans novel technical questions these days?
View on HN · Topics
> Where do people go to ask real humans novel technical questions these days? I don't think such a generic place exists. I just do my own research or abandon the topic. I think that in big companies you could probably use some internal chats or just ask some smart guy directly? I don't have that kind of connections, and all online communities are full of people whose skill is below mine, so it makes little sense to ask something. I still do sometimes, but rarely receive a competent answer. If you have some focused topic, like a question about a small program, of course you can just use GitHub issues or email the author directly. But if you have some open question, SO is probably the only generic platform out there. To put it differently: find some experts and ask which online place they visit to help strangers. Most likely they just don't do it. So for me, personally, LLMs are the saviour. With enough back and forth I can research any topic that doesn't require very deep expertise. Sure, access to an actual expert willing to guide me would be better, but I just don't have that luxury.
View on HN · Topics
I think the bigger point we should realize is LLMs offer the EXACT same thing in a better way. Many people are still sharing answers to problems but they do it through an AI which then fine tunes on it and now that problem solution is shared with EVERYONE. Far better method of automated sharing of content
View on HN · Topics
Stack Overflow was useful with a fairly sanitized search like “mysql error 1095”. Agentic LLMs do their best work when able to access your entire repository or network environment for context, which is impossible to sanitize. For a season, private environments will continue to be able to use SO. But as LLMs capture all the good questions and keep them private, public SO will become less and less relevant. It’s sad to see a resource of this class go.
View on HN · Topics
I just typed the literal phrase "mysql error 1095" into ChatGPT with no context, and it gave an answer that was no worse than SO for the same search. No need to give it anything about my repository, network environment, or even a complete sentence.
View on HN · Topics
I suspect a lot of the traffic shift is from Google replacing the top search result, which used to be Stack Overflow for programming questions, with a Gemini answer.
View on HN · Topics
Has AI summarization led to people either getting their answer from a search engine directly, and failing that, just giving up?
View on HN · Topics
The result is not surprising! Many people are now turning to LLMs with their questions instead. This explains the decline in the number of questions asked.
View on HN · Topics
Maybe the average question will be more "high level" now that all simple questions are answered by LLMs?
View on HN · Topics
StackOverflow cemented my fears of asking questions. Even though there were no results for what I needed, I was too afraid to ask. Good riddance, now I’m never afraid to ask dumb questions to LLM and I’ve learned a lot more with no stress of judgement.
View on HN · Topics
Probably similar for Google. My first line of search is always ChatGPT.
View on HN · Topics
It makes sense to see the number of questions decline over time as people google questions and get results. It would be interesting to look at the number of comments and views of questions over time to see if that has declined as LLMs have driven declining engagement and discussion.
View on HN · Topics
Why did SO traffic halve from its maximum to the ChatGPT release date? Also, for a long time after its initial release, ChatGPT was pretty much useless for coding questions. It only became more or less useful ~2 years ago. So there's about a 4x decline from the peak to explain with reasons that do not involve LLMs. What could these be?
View on HN · Topics
LLMs are dogshit in many ways but when it comes to programming they are faster than people, respond instantaneously to further information, and can iterate until they understand the problem fully. Bonus is that you don’t get some dipshit being snarky.
View on HN · Topics
llm killed stackoverflow
View on HN · Topics
This entire thread is fantastic. I felt nostalgic, angry and then concerned all at once. I love LLMs. But I miss SO. I miss being able to have that community. How do we bring it back? If anyone from the Stack Overflow team is reading this (I assume you are): what’s the plan?

My take: stop optimizing for raw question volume and start optimizing for producing and maintaining “known good” public knowledge. The thing SO still has that Discord and LLMs don’t is durable, linkable, reviewable answers with accountable humans behind them. But the workflow needs to match how devs work now.

A concrete idea: make “asking” a guided flow that’s more like opening a good GitHub issue. Let me paste my error output, environment, minimal repro, what I tried, and what I think is happening. Then use tooling (including an LLM if you want) to pre-check duplicates, suggest missing details, and auto-format. Crucially: don’t punish me for being imperfect. Route borderline questions into a sandbox or draft mode where they can be improved instead of just slammed shut.

Second idea: invest hard in keeping answers current. A ton of SO is correct but stale. Add obvious “this is old” signaling and make it rewarding to post updates, not just brand-new answers.

Last thing that I don’t see an easy answer to: LLMs are feasting on old SO content today. But LLMs still need fresh, high-quality, real-world edge cases tomorrow. They need the complexity and problem solving that humans provide. A lot of the answers I get are recycled. No net-new thinking. If fewer people ask publicly, where does that new ground truth come from? What’s the mechanism that keeps the commons replenished?

So… TLDR… my question to this group of incredibly intelligent people: how does SO save itself?
View on HN · Topics
This is incredible. Anyone who claims LLMs aren't useful will need to explain how come almost every programmer can solve 95% of his problems with an LLM without needing anything else. This is real usefulness right here. EDIT: I'm not saying I'm loving what happened and what is becoming of our roles and careers, I'm just saying things have changed forever; there's still a (shrinking) minority of people who seem to not be convinced.
View on HN · Topics
Was already dying a decade ago, but AI pretty much guarantees we'll never see a public forum that useful ever again. AI may be fine for people asking the basic stuff or who don't care about maintenance, but for a time SO was the place to find extraordinary and creative solutions that only a human can come up with. When you were in a jam and found a gem on there it not only solved your problem, but brought clarity and deep knowledge to your entire situation in ways that I've never seen an LLM do. It inspired you to refactor the code that got you into that mess to begin with and you grew as a developer. This timeline shows the death of a lot more than just the site itself.