llm/7c7e49f1-870c-4915-9398-3b2e1f116c0c/topic-1-f56bdfb0-2c86-4543-8d3e-5a32ec7fb6e8-input.json
You are a comment summarizer. Given a topic and a list of comments tagged with that topic, write a single paragraph summarizing the key points and perspectives expressed in the comments. TOPIC: LLMs replacing Stack Overflow COMMENTS: 1. Some comments: - This is a really remarkable graph. I just didn't realize how thoroughly it was over for SO. It stuns me as much as when Encyclopædia Britannica stopped selling print versions a mere 9 years after the publication of Wikipedia, but at an even faster timescale. - I disagree with most comments that the brusque moderation is the cause of SO's problems, though it certainly didn't help. SO has had poor moderation from the beginning. The fundamental value proposition of SO is getting an answer to a question; if you can the same answer faster, you don't need SO. I suspect that the gradual decline, beginning around 2016, is due to growth in a number of other sources of answers. Reddit is kind of a dark horse here, as I began seeing answers on Google to more modern technical questions link to a Reddit thread frequently along with SO from 2016 onwards. I also suspect Discord played a part, though this is harder to gauge; I certainly got a number of answers to questions for, e.g., Bun, by asking around in the Bun Discord, etc. The final nail in the coffin is of course LLMs, which can offer a SO-level answer to a decent percentage of questions instantly. (The fact that the LLM doesn't insult you is just the cherry on top.) - I know I'm beating a dead horse here, but what happens now? Despite stratification I mentioned above, SO was by far the leading source of high quality answers to technical questions. What do LLMs train off of now? I wonder if, 10 years from now, LLMs will still be answering questions that were answered in the halcyon 2014-2020 days of SO better than anything that came after? Or will we find new, better ways to find answers to technical questions? 2. It has been ... Borderline creepy... Watching how folks - including some professional writers - have adapted their workflows to the capabilities of LLMs, treating them as a copywriter whose input is a spec and for whose output they are the editor. Because it seems natural to me; that's how I've always written... Except, I'm also the bot. Just turn off part of my brain and an endless stream of verbiage emerges, vaguely centered around a theme... Then the real work begins: editing for relevance and imposing a coherent structure. So, I don't really fault anyone who adopts these new tools for the task. But I have some strong feelings about the lazy editing. 3. Blame the managers who weren't users of the site, decided it wasn't important to the business, and ignored the problems. This always cracks me up. I've seen it so many times, and so many books cover this... Classic statement is "never take your eye off the ball". Sure, you need to plan ahead. You need to move down a path. But take your eye off of today, and you won't get to tomorrow. Maybe they'll SCO it, and spend the next 10 years suing everyone and their LLM dog. You know, I wonder how the board and execs made out suing Linux related... things. End users were threatened too, compelled to pay... SO could be spun off into a neat tiger, nipping at everyone's toes. 4. Friend in my group was in the public beta back in '08. We all ended up signing up by the end of '09. I used it off-and-on over the years (have some questions and replies with hundreds of upvotes). Though SO had a rap for having what might seem like harsh replies or moderation, it was often imho just blunt/curt, to the point, and often objectively defensible. I also agree with your timeframe that, in the later 2010s, the site became infected with drama, and moderation suddenly started reaching its tendrils into non-technical areas, when it should not have. And on an ostensibly technical site, no less! I found myself contributing less and less (same with Wikipedia), because I merely wanted to continue honing my craft through learning and contributing technical data with others who shared this same passion... I did not want to have politics shoved in my face, or have every post of mine have to be filtered through an increasingly extreme ideology which had nothing to do with the technical nature of the site. When I had my SO suspended with no warning or recourse for writing "master" in a reply, I knew it was time to leave for good. Most of the admins on the site transformed from technical (yet sometimes brash!) geeks, into political flag-waving and ideology-pushing avatars (including pushing their sexual agendas front and center), and not of the FSF/FLOSS kind, either. These types of dramas have infected nearly everything online, especially since 2020. Even Linus has lost his mind with pushing politics into what should be purely technical areas https://news.ycombinator.com/item?id=41936049 LLMs were a final blow for many reasons, though I think that a huge part of it is that LLMs won't chide you and suspend/ban you for wanting to stick to strictly technical matters. I don't have to pledge allegiance to a particular ideology and pass a purity test before asking technical questions to an LLM. 5. The moderation definitely got kind of nasty in the last 5 years or so. To the point where you would feel unwelcome for asking a question you had already researched, and felt was perfectly sound to ask. However, that didn't stop millions of people from asking questions every day , it just felt kinda shitty to those of us who spent more time answering, when we actually needed to ask one on a topic we were lacking in. (Speaking as someone who never moderated). My feeling was always that the super mods were people who had too much time on their hands... and the site would've been better without them (speaking in the past tense, now). But I don't think that's what killed it. LLMs scraping all its content and recycling it into bite-sized Gemini or GPT answers - that's what killed it. 6. Thinking they didn't keep up with the times or that they should've made changes is perfectly fine. It's the vitriol in some of the comments here I really can't stand. As for me, I also don't answer much anymore. But not sure if it's due to the community or frankly because most low hanging fruits are gone. Still sometimes visit, though. Even for thing's an LLM can answer, because finding it on SO takes me 2 seconds but waiting for the LLM to write a novella about the wrong thing often takes longer. 7. Right? It's a perfect example of the problem. In college, I worked tech support. My approach was to treat users as people. To see all questions as legitimate, and any knowledge differential on my part as a) the whole point of tech support, and b) an opportunity to help. But there were some people who used any differential in knowledge or power as an opportunity to feel superior. And often, to act that way. To think of users as a problem and an interruption, even though they were the only reason we were getting paid. I've been refusing to contribute to SO for so long that I can't even remember the details. But I still recall the feeling I got from their dismissive jackassery. Having their content ripped off by LLMs is the final blow, but they have richly earned their fate. 8. > The fundamental value proposition of SO is getting an answer to a question I read an interview once with one of the founders of SO. They said the main value stackoverflow provided wasn't to the person who asked the question. It was for the person who googled it later and found the answer. This is why all the moderation pushes toward deleting duplicates of questions, and having a single accepted answer. They were primarily trying to make google searches more effective for the broader internet. Not provide a service for the question-asker or answerer. Sad now though, since LLMs have eaten this pie. 9. > This is why all the moderation pushes toward deleting duplicates of questions, and having a single accepted answer. Having duplicates of the question is precisely why people use LLMs instead of StackOverflow. The majority of all users lack the vocabulary to properly articulate their problems using the jargon of mathematicians and programmers. Prior to LLMs, my use case for StackOverflow was something like this: 30 minutes trying (and failing) to use the right search terms to articulate the problem (remember, there was no contextual understanding, so if you used a word with two meanings and one of those meanings was more popular, you’d have to omit it using the exclusion operator). 30 minutes reading through the threads I found (half of which will have been closed or answered by users who ignored some condition presented by the OP). 5 minutes on implementation. 2 minutes pounding my head on my desk because it shouldn’t have been that hard. With an LLM, if the problem has been documented at any point in the last 20 years, I can probably solve it using my initial prompt even as a layman. When you’d actually find an answer on StackOverflow, it was often only because you finally found a different way of phrasing your search so that a relevant result came up. Half the time the OP would describe the exact problem you were having only for the thread to be closed by moderators as a duplicate of another question that lacked one of your conditions. 10. LLMs also search Google for answers. Hence the knowledge may be not lost even for those who only supervises machines that write code. 11. > Sad now though, since LLMs have eaten this pie. By regenerating an answer on command and never caring about the redundancy, yeah. The DRY advocate within me weeps. 12. Sad? No. A good LLM is vastly better than SO ever was. An LLM won't close your question for being off-topic in the opinion of some people but not others. It won't flame you for failing to phrase your question optimally, or argue about exactly which site it should have been posted on. It won't "close as duplicate" because a vaguely-similar question was asked 10 years ago in a completely-different context (and never really got a great answer back then). Moreover, the LLM has access to all instances of similar problems, while a human can only read one SO page at a time. The question of what will replace SO in future models, though, is a valid one. People don't realize what a massive advantage Google has over everyone else in that regard. So many site owners go out of their way to try to block OpenAI's crawlers, while simultaneously trying to attract Google's. 13. > Privacy concerns notwithstanding, one could argue having LLMs with us every step of the way - coding agents, debugging, devops tools etc. That might work until an LLM encounters a question it's programmed to regard as suspicious for whatever reason. I recently wanted to exercise an SMTP server I've been configuring, and wanted to do it by an expect script, which I don't do regularly. Instead of digging through the docs, I asked Google's Gemini (whatever's the current free version) to write a bare bones script for an SMTP conversation. It flatly refused. The explanation was along the lines "it could be used for spamming, so I can't do that, Dave." I understand the motivation, and can even sympathize a bit, but what are the options for someone who has a legitimate need for an answer? I know how to get one by other means; what's the end game when it's LLMs all the way down? I certainly don't wish to live in such a world. 14. 1.5 years ago Gemini (the same brand!) refused to provide C++ help to minors because C++ is dangerous: https://news.ycombinator.com/item?id=39632959 15. I don't know how others use LLMs, but once I find the answer to something I'm stuck on I do not tell the LLM that it's fixed. This was a problem in forums as well but I think even fewer people are going to give that feedback to a chatbot 16. Asking questions on SO was an exercise in frustration, not "interacting with peers". I've never once had a productive interaction there, everything I've ever asked was either closed for dumb reasons or not answered at all. The library of past answers was more useful, but fell off hard for more recent tech, I assume because people all were having the same frustrations as I was and just stopped going there to ask anything. I have plenty of real peers I interact with, I do not need that noise when I just need a quick answer to a technical question. LLMs are fantastic for this use case. 17. It's funny, because I had a similar question but wanted to be able to materialize a view in Microsoft SQL Server, and ChatGPT went around in circles suggesting invalid solutions. There were about 4 possibilities that I had tried before going to ChatGPT, it went through all 4, then when the fourth one failed it gave me the first one again. 18. You can't use the free chat client for questions like that in my experience. Almost guaranteed to waste your time. Try the big-3 thinking models (ChatGPT 5.2 Pro, Gemini 3 Pro, and Claude Opus 4.5). 19. > this nails it I assume you’re taking about the ending where gippity tells you how awesome you are and then spits out a wrong answer? 20. I actively hated interacting with the power users on SO, and I feel nothing about an LLM, so it's a definite improvement in QoL for me. 21. The "human touch" on StackOverflow?! I'll take the "robot touch," thanks very much. 22. > It's just that those goals (i.e. "we want people to be able to search for information and find high-quality answers to well-scoped, clear questions that a reasonably broad audience can be interested in, and avoid duplicating effort") don't align with those of the average person asking a question (i.e. "I want my code to work"). This explains the graph in question: Stackoverflow's goals were misaligned to humans. Pretty ironic that AI bots goals are more aligned :-/ 23. The part where you don't talk to anyone else, just a robot intermediary which is simulating the way humans talk, is part of UX. Sounds like pretty horrifying UX. 24. As long as software is properly documented, and documentation is published in LLM-friendly formats, LLMs may be able to answer most of the beyond basic questions even when docs don't explicitly cover a particular scenario. Take an API for searching products, one for getting product details, and then an API for deleting a product. The documentation does not need to cover the detailed scenario of "How to delete a product" where the first step is to search, the second step is to get the details (get the ID), and the third step is to delete. The LLM is capable of answering the question "how to delete the product 'product name'". To some degree, many of the questions on SO were beyond basic, but still possible for a human to answer if only they read documentation. LLMs just happen to be capable of reading A LOT of documentation a LOT faster, and then coming up with an answer A LOT faster. 25. If the LLM is also writing the documentation, because the developers surely don’t want to, I’m not sure how well this will work out. I have some co-workers who have tried to use Copilot for their documentation (because they never write any and I’m constantly asking them questions as a result), and the results were so bad they actually spent the time to write proper documentation. It failed successfully, I suppose. 26. Indeed, how documentation is written is key. But funny enough, I have been a strong advocate that documentation should always be written in Reference Docs style, and optionally with additional Scenario Docs. The former is to be consumed by engineers (and now LLMs), while the later is to be consumed by humans. Scenario Docs, or use case docs, are what millions of blog articles were made of in the early days, then we turned to Stack Overflow questions/answers, then companies started writing documentation in this format too. Lots of Quick Starts for X, Y, and Z scenarios using technology K. Some companies gave away completely on writing reference documentation, which would allow engineers to understand the fundamentals of technology K and then be able to apply to X, Y, and Z. But now with LLMs, we can certainly go back to writing Reference docs only, and let LLMs do the extra work on Scenario based docs. Can they hallucinate still? Sure. But they will likely get most beyond-basic-maybe-not-too-advanced scenarios right in the first shot. As for using LLMs to write docs: engineers should be reviewing that as much as they should be reviewing the code generated by AI. 27. > Like you go to some question and the accepted answer with the most votes is for a ten-year-old version of the technology. This is still a problem with LLMs as a result. The bigger problem is that now the LLM doesn’t show you it was a 10 year old solution, you have to try it, watch it fail, then find out it’s old, and ask for a more up to date example, then watch it flounder around. I’ve experienced this more times than I can count. 28. Then you're doing it wrong? I'd need to see a few examples, but this is easily solved by giving the llm more context, any really. Give it the version number, give it a url to a doc. Better yet git clone the repo and tell it to reference the source. Apologies for using you as an example, but this is a common theme on people who slam LLMs. They ask it a specific/complex question with little context and then complain when the answer is wrong. 29. This is exactly the issue that most people run into and it's literally the GIGO principle that we should all be familiar with by now. If your design spec amounts to "fix it" then don't be surprised at the results. One of the major improvements I've noticed in Claude Code using Opus 4.5 is that it will often read the source of the library we're using so that it fully understands the API as well as the implementation. You have to treat LLMs like any other developer that you'd delegate work to and provide them with a well thought out specification of the feature they're building or enough details about how to reproduce a bug for them to diagnose and fix it. If you want their code to conform to the style you prefer then you have to give them a style guide and examples or provide a linter and code formatter and let them know how to run it. They're getting better at making up for these human deficits as more and more of these common failure cases are recorded but you can get much better output now by simply putting some thought into how you use them. 30. Sonnet does it as well, I use it to save credits, I honestly don't see much difference to Opus if you keep your problems/codebase/general context window small enough. In JavaScript land, known for its volatile ecosystem, it often uses constructors that don't exist anymore because of API changes. But a small lookup of the source is usually enough for it to correct the code immediately. 31. I’ve specified many of these things and still had it fall on its face. And at some point, I’m providing so much detail that I may as well do it myself, which is ultimately what ends up happening. Also, it seems assuming the latest version would make much more sense than assuming a random version from 10 years ago. If I was handing work off to another person, I would expect to only need to specify the version if it was down level, or when using the latest stable release. 32. Usually that's resolved by saying "I want you to use v2" or whatever it is, which you can't really do with a Stack Overflow answer as easily. 33. Have you tried using context7 or a similar MCP to have the agent automatically fetch up to date documentation? 34. There was, obviously, only one main reason: LLMs. Anything else makes no sense. Even if the moderation was "horrible" (which sounds to me like a horrible exaggeration), there was nothing which came close to being as good as SO. There was no replacement. People will use the best available platform, even if you insist in describing it as "horrible". It's was not horrible compared to the alternatives, web forums like Reddit and HN, which are poorly optimized for answering questions. 35. The decline was much slower, not the following exponential decline that can only have been caused by LLMs. 36. You overvalue the impact of LLMs in regards to SO. They did have an impact, but it's the moderation that ultimately bent and broke the camel's back. An LLM may give seemingly good answers, but it always lacks in nuance and, most importantly, in being vetted by another person. It's the quality assurance that matters, and anyone with even a bit of technical skill quickly brushes up against that illusion of knowledge an LLM gives and will either try to figure it out on their own or seek out other sources to solve it if it matters. Reddit, for all its many problems, was often still easier to ask on and easier to get answers on without needing an intellectual charade and without some genius not reading the post, closing it and linking to a similar sounding title despite the content being very different. Which is the crux of the issue; you can't ask questions on SO. Or rather, you can't ask questions. No, no, that's not enough. You'll have to engage with the community, answer many other questions first, ensure that your account has enough "clout" to overturn stupid closures of questions, and when you have wasted enough time doing that, then you can finally ask your own question. Or you can just go somewhere else that isn't an intellectual charade and circle jerking and figure it out without wasting tons of time chasing clout and hoping a moderator won't just close the question as duplicate. SO was never the best platform, exactly because of its horrendous moderation. It was good, yes. It had the quality assurance, to a degree, yes. But when just asking a question becomes such a monumental task, people will go elsewhere, to better platforms. Which includes other forums, and, LLMs. So no, what you're attributing to LLMs is merely a symptom of the deeper issue. 37. > I disagree with most comments that the brusque moderation is the cause of SO's problems, though it certainly didn't help. By the time my generation was ready to start using SO, the gatekeeping was so severe that we never began asking questions. Look at the graph. The number of questions was in decline before 2020. It was already doomed because it lost the plot and killed any valuable culture. LLMs were a welcome replacement for something that was not fun to use. LLMs are an unwelcome replacement for many other things that are a joy to engage with. 38. >> Question can be marked as duplicate without an answer. > No, they literally cannot. You missed that people repeatedly closed question as duplicate when it was not a duplicate. So it had answer, just to a different mildly related question. LLM are having problems but they gaslight me in say 3% of cases, not 60% of cases like SO mods. 39. This doesn't mean that it's over for SO. It just means we'll probably trend towards more quality over quantity. Measuring SO's success by measuring number of questions asked is like measuring code quality by lines of code. Eventually SO would trend down simply by advancements of search technology helping users find existing answers rather than asking new ones. It just so happened that AI advanced made it even better (in terms of not having to need to ask redundant questions). 40. > - I know I'm beating a dead horse here, but what happens now? Despite stratification I mentioned above, SO was by far the leading source of high quality answers to technical questions. What do LLMs train off of now? I wonder if, 10 years from now, LLMs will still be answering questions that were answered in the halcyon 2014-2020 days of SO better than anything that came after? Or will we find new, better ways to find answers to technical questions? To me this shows just how limited LLMs are. Hopefully more people realize that LLMs aren't as useful as they seem, and in 10 years they're relegated to sending spam and generating marketting websites. 41. Too bad stack overflow didn't high-quality-LLM itself early. I assume it had the computer-related brainpower. with respect to the "moderation is the cause" thing... Although I also don't buy moderation as the cause, I wonder if any sort of friction from the "primary source of data" can cause acceleration. for example, when I'm doing an interenet search for the definition of a word like buggywhip, some search results from the "primary source" show: > buggy whip, n. meanings, etymology and more | Oxford English Dictionary > Factsheet What does the noun buggy whip mean? There is one meaning in OED's entry for the noun buggy whip. See 'Meaning & use' for definition, usage, and quotation evidence. which are non-answer to keep their traffic. but the AI answer is... the answer. If SO early on had had some clear AI answer + references, I think that would have kept people on their site. 42. The newer questions that LLMs can't answer will be answered in forums - either SO, reddit, or elsewhere. There will be a much higher percentage of relevant content with far fewer new pages regurgitating questions about solved problems. So the LLMs will be able to keep up. 43. > What do LLMs train off of now? I wonder if, 10 years from now, LLMs will still be answering questions that were answered in the halcyon 2014-2020 days of SO better than anything that came after? Or will we find new, better ways to find answers to technical questions? That's a great question. I have no idea how things will play out now - do models become generalized enough to handle "out of distrubition" problems or not ? If they don't then I suppose a few years from now we'll get an uptick in Stackoverflow questions; the website will still exist it's not going anywhere. 44. I think the interesting thing here for those of us who use open source frameworks is that we can ask the LLM to look at the source to find the answer (eg. Pytorch or Phoenix in my case). For closed source libraries I do not know. 45. > SO was by far the leading source of high quality answers to technical questions We will arrive on most answers by talking to an LLM. Many of us have an idea about we want. We relied on SO for some details/quirks/gotchas. Example of a common SO question: how to do x in a library or language or platform? Maybe post on the Github for that lib. Or forums.. there are quirky systems like Salesforce or Workday which have robust forums. Where the forums are still much more effective than LLMs. 46. > will we find new, better ways to find answers to technical questions? I honestly don't think they need to. As we've seen so far, for most jobs in this world, answers that sound correct are good enough. Is chasing more accuracy a good use of resources if your audience can't tell the difference anyway? 47. > I disagree with most comments that the brusque moderation is the cause of SO's problems Questions asked on SO that got downvoted by the heavy handed moderation would have been answered by LLMs without any of the flak whatsoever. Those who had downvoted other's questions on SO for not being good enough, must be asking a lot of such not good enough questions to an LLM today. Sure, the SO system worked, but it was user hostile and I'm glad we all don't have to deal with it anymore. 48. I spent the last 14 days chasing an issue with a Spark transform. Gemini and Claude were exceptionally good at giving me answers that looked perfectly reasonable: none of them worked, they were almost always completely off-road. Eventually I tried with something else, and found a question on stackoverflow, luckily with an answer. That was the game changer and eventually I was able to find the right doc in the Spark (actually Iceberg) website that gave me the final fix. This is to say that LLMs might be more friendly. But losing SO means that we're getting an idiot friendly guy with a lot of credible but wrong answers in place of a grumpy and possibly toxic guy which, however, actually answered our questions. Not sure why someone is thinking this is a good thing. 49. The fundamental difference between asking on SO and asking an LLM is that SO is a public forum, and an LLM will be communicated with in private. This has a lot of implications, most of which surround the ability for people to review and correct bad information. 50. The other major benefit of SO being a public forum is that once a question was wrestled with and eventually answered, other engineers could stumble upon and benefit from it. With SO being replaced by LLMs, engineers are asking LLMs the same questions over and over, likely getting a wide range of different answers (some correct and others not) while also being an incredible waste of resources. 51. Surely the fundamental difference is one asks actual humans who know what's right vs statistical models that are right by accident. 52. Providing context to ask a Stack Overflow question was time-consuming. In the time it takes to properly format and ask a question on Stack Overflow, an engineer can iterate through multiple bad LLM responses and eventually get to the right one. The stats tell the uncomfortable truth. LLMs are a better overall experience than Stack Overflow, even after accounting for inaccurate answers from the LLM. Don't forget, human answers on Stack Overflow were also often wrong or delayed by hours or days. I think we're romanticizing the quality of the average human response on Stack Overflow. 53. That's only because of LLMs consuming pre-existing discussions on SO. They aren't creating novel solutions. 54. What I'm appreciating here is the quality of the _best_ human responses on SO. There are always a number of ways to solve a problem. A good SO response gives both a path forward, and an explanation why, in the context of other possible options, this is the way to do things. LLMs do not automatically think of performance, maintainability, edge cases etc when providing a response, in no small part because they do not think. An LLM will write you a regex HTML parser.[0] The stats look bleak for SO. Perhaps there's a better "experience" with LLMs, but my point is that this is to our detriment as a community. [^0]: He comes, https://stackoverflow.com/questions/1732348/regex-match-open... 55. This comment and the parent one make me realize that people who answer probably value the exchange between experts more than the answer. Perhaps the antidote involves a drop of the poison. Let an LLM answer first, then let humans collaborate to improve the answer. Bonus: if you can safeguard it, the improved answer can be used to train a proprietary model. 56. > I don't think this is something that LLMs can ever replicate. They don't have the egos and they certainly don't have the experience Interesting question - the result is just words so surely a LLM can simulate an ego. Feed it the Linux kernel mailing list? Isn’t back and forth exactly what the new MoE thinking models attempt to simulate? And if they don’t have the experience that is just a question of tokens? 57. SO was somewhere people put their hard won experience into words, that an LLM could train on. That won't be happening anymore, neither on SO or elsewhere. So all this hard won experience, from actually doing real work, will be inaccessible to the LLMs. For modern technologies and problems I suspect it will be a notably worse experience when using an LLM than working with older technologies. It's already true for example, when using the Godot game engine instead of Unity. LLMs constantly confuse what you're trying to do with Unity problems, offer Unity based code solutions etc. 58. > Isn’t back and forth exactly what the new MoE thinking models attempt to simulate? I think the name "Mixture of Experts" might be one of the most misleading labels in our industry. No, that is not at all what MoE models do. Think of it rather like, instead of having one giant black box, we now have multiple smaller opaque boxes of various colors, and somehow (we don't really know how) we're able to tell if your question is "yellow" or "purple" and send that to the purple opaque box to get an answer. The result is that we're able to use less resources to solve any given question (by activating smaller boxes instead of the original huge one). The problem is we don't know in advance which questions are of which color: it's not like one "expert" knows CSS and the other knows car engines. It's just more floating point black magic, so "How do I center a div" and "what's the difference between a V6 and V12" are both "yellow" questions sent to the same box/expert, while "How do I vertically center a div" is a red question, and "what's the most powerful between a V6 and V12" is a green question which activates a completely different set of weights. 59. You can ask an LLM to provide multiple approaches to solutions and explore the pros and cons of each, then you can drill down and elaborate on particular ones. It works very well. 60. It's flat wrong to suggest SO had the right answer all the time, and in fact in my experience for trickier work it was often wrong or missing entirely. LLMs have a better hit rate with me. 61. The example wasn't even finding a right answer so I don't see where you got that.. Searching questions/answers on SO can surface correct paths on situations where the LLMs will keep giving you variants of a few wrong solutions, kind of like the toxic duplicate closers.. Ironically, if SO pruned the history to remove all failures to match its community standards then it would have the same problem. 62. "But losing SO means that we're getting an idiot friendly guy with a lot of credible but wrong answers in place of a grumpy and possibly toxic guy which, however, actually answered our questions." > "actually answered our questions." Read carefully. 63. Yes, it does answer you question, when the site lets it go through. Note that "answers your question" does not mean "solving your problem". Sometimes the answer to a question is "this is infeasible because XYZ" and that's good feedback to get to help you re-evaluate a problem. Many LLMs still struggle with this and would rather give a wrong answer than a negative one. That said, the "why don't you use X" response is practically a stereotype for a reason. So it's certainly not always useful feedback. If people could introspect and think "can 'because my job doesn't allow me to install Z' be a valid response to this", we'd be in a true Utopia. 64. >> Eventually I tried with something else, and found a question on stackoverflow, luckily with an answer. That was the game changer and eventually I was able to find the right doc Read carefully and paraphrase to the generous side. The metaphor that follows that is obviously trying to give an example of what might be somehow lost. 65. Interpreting that claim as "SO users always, 100% of the time answer questions correctly" is uncharitable to the point of being unreasonable. Most people would interpret the claim as concisely expressing that you get better accuracy from grumpy SO users than friendly LLMs. 66. For the record I was interpreting that as LLMs are useless (which may have been just as uncharitable), which I categorically deny. I would say they're about just as useful without wading through the mire that SO was. 67. I'm hoping increasing we'll see agents helping with this sort of issue. I would like an agent that would do things like pull the spark repo into the working area and consult the source code/cross reference against what you're trying to do. Once technique I've used successfully is to do this 'manually' to ensure codex/Claude code can grep around the libraries I'm using 68. You still get the same thing though? That grumpy guy is using an LLM and debugging with it. Solves the problem. AI provider fine tunes their model with this. You now have his input baked into it's response. How you think these things work? It's either a human direct input it's remembering or a RL enviroment made by a human to solve the problem you are working on. Nothing in it is "made up" it's just a resolution problem which will only get better over time. 69. Because what you’re describing is the exception. Almost always with LLM’s I get a better solution, or helpful pointer in the direction of a solution, and I get it much faster. I honestly don’t understand anyone could prefer Google/SO, and in fact that the numbers show that they don’t. You’re in an extreme minority. 70. > Would you still defend your position if the “grumpy” guy answered in Linus’ style? If they answered correctly, yes. My point is that providing _actual knowledge_ is by itself so much more valuable compared to _simulated knowledge_, in particular when that simulated knowledge is hyper realistic and wrong. 71. Not a big surprise once LLMs came along: stack overflow developed some pretty unpleasant traits over time. Everything from legitimate questions being closed for no good reason (or being labeled a duplicate even though they often weren’t), out of date answers that never get updated as tech changes, to a generally toxic and condescending culture amongst the top answerers. For all their flaws, LLMs are so much better. 72. Agreed. I personally stopped contributing to StackOverflow before LLMs, because of the toxic moderation. Now with LLMs, I can't remember the last time I visited StackOverflow. 73. No, you don't. Not only there are many examples of detailed stackoverflow articles written by absolute experts, you also need answer often for something trivial(which is like half of my chatgpt), e.g. how to export in pgadmin, or a nondescriptive error in linux. 74. If you read parent's comment it's not "I feel like" comment even though he mentioned it. I have been in software engineering for long and the queries to stackoverflow/chatgpt combined haven't decreased for me. 75. This has been my experience. My initial (most popular) questions (and I asked almost twice as many questions, as I gave answers) were pretty basic, but they started getting a lot more difficult, as time went on, and they became unanswered, almost always (I often ended up answering my own question, after I figured it out on my own). I was pretty pissed at this, because the things I encountered, were the types of things that people who ship, encounter; not academic exercises. Tells me that, for all the bluster, a lot of folks on there, don't ship. LLMs may sometimes give pretty sloppy answers, but they are almost always ship-relevant. 76. Yeah, I think this is the real answer. I still pop into SO when in learning a new language or trip into new simple questions (in my case, how to connect and test a local server). But when you're beyond the weeds, SO is as best an oasis in the desert. Half the time a mirage, nice when it does help out. But rare either way. I don't use LLMs eother. But the next generation might feel differently and those trends mean there's no new users coming in. 77. Maybe there's a key idea for something to replace StackOverflow as a human tech Q&A forum: Having a system which somehow incentivizes asking and answering these sorts of challenging and novel questions. These are the questions which will not easily be answered using LLMs, as they require more thought and research. 78. > the more experienced you become, the less useful it is This is killer feature of LLMs - you will not became more experienced. 79. Gen 0: expertsexchange.com, later experts-exchange.com (1996) Gen 1: stackoverflow.com (2008) Gen 2: chatgpt.com (2022, sort of) 80. None of those worked as programming tools. I really miss Google Answers though, with the bounties. Random example: http://answers.google.com/answers/threadview/id/762357.html It's remarkable how similar in style the answers are to what we all know from e.g. chatgpt. 81. Which is why LLMs are so much more useful than SO and likely always will be. LLMs do this even. Like trying to write my own queue by scratch and I ask an LLM for feedback I think it’s Gemini that often tells me Python’s deque is better. duh! That’s not the point. So I’ve gotten into the habit of prefacing a lot of my prompts with “this is just for practice” or things of that nature. It actually gets annoying but it’s 1,000x more annoying finding a question on SO that is exactly what you want to know but it’s closed and the replies are like “this isn’t the correct way to do this” or “what you actually want to do is Y” 82. Stack Overflow would still have a vibrant community if it weren't for the toxic community. Imagine a non-toxic Stack Overflow replacement that operated as an LLM + Wiki (CC-licensed) with a community to curate it. That seems like the sublime optimal solution that combines both AI and expertise. Use LLMs to get public-facing answers, and the community can fix things up. No over-moderation for "duplicates" or other SO heavy-handed moderation memes. Someone could ask a question, an LLM could take a first stab at an answer. The author could correct it or ask further questions, and then the community could fill in when it goes off the rails or can't answer. You would be able to see which questions were too long-tail or difficult for the AI to answer, and humans could jump in to patch things up. This could be gamified with points. This would serve as fantastic LLM training material for local LLMs. The authors of the site could put in a clause saying that "training is allowed as long as you publish your weights + model". Someone please build this. Edit: Removed "LLMs did not kill Stack Overflow." first sentence as suggested. Perhaps that wasn't entirely accurate, and the rest of the argument stands better on its own legs. 83. Fixing loads of LLM-generated content is neither easy nor fun. You'll have a very hard time getting people to do that. 84. Hardly. - A huge number of developers will want to use such a tool. Many of them are already using AI in a "single player" experience mode. - 80% of the answers will be correct when one-shot for questions of moderate difficulty. - The long tail of "corrector" / "wiki gardening" / pedantic types fill fix the errors. Especially if you gamify it. Just because someone doesn't like AI doesn't mean the majority share the same opinion. AI products are the fastest growing products in history. ChatGPT has over a billion MAUs. It's effectively won over all of humanity. I'm not some vibe coder. I've been programming since the 90's, including on extremely critical multi-billion dollar daily transaction volume infra, yet I absolutely love AI. The models have lots of flaws and shortcomings, but they're incredibly useful and growing in capability and scope -- I'll stand up and serve as your counter example. 85. People answer on SO because it's fun. Why should they spend their time fixing AI answers? It's very tedious as the kind of mistakes LLMs make can be rather subtle and AI can generate a lot of text very fast. It's a sisyphean taks, I doubt enough people would do it. 86. Your points are arguing that the tool would be useful - not that anyone would build it. No one wants to curate what is, essentially, randomly generated text. What an absolute nightmare that would be 87. > essentially, randomly generated text. You oversimplified and lost too much precision. Try again? 88. I just think you could save a lot of money and energy doing all this but skipping the LLM part? Like what is supposed to be gained? The moment/act of actual generation of lines of code or ideas, whether human or not, is a much smaller piece of the pie relative to ongoing correction, curation, etc (like you indicate). Focusing on it and saying it intrinsically must/should come from the LLM mistakes the intrinsically ephemeral utility of the LLMs and the arguably eternal nature of the wiki at the same time. As sibling says, it turns it into work vs the healthy sharing of ideas. The whole pitch here just feels like putting gold flakes on your pizza: expensive and would not be missed if it wasn't there. Just to say, I'm maybe not as experienced and wise I guess but this definitely sounds terrible to me. But whatever floats your boat I guess! 89. > Someone could ask a question, an LLM could take a first stab at an answer. The author could correct it or ask further questions, and then the community could fill in when it goes off the rails or can't answer. Isn't this how Quora is supposed to operate? 90. Oh yeah. My favorite feature of LLMs, is the only dumb question, is the one I don't ask. I guess someone could train an LLM to be spiteful and nasty, but that would only be for entertainment. 91. If you say the wrong thing to grok, it will go off on you. It's quite entertaining! 92. That depends on what you mean by "came along". If you mean "once that everyone got around to the idea that LLMs were going to be good at this thing" then sure, but it was not long ago that the majority of people around here were very skeptical of the idea that LLMs would ever be any good at coding. 93. What you're arguing about is the field completely changing over 3 years; it's nothing, as a time for everyone to change their minds. LLMs were not productified in a meaningful way before ChatGPT in 2022 (companies had sufficiently strong LLMs, but RLHF didn't exist to make them "PR-safe"). Then we basically just had to wait for LLM companies to copy Perplexity and add search engines everywhere (RAG already existed, but I guess it was not realistic to RAG the whole internet), and they became useful enough to replace StackOverflow. 94. I dont think this is true. People were skeptical of agi / better than human coding which is not the case. As a matter of fact i think searching docs was one of the first manor uses of llms before code. 95. That's because there has been rapid improvement by LLMs. Their tendency to bullshit is still an issue, but if one maintains a healthy skepticism and uses a bit of logic it can be managed. The problematic uses are where they are used without any real supervision. Enabling human learning is a natural strength for LLMs and works fine since learning tends to be multifaceted and the information received tends to be put to a test as a part of the process. 96. all true, but i still find myself ask questions there after llm gave wrong answers and wasted my time 97. How can we be sure that LLMs won't start giving stale answers? 98. They will, but model updates and competition help solve the problem. If people find that Claude consistently gives better/more relevant answers over GPT, for example, people will choose the better model. The worst thing with Q/A sites isn't they don't work. It's that they there are no alternatives to stackoverflow. Some of the most upvoted answers on stackoverflow prove that it can work well in many cases, but too bad most other times it doesn't. 99. >For all their flaws, LLMs are so much better But LLMs get their answers from StackOverflow and similar places being used as the source material. As those start getting outdated because of lack of activity, LLMs won't have the source material to answer questions properly. 100. I regularly use Claude and friends where I ask it to use the web to look at specific GitHub repos or documentation to ask about current versions of things. The “LLMs just get their info from stack overflow” trope from the GPT-3 days is long dead - they’re pretty good at getting info that is very up to date by using tools to access the web. In some cases I just upload bits and pieces from a library along with my question if it’s particularly obscure or something home grown, and they do quite well with that too. Yes, they do get it wrong sometimes - just like stack overflow did too. 101. The amount of docs that have a “Copy as markdown” or “Copy for AI” button has been noticeably increasing, and really helps the LLM with proper context. 102. StackOverflow answers are outdated. Every time I end up on that site these days, I find myself reading answers from 12 years ago that are no longer relevant. 103. Now they can read the documentation and code in the repo directly and answer based on that. 104. Yep, LLMs are perfect for the "quick buy annoying to answer 500 times" questions about writing a short script, or configuring something, or using the right combination of command line parameters. Quicker than searching the entirety of Google results and none of the attitude. 105. > For all their flaws, LLMs are so much better. For now. They still need to be enshitted. 106. Models are check-pointed. You can save one you like and use it forever. 107. You can save an open source + open weights model, which is frozen in time. That’s still very useful for some things but lacks knowledge of current data. So we’ll end up with a choice of low-performing stale models or high-performing enshittified models which know about more current information. 108. Open source models get updated all the time. You'd only be a few months behind. 109. There are open source models you yourself or a trusted third party can run. No ads. 110. Yup. Like Claude 3 Opus. 111. Really? I thought you could only do that with open source models. Can you teach me how to checkpoint the current version of Claude Code so I can keep it as-is forever? 112. Indeed. StackOverflow was by far the most unpleasant website that I have regularly interacted with. Sometimes, just seeing how users were treated there (even in Q&A threads that I wasn’t involved in at all) disturbed me so much it was actually interfering with my work. I’m so, so glad that I can now just ask an AI to get the same (or better) answers, without having to wade through the barely restrained hate on that site. 113. This change was happening well before LLMs. People were tired of being yelled at and treated poorly. A cautionary tale for many of these types of tech platforms, this one included. 114. They will no doubt blame this on AI, somehow (ChatGPT release: late 2022, decline start: mid 2020), instead of the toxicity of the community and the site's goals of being a knowledgebase instead of a QA site despite the design. PS - This comment is closed as a [duplicate] of this comment: https://news.ycombinator.com/item?id=46482620 115. Right. I often end up on Stack Exchange when researching various engineering-related topics, and I'm always blown away by how incredibly toxic the threads are. We get small glimpses of that on HN, but it was absolutely out of control on Stack Exchange. At the same time, I think there was another factor: at some point, the corpus of answered questions has grown to a point where you no longer needed to ask, because by default, Google would get you to the answer page. LLMs were just a cherry on top. 116. Other tech support forums are terrible in other ways. AI is a godsend. Typical response: I am RJ, an Independent Advisor and Microsoft Gold Certified Support Specialist Enthusiast. I know how your system is not functioning as desired! Rest assured, I am here to help you resolve this today. Please follow these steps in order. Do not skip any steps. Step 1: Reboot your computer Step 2: Reinstall windows Step 3: Contact Microsoft support Did this resolve your issue? [ Yes ] [ No ] If this helped, please mark this as the Answer and give me a 5-star rating so I can continue providing high-quality, scripted responses to other users! Standard Disclaimer: I do not work for Microsoft. I am an independent volunteer who enjoys copying and pasting from a manual written in 2014. 117. There is an obvious acceleration of the downwards trend at the time ChatGPT got popular. AI is clearly a part of this, but not the only thing that affects SO activity. 118. I wonder if we can attribute some $billion of the investment in LLMs directly to the toxicity on StackOverflow. 119. Ironically they could probably do some really useful deduplication/normalization/search across questions and answers using AI/embeddings today, if only they’d actually allowed people to ask the same questions infinite different ways, and treated the result of that as a giant knowledge graph. I was into StackOverflow in the early 2010s but ultimately stopped being an active contributor because of the stupid moderation. 120. Use of GPT3 among programmers started 2021 with GitHub Copilot which preceded ChatGPT. I agree the toxic moderation (and tone-deaf ownership!) initiated the slower decline earlier that then turned into the LLM landslide. Tbf SO also suffered from its own success as a knowledgebase where the easy pickings were long gone by then. 121. It is sort of because of AI - it provided a way of escaping StackOverflow's toxicity! 122. Could view it as push/pull dynamics: pushed away by toxicity, pulled to good answers from AI. 123. I once published a method for finding the closest distance between an ellipse and a point on SO: https://stackoverflow.com/questions/22959698/distance-from-g... I consider it the most beautiful piece of code I've ever written and perhaps my one minor contribution to human knowledge. It uses a method I invented, is just a few lines, and converges in very few iterations. People used to reach out to me all the time with uses they had found for it, it was cited in a PhD and apparently lives in some collision plugin for unity. Haven't heard from anyone in a long time. It's also my test question for LLMs, and I've yet to see my solution regurgitated. Instead they generate some variant of Newtons method, ChatGPT 5.2 gave me an LM implementation and acknowledged that Newtons method is unstable (it is, which is why I went down the rabbit hole in the first place.) Today I don't know where I would publish such a gem. It's not something I'd bother writing up in a paper, and SO was the obvious place were people who wanted an answer to this question would look. Now there is no central repository, instead everyone individually summons the ghosts of those passed in loneliness. 124. And everything is “fact checked” by the Grok LLM. Which… Yeah… https://en.wikipedia.org/wiki/Grok_(chatbot)#Controversies 125. StackOverflow is famously obnoxious about questions badly asked, badly categorized, duplicated… It’s actually a topic on which StackOverflow would benefit from AI A LOT. Imagine StackOverflow rebrands itself as the place where you can ask the LLM and it benefits the world, whoch correctly rephrasing the question behind the scenes and creating public records for them. 126. The company tried this. It fell through immediately. So they went away, and came back with a much improved version. It also fell through immediately. Turns out, this idea is just bad: LLMs can't rephrase questions accurately, when those questions are novel, which is precisely the case that Stack Overflow needs. For the pedantic: there were actually three attempts, all of which failed. The question title generator was positively received ( https://meta.stackexchange.com/q/388492/308065 ), but ultimately removed ( https://meta.stackoverflow.com/q/424638/5223757 ) because it didn't work properly, and interfered with curation. The question formatting assistant failed obviously and catastrophically ( https://meta.stackoverflow.com/a/425167/5223757 ). The new question assistant failed in much the same ways ( https://meta.stackoverflow.com/a/432638/5223757 ), despite over a year of improvements, but was pushed through anyway. 127. This is an excellent piece of information that I didn’t have. If the company with most data can’t succeed, then it seems like a really hard problem. On the side, they can understand why humans couldn’t do it either. 128. Yeah, I suspect that a lot of the decline represented in the OP's graph (starting around early 2020) is actually discord and that LLMs weren't much of a factor until ChatGPT 3.5 which launched in 2022. LLMs have definitely accelerated Stackoverflow's demise though. No question about that. Also makes me wonder if discord has a licensing deal with any of the large LLM players. If they don't then I can't imagine that will last for long. It will eventually just become too lucrative for them to say no if it hasn't already. 129. I believe the community has seen the benefit of forums like SO and we won’t let the idea go stale. I also believe the current state of SO is not sustainable with the old guard flagging any question and response you post there. The idea can/should/might be re-invented in an LLM context and we’re one good interface away from getting there. That’s at least my hope. 130. I had a similar beautiful experience where an experienced programmer answered one of my elementary JavaScript typing questions when I was just starting to learn programming. He didn't need to, but he gave the most comprehensive answer possible attacking the question from various angles. He taught me the value of deeply understanding theoretical and historical aspects of computing to understand why some parts of programming exist the way they are. I'm still thankful. If this was repeated today, an LLM would have given a surface level answer, or worse yet would've done the thinking for me obliviating the question in the first place. I wrote a blog post about my experience at https://nmn.gl/blog/ai-and-learning 131. Had a similar experience. Asked a question about a new language feature in java 8 (parallell streams), and one of the language designers (Goetz) answered my question about the intention of how to use it. An LLM couldn't have done the same. Someone would have to ask the question and someone answer it for indexing by the LLM. If we all just ask questions in closed chats, lots of new questions will go unanswered as those with the knowledge have simply not been asked to write the answers down anywhere. 132. You can prompt the LLM to not just give you the answer. Possibly even ask it to consider the problem from different angles but that may not be helpful when you don't know what you don't know. 133. Has anyone tried building a modern Stack Overflow that's actually designed for AI-first developers? The core idea: question gets asked → immediately shows answers from 3 different AI models. Users get instant value. Then humans show up to verify, break it down, or add production context. But flip the reputation system: instead of reputation for answers, you get it for catching what's wrong or verifying what works. "This breaks with X" or "verified in production" becomes the valuable contribution. Keep federation in mind from day one (did:web, did:plc) so it's not another closed platform. Stack Overflow's magic was making experts feel needed. They still do—just differently now. 134. Oh, so it wasn't bad enough to spot bad human answers as an expert on Stack Overflow... now humans should spend their time spotting bad AI answers? How about a model where you ask a human and no AI input is allowed, to make sure that everyone has everyone else's full attention? 135. Why disallow AI input? Is it that poor? Surely it isn't. 136. The entire purpose of answering questions as an "expert" on S.O. is/was to help educate people who were trying to learn how to solve problems mostly on their own. The goal isn't to solve the immediate problem, it's to teach people how to think about the problem so that they can solve it themselves the next time. The use of AI to solve problems for you completely undermines that ethos of doing it yourself with the minimum amount of targeted, careful questions possible . 137. What's the point of AI on a site like that? Wouldn't you just ask an LLM directly if you were fine with AI answers? 138. You're absolutely correct, but the scary thing is this: What happens when a whole generation grows up not knowing how to answer another person's question without consulting AI? [edit] It seems to me that this is a lot like the problem which bar trivia nights faced around the inception of the smartphone. Bar trivia nights did, sporadically and unevenly, learn how to evolve questions themselves which couldn't be quickly searched online. But it's still not a well-solved problem. When people ask "why do I need to remember history lessons - there is an encyclopedia", or "why do I need to learn long division - I have a calculator", I guess my response is: Why do we need you to suck oxygen? Why should I pay for your ignorance? I'm perfectly happy to be lazy in my own right, but at least I serve a purpose. My cat serves a purpose. If you vibe code and you talk to LLMs to answer your questions...I'm sorry, what purpose do you serve? 139. I and many others already go the extra mile to ask multiple LLM's for hard questions or for getting a diversity of AI opinions to then internalize and cross check myself. There are apps that build up a nice sized user base on this small convenience aded of getting 2 answers at once REF https://lmarena.ai/ https://techcrunch.com/2025/05/21/lm-arena-the-organization-... All the major AI companies of course do not want to give you the answers from other AI's so this service needs to be a third party. But then beyond that there are hard/niche questions where the AI's are wrong often and humans also have a hard time getting it right, but with a larger discussion and multiple minds chewing the problem one can get to a more correct answer often by process of elimination. I encountered this recently in a niche non-US insurance project and I basically coded together the above as an internal tool. AI suggestions + human collaboration to find the best answer. Of course in this case everyone is getting paid to spend time with this thing so more like AI first Stack Overflow Internal. I have no evidence that an public version would do well when ppl don't get paid to commend and rate. 140. I had a conversation with a couple accountants / tax-advisor types about them participating in something like this for their specialty. And the response was actually 100% positive because they know that there is a part of their job that the AI can never take 1) filings requires you to have a human with a government approved license 2) There is a hidden information about what tax optimization is higher or lower risk based on their information from their other clients 3) Humans want another human to make them feel good that their tax situation is taken care of well. But also many said that it would be better if one wraps this in an agency so the leads that are generated from the AI accounting questions only go to a few people instead of making it fully public stackexchange like. So +1 point -1 point for the idea of a public version. 141. That seems like a horrible core idea. How is that different from data labeling or model evaluation? Human beings want to help out other human beings, spread knowledge and might want to get recognition for it. Manually correcting (3 different) automation efforts seems like incredible monotone, unrewarding labour for a race to the bottom. Nobody should spend their time correcting AI models without compensation. 142. I think this could be really cool, but the tricky thing would be knowing when to use it instead of just asking the question directly to whichever AI. It’s hard to know that you’ll benefit from the extra context and some human input unless you already have a pretty good idea about the topic. 143. Presumably over time said AI could figure out if your question had already been answered and in that case would just redirect you too the old thread instead. 144. AI is generally setup to return the "best" answer as defined as the most common answer, not the rightest, or most efficient or effective answer, unless the underlying data leans that way. It's why AI based web search isn't behaving like google based search. People clicking on the best results really was a signal for google on what solution was being sought. Generally, I don't know that LLMs are covering this type of feedback loop. 145. thanks for sharing that, it was simple, neat, elegant. this sent me down a rabbit hole -- I asked a few models to solve that same problem, then followed up with a request to optimize it so it runs more efficiently. chatgpt & gemini's solutions were buggy, but claude solved it, and actually found a solution that is even more efficient. It only needs to compute sqrt once per iteration. It's more complex however. yours claude ------------------------------ Time (ns/call) 40.5 38.3 sqrt per iter 3 1 Accuracy 4.8e-7 4.8e-7 Claude's trick: instead of calling sin/cos each iteration, it rotates the existing (cos,sin) pair by the small Newton step and renormalizes: // Rotate (c,s) by angle dt, then renormalize to unit circle float nc = c + dt*s, ns = s - dt*c; float len = sqrt(nc*nc + ns*ns); c = nc/len; s = ns/len; See: https://gist.github.com/achille/d1eadf82aa54056b9ded7706e8f5... p.s: it seems like Gemini has disabled the ability to share chats can anyone else confirm this? 146. Nice, that worked. It's even faster. yours yours+opt claude --------------------------------------- Time (ns) 40.9 36.4 38.7 sqrt/iter 3 2 1 Instructions 207 187 241 Edit: it looks like the claude algorithm fails at high eccentricities. Gave chatgpt pro more context and it worked for 30min and only made marginal improvement on yours, by doing 2 steps then taking a third local step. https://gist.github.com/achille/23680e9100db87565a8e67038797... 147. On the other hand, I once implemented something to be told later it was novel and probably the optimal solution in the space. An AI might be more likely to find it... 148. Models are NOT search engines. Even if LLMs were trained on the answer, that doesn't mean they'll ever recommend it. Regardless of how accurate it may be. LLMs are black box next token predictors and that's part of the issue. 149. Why did SO decide to do that to us? to not invest in ai and then, iirc, claim our contributions their ownership. i sometimes go back to answers i gave, even when answered my own questions. 150. The graph is scary, but I think it's conflating two things: 1. Newbies asking badly written basic questions, barely allowed to stay, and answered by hungry users trying to farm points, never to be re-read again. This used to be the vast majority of SO questions by number. 2. Experiencied users facing a novel problem, asking questions that will be the primary search result for years to come. It's #1 that's being canibalized by LLM's, and I think that's good for users. But #2 really has nowhere else to go; ChatGPT won't help you when all you have is a confusing error message caused by the confluence of three different bugs between your code, the platform, and an outdated dependency. And LLMs will need training data for the new tools and bugs that are coming out. 151. I’m going to argue the opposite. LLMs are fantastic at answering well posed questions. They are like chess machines evaluating a tonne of scenarios. But they aren’t that good at guessing what you actually have on your mind. So if you are a novice, you have to be very careful about framing your questions. Sometimes, it’s just easier to ask a human to point you in the right direction. But SO, despite being human, has always been awful to novices. On the other hand, if you are experienced, it’s really not that difficult to get what you need from an LLM, and unlike on SO, you don’t need to worry about offending an overly sensitive user or a moderator. LLMs never get angry at you, they never complain about incorrect formatting or being too lax in your wording. They have infinite patience for you. This is why SO is destined to be reduced to a database of well structured questions and answers that are gradually going to become more and more irrelevant as time goes by. 152. Yes, LLMs are great at answering questions, but providing reasonable answers is another matter. Can you really not think of anything that hasn't already been asked and isn't in any documentation anywhere? I can only assume you haven't been doing this very long. Fairly recently I was confronted with a Postgres problem, LLMs had no idea, it wasn't in the manual, it needed someone with years of experience. I took them IRC and someone actually helped me figure it out. Until "AI" gets to the point it has run software for years and gained experience, or it can figure out everything just by reading the source code of something like Postgres, it won't be useful for stuff that hasn't been asked before. 153. > - Success: all the basic questions were answered, and the complex questions are hard to ask. I think this is one major factor that is not getting enough consideration in this comment thread. By 2018-2020, it felt like the number of times that someone else had already asked the question had increased to the point that there was no reason to bother asking it. Google also continued to do a better and better job of surfacing the right StackOverflow thread, even if the SO search didn't. In 2012 you might search Google, not find what you needed, go to StackOverflow, search and have no better luck, then make a post (and get flamed for it being a frequently-asked question but you were phrasing yours in a different / incorrect way and didn't find the "real" answer). In 2017, you would search Google and the relevant StackOverflow thread would be in the top few results, so you wouldn't need to post and ask. In 2020, Google's "rich snippets" were showing you the quick answers in the screen real estate that is now used by the AI Overview answers, and those often times had surfaced some info taken from StackOverflow. And then, at the very end of 2022, ChatGPT came along and effectively acted as the StackOverflow search that you always wanted - you could phrase your question as poorly as you want, no one would flame you, and you'd get some semblance of the correct answer (at least for simple questions). I think StackOverflow was ultimately a victim of it's own success. Most of the questions that would be asked by your normal "question asker" type of user were eventually "solved" and it was just a matter of how easy it was to find them. Google, ChatGPT, "AI Overviews", Claude Code, etc have simply made finding those long-answered questions much easier, as well as answering all of the "new" questions that could be posed - and without all of the drama and hassle of dealing with a human-moderated site. 154. Not sure. As software becomes a commodity I can see the "old school" like tech slowing down (e.g. programming languages, frameworks frontend and backend, etc). The need for a better programming language is less now since LLM's are the ones writing code anyway more so these days - the pain isn't felt necessarily by the writer of the code to be more concise/expressive. The ones that do come out will probably have more specific communities for them (e.g. AI) 155. The new owners (well, not really new any more) are focused on adding AI to SO because it's the current hotness, and making other changes to try to extract more money that they're completely ignoring the community's issues and objections to their changes, which tend to be half-assed and full of bugs. 156. The obvious culprit here are the LLMs, but I do wonder whether Github's social features, despite its flaws, have given developers fewer reasons to ask questions on SO? Speaking from experience, every time I hit a wall with my projects, I would instinctively visit the project's repo first, and check on the issues / discussions page. More often than not, I was able to find someone with an adjacent problem and get close enough to a solution just by looking at the resolution. If it all failed, I would fall back to asking questions on the discussion forum first before even considering to visit SO. 157. As someone that spent a fair bit of time answering questions on StackOverflow, what stood out years ago was how much the same thing would be asked every day. Countless duplicates. That has all but ceased with LLMs taking all that volume. Honestly, I don't think that's a huge loss for the knowledge base. The other thing I've noticed lately is a strong push to get non-programming questions off StackOverflow, and on to other sites like SuperUser, ServerFault, DevOps, etc. Unfortunately, what's left is so small I don't think there's enough to sustain a community. Without questions to answer, contributors providing the answers disappear, leaving the few questions there often unanswered. 158. I do use Claude a lot, but I still regularly ask questions on https://bioinformatics.stackexchange.com/ . It's often just too niche, LLMs hallucinate stuff like an entire non-existent benchmarking feature in Snakemake, or can't explain how I should get transcriptome aligners to give me correct quantifications for a transcript. I guess it's too niche. And as a lonely Bioinformatician it can be nice to get confirmation from other bioinformaticians. Looking back at my Stack Exchange/Stack Overflow (never really got the difference) history, my earlier, more general programming questions from when I just started are all no-brainers for any LLM. 159. I joined Stackoverflow early on since it had a prevalence towards .NET and I’ve been working with Microsoft web technologies since the mid 90’s. My SO account is coming up to 17 years old and I have nearly 15,000 points, 15 gold badges, including 11 famous questions and similar famous answer badges, also 100 silver and 150 bronze. I spent far much time on that site in the early days, but through it, I also thoroughly enjoyed helping others. I also started to publish articles on CodeProject and it kicked off my long tech blogging “career”, and I still enjoy writing and sharing knowledge with others. I have visited the site maybe once a year since 2017. It got to the point that trying to post questions was intolerable, since they always got closed. At this point I have given up on it as a resource, even though it helped me tremendously to both learn (to answer questions) and solve challenging problems, and get help for edge cases, especially on niche topics. For me it is a part of my legacy as a developer for over 30 years. I find it deeply saddening to see what it has become. However I think Joel and his team can be proud of what they built and what they gave to the developer community for so many years. As a side note it used to state that was in the top 2% of users on SO, but this metric seems to have been removed. Maybe it’s just because I’m on mobile that I can’t see it any more. LLM’s can easily solve those easy problems that have high commonality across many codebases, but I am dubious that they will be able to solve the niche challenging problems that have not been solved before nor written about. I do wonder how those problems get solved in the future. 160. This is horrifying. Given the fact that when I need a question answered I usually refer to S.O. , but more recently have taken suggestions from LLM models that were obviously trained on S.O. data... And given the fact that all other web results for "how do you change the scroll behavior on..." or "SCSS for media query on..." all lead to a hundred fake websites with pages generated by LLMs based on old answers. Destroying S.O. as a question/answer source leaves only the LLMs to answer questions. That's why it's horrific. 161. The decline is not surprising. I am sure AI is replacing Stackoverflow for a lot of people. And my experience with asking questions was pretty bad. I asked a few very specific questions about some deep detail in Windows and every time I got only some smug comments about my stupid question or the question got rejected outright. That while a ton of beginner questions were approved. Definitely not a very inviting club. I found i got better responses on Reddit. 162. Interestingly, stagnation started around 2014 (in the number of questions asked no longer rising,) and a visible decline started in 2020 [1]: two years before ChatGPT launched! It’s an interesting question if the decline would have happened regardless of LLMs, just slower? [1] An annotated visualization of the same data I did: https://blog.pragmaticengineer.com/are-llms-making-stackover... 163. >StackOverflow was a pub where programmers had fun while learning programming. The product of that fun was valuable. I really like this description. I and others here who are talking about negative experiences there seem to decry how we enjoy programming (you see words like "fun" and "passion" used in these posts), and how SO decided to take this good faith and cheer and bludgeon users for often opaque reasons, just so they could power trip. As much as I have many reservations about LLMs, I can ask LLMs to be as emotionless (or even emotional but chipper/happy) as I want. On SO, you needed to prostrate yourself and self-criticize to even have the opportunity to be bludgeoned further by the moderators. Who tf would want to spend their time contributing there? Even if you contributed a decent or even great amount to the site, you would still get whacked over the head if you dared to ask a question of your own. This is why people jumped to LLMs, even when they were far less capable than they are now. Most people (SO moderators don't view others as "people", as is apparent in this thread) would rather receive mid-tier answers from an LLM (though LLMs have now exceeded this level of quality) while still having fun, than get castigated and "closed as duped" on SO. 164. This is a huge loss. In the past people asked questions of real people who gave answers rooted in real use. And all this was documented and available for future learning. There was also a beautiful human element to think that some other human cared about the problem. Now people ask questions of LLMs. They churn out answers from the void, sometimes correct but not rooted in real life use and thought. The answers are then lost to the world. The learning is not shared. LLMs have been feeding on all this human interaction and simultaneously destroying it. 165. Do I read that correctly — it is close to zero today?! I used to think SO culture was killing it but it really may have been AI after all. 166. Still a couple thousand away from 0. But yea the double whammy of toxic culture and LLMs did the trick. Decline already set in well before good enough LLMs were available. I wonder how reddit compares, though its ofc pretty different use case there 167. AI didn't necessarily kill SO because it was strictly better at giving technical answers (and it certainly wasn't better when GPTs initially burst onto the mass-appeal scene several years ago), but that it provided an alternative (even if subpar) where users could actually get responses to their questions (and furthermore not be ridiculed by psychopaths while doing so was the cherry on top). 168. IMO people underestimate the value of heavy moderation. But moderation heavy or light, good or bad. Why wait hours for an answer when an LLM gives it in seconds? 169. LLMs caused this decline. Stop denying that. You don't have to defend LLMs from any perceived blame. This is not a bad thing. The steep decline in the early months of 2023 actually started with the release of ChatGPT, which is 2022-11-30, and its gradually widening availability to (and awareness of) the public from that date. The plot clearly shows that cliff. The gentle decline since 2016 does not invalidate this. Were it not for LLMs, the site's post rate would now probably be at around 5000 posts/day, not 300. LLMs are to "blame" for eating all the trivial questions that would have gotten some nearly copy-pasted answer by some eager reputation points collector, or closed as a duplicate, which nets nobody any rep. Stack Overflow is not a site for socializing . Do not mistake it for reddit. The "karma" does not mean "I hate you", it means "you haven't put the absolute minimum conceivable amount of effort into your question". This includes at least googling the question before you ask. If you haven't done that, you can't expect to impose on the free time of others. SO has a learning curve. The site expects more from you than just to show up and start yapping. That is its nature. It is "different" because it must be. All other places don't have this expectation of quality. That is its value proposition. 170. Here’s how SO could still be useful in the LLM era: User asks a question, llm provides an immediate answer/reply on the forum. But real people can still jump in to the conversation to add additional insights and correct mistakes. If you’re a user that asks a duplicate question, it’ll just direct you to the good conversation that already happened. A symbiosis of immediate usually-good-enough llm answers PLUS human generated content that dives deeper and provides reassurances in correctness 171. Users could upvote whether Claude, Gemini or ChatGPT provided the best answer. The best of three is surfaced, the others are hidden behind a "show alternatives." However, I can see how this would be labelled "shoving AI into everything" and "I'm not on SO for AI." 172. AI is a vampire. Coming to your corner of the world, to suck your economic blood, eventually. It’s hard to ignore the accelerated decline that started in late 2022/early 2023. 173. LLMs absolutely body-slammed SO, but anyone who was an active contributor knows the company was screwing over existing moderators for years before this. Writing was on the walls 174. If by "body-slammed" you mean "trained on SO user data while violating the terms of the CC BY-SA license", then sure. In the best case scenario, LLMs might give you the same content you were able to find on SO. In the common scenario, they'll hallucinate an answer and waste your time. What should worry everyone is what system will come after LLMs. Data is being centralized and hoarded by giant corporations, and not shared publicly. And the data that is shared is generated by LLMs. We're poisoning the well of information with no fallback mechanism. 175. > If by "body-slammed" you mean "trained on SO user data while violating the terms of the CC BY-SA license", then sure. You know that's not what they meant, but why bring up the license here? If they were over the top compliant, attributing every SO answer under every chat, and licensing the LLM output as CC BY-SA, I think we'd still have seen the same shift. > In the best case scenario, LLMs might give you the same content you were able to find on SO. In the common scenario, they'll hallucinate an answer and waste your time. Best case it gives you the same level of content, but more customized, and faster. SO being wrong and wasting your time is also common. 176. Those popups were a big contributor for me to stop using SO. I stopped updating my uBlock origin rules when LLMs became good enough. I am now using the free Kimi K2 model via Groq over CLI, which is much faster. 177. it is indeed a shame. if you are doing anything remotely new and novel, which is essential if you want to make a difference in an increasingly competitive field, LLMs confidently leave you with non-working solutions, or sometimes worse they set you on the wrong path. I had similar worries in the past about indexable forums being replaced by discord servers. the current situation is even worse. 178. SO peaked long, long before LLMs came along. My personal experience is that GitHub issues took over. You can clearly see the introduction of ChatGPT in late 2022. That was the final nail in the coffin. I am still really glad that Stack Overflow saved us from experts-exchange.com - or “the hyphen site” as it is sometimes referred to. 179. Good. This is what Stack Overflow wanted. They ban anyone who asks stupid questions, if not marking everything off topic. LLMs are a solid first response for new users, with Reddit being a nice backup. 180. This is concerning on two fronts. The questions are no longer open (SO is CC-BY-SA) and if Q&A content dies then this herds even more people towards LLM use. It's basically draining the commons. 181. One thing you won’t get with in an LLM is genuine research. I once answered a 550 point question by researching the source code of vim to see how the poster’s question could be resolved. [0] [0] https://stackoverflow.com/questions/619423/backup-restore-th... 182. Good riddance. There were some ok answers there, but also many bad or obsolete answers (leading to scrolling down find to find the low-ranked answer that sort of worked), and the moderator toxicity was just another showcase of human failure on top of that. It selected for assholes because they thought they had a captive, eternally renewing audience that did not have any alternative. And that resulted in the chilling effect of people not asking questions because they didn't want to run the moderation gauntlet, so the site's usefulness went even further down. Its still much less useful for recent tech, than it is for ancient questions about parsing HTML with regex and that sort of thing. LLMs are simply better in every way, provided they are trained on decent documents. And if I want them to insult me too, just for that SO nostalgia, I can just ask them to do that and they will oblige. Looking forward to forgetting that site ever existed, my brain's health will improve. 183. Don't lose sight of one of the dreams of the early Internet: How do we most effectively make a marketplace for knowledge problems and solutions that connects human knowledge needs with AI and human responses? It should be possible for me to put a question out there (not on any specific forum/site specific to the question), and have AI resource answer it and then have interested people weigh in from anywhere if the AI answer is unsatisfactory. Stackoverflow was the best we could do at the time, but now more general approach is possible. 184. I have a SO profile and I both contributed and used the site for some time. I use the site from time to time to research something. I know a lot more about software than 15 years ago. I used to ask questions and answer questions a lot, but after I matured I have no time and whatever I earn is not worth my time. So perhaps the content would grow in size and quality if they rewarded users with something besides XP. I don't use AI for research so far. I use AI to implement components that fit my architecture and often tests of components. 185. I still would like to get other humans' experiences and perspectives when it comes to solving some problems, I hope SO doesn't go away entirely. With LLMs, at least in my experience, they'll answer your question best they can, just as you asked it. But they won't go the extra step to make assumptions based on what they think you're trying to do and make recommendations. Humans do that, and sometimes it isn't constructive at all like "just use a different OS", but other times it could be "I don't know how to solve that, but I've had better lack with this other library/tool". 186. I find this quite worrying: with this much decline SO might end up disappearing. This would be a very bad thing because in some answers there are important details and nuances that you only see by looking at secondary answers and comments. Also, this seems to imply that most people will just accept the solutions proposed by LLMs without checking them, or ever talking about the subject with other humans. 187. Why do you think people stop creating new posts just because SO collapsed? People on GitHub issues and Reddit answer programming questions everyday. SO was dying even before ChatGPT was released. LLMs just accelerated that process. 188. Ideally, you'd train them on the core documentation of the language or tool itself. Hopefully, LLMs lead to more thorough documentation at the start of a new language, framework, or tool. Perhaps to the point of the documentation being specifically tailored to read well for the LLM that will parse and internalize it. Most of what StackOverflow was was just a regurgitation of knowledge that people could acquire from documentation or research papers. It obviously became easier to ask on SO than dig through documentation. LLMs (in theory) should be able to do that digging for you at lightning speed. What ended up happening was people would turn to the internet and Stack Overflow to get a quick answer and string those answers together to develop a solution, never reading or internalizing documentation. I was definitely guilty of this many times. I think in the long run it's probably good that Stack Overflow dies. 189. Now imagine what happens when a new programming language comes along. When we have a question, we will no longer be able to Google it and find answers to it on Stack Overflow. We will ask the LLMs. They will work it out. From that moment, the LLM we used has the knowledge for solving this particular problem. Over time, this produces huge moat for the largest providers. I believe it is one of the subtler reasons why the AI race is so fierce. 190. Obviously LLMs ate StackOverflow, but perhaps developers could keep it alive for much longer if they wanted to . LLMs provide answers, but only humans provide human contact. And that last part is where SO failed by allowing a few people power trip over the rest of us. Kind of like reddit does at times, but harder. I'm not sad. 191. LLMs did not eat SO, it was SO that fed the LLMs too well. https://meta.stackexchange.com/questions/399619/our-partners... 192. Man after reading some of the comments and looking at the graph I have learned a lesson. I went to SO all the time to find answers to questions, but I never participated. I mean they made it hard, but given the amount of benefit I gained I should've overcome that friction. If I and people like me had, maybe we could have diluted the moderation drama that others talk about (and that I, as a greedy user, never saw). Now it's a crap-shoot with an LLM instead of being able to peruse great answers from different perspectives to common problems and building out my own solution. 193. Interesting timing. I just analyzed TabNews (Brazilian dev community) and ~50% of 2025 posts mention AI/LLMs. The shift is real. The 2014 peak is telling. That's before LLMs, before the worst toxicity complaints. Feels like natural saturation, most common questions were already answered. My bet, LLMs accelerated the decline but didn't cause it. They just made finding those existing answers frictionless. 194. It's unfortunate that SO hasn't found a way to leverage LLMs. Lots of questions benefit from some initial search, which is hard enough that moderators likely felt frustrated with actual duplicates, or close enough duplicates, and LLMs seem able to assist. However I hope we don't lose the rare gem answers that SO also had, those expert responses that share not just a programming solution but deeper insight. 195. I think that SO leveraging LLMs implicitly. Like I'll always ask LLM first, that's the easiest option. And I'll only come to SO if LLM fails to answer. 196. I recently wrote a blog post similar to this situation: https://ertu.dev/posts/ai-is-killing-our-online-interaction/ 197. When you see AI giving you back various coding snippets almost verbatim from SO, it really makes you wonder what will happen in the future with AI when it can't depend on actual humans doing the work first. 198. There's no doubt that generally LLMs are better. In addition SO had its issues. That being said I can't help but worry about losing humans asking questions and humans answering questions. The sentimentality aside, if humans aren't posing questions and if humans aren't recommending answers, what are the models going to use? 199. For me, my usage of SO started declining as LLMs rose. Occasionally I still end up there, usually because a chat response referenced a SO thread. I was willing to put up with the toxicity as long as the site still had technical value for me. But still, machines leave me wanting. Where do people go to ask real humans novel technical questions these days? 200. > Where do people go to ask real humans novel technical questions these days? I don't think such generic place exists. I just do my own research or abandon the topic. I think that in big companies you probably could use some internal chats or just ask some smart guy directly? I don't have that kind of connections and all online communities are full of people whose skill is below mine, so it makes little sense to ask something. I still do sometimes, but rarely receive competent answer. If you have some focused topic like a question about small program, of course you can just use github issues or email author directly. But if you have some open question, probably SO is the only generic platform out there. To put it differently, find some experts and ask which online place to the visit to help strangers. Most likely they just don't do it. So for me, personally, LLMs are the saviour. With enough forth and back I can research any topic that doesn't require very deep expertise. Sure, access to an actual expert willing to guide me would be better, but I just don't have that luxury. 201. Wonder if this is a good proxy for '# of Google Searches'. Or perhaps a forward indicator (sign of things to come), since LLMs are adopted by the tech-savvy first, then the general public a little later, so Stack Overflow was among the first casualties. 202. Stack overflow was useful with a fairly sanitized search like “mysql error 1095”. Agentic LLMs do there best work when able to access your entire repository or network environment for context, which is impossible to sanitize. For a season, private environments will continue to be able to use SO. But as LLMs capture all the good questions and keep them private, public SO will become less and less relevant. It’s sad to see a resource of this class go. 203. I just typed the literal phrase "mysql error 1095" into ChatGPT with no context, and it gave an answer that was no worse than SO for the same search. No need to give it anything about my repository, network environment, or even a complete sentence. 204. I think the bigger point we should realize is LLMs offer the EXACT same thing in a better way. Many people are still sharing answers to problems but they do it through an AI which then fine tunes on it and now that problem solution is shared with EVERYONE. Far better method of automated sharing of content 205. I'd still use SO at times if it weren't for how terribly it was managed and moderated. It offers features that LLMs can't, and I actually enjoyed answering questions enough to do it quite often at one time. These days I don't even think about it. 206. It's amazing to think that in the next few years, we may have software engineers entering the workforce who don't know what StackOverflow is... 207. I suspect a lot of the traffic shift is from Google replacing the top search result, which used to be Stack Overflow for programming questions, with a Gemini answer. 208. The result is not surprising! Many people are now turning to LLMs with their questions instead. This explains the decline in the number of questions asked. 209. Has AI summarization led to people either getting their answer from a search engine directly, and failing that, just giving up? 210. StackOverflow cemented my fears of asking questions. Even though there were no results for what I needed, I was too afraid to ask. Good riddance, now I’m never afraid to ask dumb questions to LLM and I’ve learned a lot more with no stress of judgement. 211. "firing up a sandbox VM and testing some solutions" If the LLM can start up a VM and test a solution, to identify a new unique problem, and find it's own solution. That would be pretty impressive. I'm not sure they are really to that point. But some AI's are winning the Math Olympiad, so maybe it is happening. I'm sure this is the overall goal. 212. Maybe the average question will be more "high level" now that all simple questions are answered by LLMs ? 213. Probably similar for google. My first line of search is always chatgpt 214. I doubt it. If I want to ask AI a simple question I type it into Google now. 215. LLMs are dogshit in many ways but when it comes to programming they are faster than people, respond instantaneously to further information, and can iterate until they understand the problem fully. Bonus is that you don’t get some dipshit being snarky. 216. Now that StackOverflow has been killed (in part) by LLMs, how will we train future models? Will public GitHub repos be enough? Precise troubleshooting data is getting rare, GitHub issues are the last place where it lives nowadays. 217. They would just use documentation. I know there is some synthesis they would lose in the training process but I’m often sending Claude through the context7 MCP to learn documentation for packages that didn’t exist, and it nearly always solves the problem for me. 218. Was already dying a decade ago, but AI pretty much guarantees we'll never see a public forum that useful ever again. AI may be fine for people asking the basic stuff or who don't care about maintenance, but for a time SO was the place to find extraordinary and creative solutions that only a human can come up with. When you were in a jam and found a gem on there it not only solved your problem, but brought clarity and deep knowledge to your entire situation in ways that I've never seen an LLM do. It inspired you to refactor the code that got you into that mess to begin with and you grew as a developer. This timeline shows the death of a lot more than just the site itself. 219. It makes sense to see the number of questions decline over time as people google questions and get results. It would be interesting to look at the number of comments and views of questions over time to see if that has declined as LLMs have driven declining engagement and discussion. 220. Why did SO traffic halve from it's maximum till the ChatGPT release date? Also, for a long time after initial release, ChatGPT was pretty much useless for coding questions. It only became more or less useful ~2 years ago. So there's about 4x decline from peak to explain with reasons that do not involve LLMs. What these could be? 221. llm killed stackoverflow 222. This is incredible. Anyone who claims LLMs aren't useful will need to explain how come almost every programmer can solve 95% of his problems with an LLM without needing anything else. This is real usefulness right here. EDIT: I'm not saying I'm loving what happened and what is becoming of our roles and careers, I'm just saying things have changed forever; there's still a (shrinking) minority of people who seem to not be convinced. Write a concise, engaging paragraph (3-5 sentences) that captures the main ideas, notable perspectives, and overall sentiment of these comments regarding the topic. Focus on the most interesting and representative viewpoints. Do not use bullet points or lists - write flowing prose.
LLMs replacing Stack Overflow
222