The rapid decline of StackOverflow signals a massive shift toward LLMs as the primary source of technical knowledge, yet the transition has sparked a "training data paradox": where will future models find high-quality, novel information? Commenters fear that as human-to-human knowledge sharing disappears from the public commons, AI will be forced to ingest its own "slop" or stale data, trapping model capability in a stagnant "eternal 2018." While some suggest that models may eventually rely on private telemetry, GitHub repositories, or synthetic data, there is a pervasive sense of grief over the loss of a peer-reviewed ecosystem that once transformed hard-won human experience into a durable public resource. Without a new mechanism to replenish the digital commons, many worry that the future of AI will be characterized by "enshittified" proprietary models and the slow degradation of reliable, up-to-date information.
Summary of 88 comments tagged with this topic.