Summarizer

AI-generated content concerns

← Back to Total monthly number of StackOverflow questions over time

The prevailing sentiment regarding AI-generated content is one of deep skepticism rooted in frequent "hallucinations," outdated technical solutions, and over-zealous safety refusals that hinder legitimate technical work. Many contributors express a profound fear of "poisoning the well," arguing that the replacement of human-driven platforms like Stack Overflow with AI-generated "slop" creates a feedback loop of degrading information that threatens the long-term quality of the internet. While some developers find AI useful for rapid prototyping and code optimization when given sufficient context, others argue that it undermines the educational value of problem-solving and risks turning human experts into unpaid "AI gardeners" who must tediously curate machine errors. Ultimately, these perspectives reflect a sense of loss for the collaborative era of the internet, fearing a future where niche expertise is sidelined by black-box predictors that prioritize sounding correct over being accurate.

69 comments tagged with this topic

View on HN · Topics
I swear that about 3 of your replies look like LLM content or at best "LLM-massaged" messages :-(
View on HN · Topics
I was writing like a robot before robots could write, dammit!
View on HN · Topics
> Sad now though, since LLMs have eaten this pie.
By regenerating an answer on command and never caring about the redundancy, yeah. The DRY advocate within me weeps.
View on HN · Topics
> Privacy concerns notwithstanding, one could argue for having LLMs with us every step of the way - coding agents, debugging, devops tools etc.
That might work until an LLM encounters a question it's programmed to regard as suspicious for whatever reason. I recently wanted to exercise an SMTP server I've been configuring, and wanted to do it with an expect script, which I don't do regularly. Instead of digging through the docs, I asked Google's Gemini (whatever's the current free version) to write a bare-bones script for an SMTP conversation. It flatly refused. The explanation was along the lines of "it could be used for spamming, so I can't do that, Dave." I understand the motivation, and can even sympathize a bit, but what are the options for someone who has a legitimate need for an answer? I know how to get one by other means; what's the end game when it's LLMs all the way down? I certainly don't wish to live in such a world.
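For what it's worth, the conversation Gemini refused to script is only a handful of commands. A minimal sketch in Python rather than expect - the hostnames and addresses are placeholders, and `exercise_server` assumes a server listening locally:

```python
import smtplib

def smtp_dialogue(sender, recipient, body):
    """Build the client side of a bare-bones SMTP conversation.

    Returns the lines a client sends, in order; a lone '.' ends DATA.
    """
    return [
        "EHLO test.example.com",
        f"MAIL FROM:<{sender}>",
        f"RCPT TO:<{recipient}>",
        "DATA",
        body,
        ".",        # terminates the DATA section
        "QUIT",
    ]

def exercise_server(host="localhost", port=25):
    """Drive the same conversation against a real server via smtplib."""
    with smtplib.SMTP(host, port) as s:
        s.set_debuglevel(1)  # echo every command and reply to stderr
        s.sendmail("me@example.com", "you@example.com",
                   "Subject: test\r\n\r\nhello")
```

With `set_debuglevel(1)`, smtplib prints each command and the server's reply, which is exactly the "exercise the server" behavior the commenter wanted from expect.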
View on HN · Topics
1.5 years ago Gemini (the same brand!) refused to provide C++ help to minors because C++ is dangerous: https://news.ycombinator.com/item?id=39632959
View on HN · Topics
The problem that you worked out is only really useful if it can be recreated and validated, which in many cases it can be by using an LLM to build the same system and write tests that confirm the failure and the fix. Your response telling the model that its answer worked is more helpful for measuring your level of engagement, not so much for evaluating the solution.
View on HN · Topics
It's funny, because I had a similar question but wanted to be able to materialize a view in Microsoft SQL Server, and ChatGPT went around in circles suggesting invalid solutions. There were about 4 possibilities that I had tried before going to ChatGPT, it went through all 4, then when the fourth one failed it gave me the first one again.
View on HN · Topics
> this nails it
I assume you're talking about the ending where gippity tells you how awesome you are and then spits out a wrong answer?
View on HN · Topics
If the LLM is also writing the documentation, because the developers surely don’t want to, I’m not sure how well this will work out. I have some co-workers who have tried to use Copilot for their documentation (because they never write any and I’m constantly asking them questions as a result), and the results were so bad they actually spent the time to write proper documentation. It failed successfully, I suppose.
View on HN · Topics
Indeed, how documentation is written is key. Funny enough, I have been a strong advocate that documentation should always be written in Reference Docs style, optionally with additional Scenario Docs. The former is to be consumed by engineers (and now LLMs), while the latter is to be consumed by humans. Scenario Docs, or use-case docs, are what millions of blog articles were made of in the early days; then we turned to Stack Overflow questions/answers, then companies started writing documentation in this format too. Lots of Quick Starts for X, Y, and Z scenarios using technology K. Some companies gave up completely on writing reference documentation, which would allow engineers to understand the fundamentals of technology K and then be able to apply them to X, Y, and Z. But now with LLMs, we can certainly go back to writing Reference Docs only, and let LLMs do the extra work on scenario-based docs. Can they still hallucinate? Sure. But they will likely get most beyond-basic-maybe-not-too-advanced scenarios right in the first shot. As for using LLMs to write docs: engineers should be reviewing that as much as they should be reviewing the code generated by AI.
View on HN · Topics
"In this imaginary world where everything is perfect and made to be consumed by LLMs, LLMs are the best tool for the job".
View on HN · Topics
> world where everything is perfect and made to be consumed by LLMs
I believe the parent poster was clearly and specifically talking about software documentation that is strong and LLM-consumption-friendly, not "everything".
View on HN · Topics
Yeah, old news? It's how it is today with humans. You SHOULD be making things in a human/LLM-readable format nowadays anyway if you're in tech; it'll serve you well, with AIs citing what you write and content aggregators - like search engines - giving it more preferential scores.
View on HN · Topics
> Like you go to some question and the accepted answer with the most votes is for a ten-year-old version of the technology.
This is still a problem with LLMs as a result. The bigger problem is that now the LLM doesn't show you it was a 10-year-old solution; you have to try it, watch it fail, then find out it's old, and ask for a more up-to-date example, then watch it flounder around. I've experienced this more times than I can count.
View on HN · Topics
Then you're doing it wrong? I'd need to see a few examples, but this is easily solved by giving the LLM more context - any, really. Give it the version number, give it a URL to a doc. Better yet, git clone the repo and tell it to reference the source. Apologies for using you as an example, but this is a common theme among people who slam LLMs. They ask it a specific/complex question with little context and then complain when the answer is wrong.
View on HN · Topics
I’ve specified many of these things and still had it fall on its face. And at some point, I’m providing so much detail that I may as well do it myself, which is ultimately what ends up happening. Also, it seems assuming the latest version would make much more sense than assuming a random version from 10 years ago. If I was handing work off to another person, I would expect to only need to specify the version if it was down level, or when using the latest stable release.
View on HN · Topics
This is exactly the issue that most people run into and it's literally the GIGO principle that we should all be familiar with by now. If your design spec amounts to "fix it" then don't be surprised at the results. One of the major improvements I've noticed in Claude Code using Opus 4.5 is that it will often read the source of the library we're using so that it fully understands the API as well as the implementation. You have to treat LLMs like any other developer that you'd delegate work to and provide them with a well thought out specification of the feature they're building or enough details about how to reproduce a bug for them to diagnose and fix it. If you want their code to conform to the style you prefer then you have to give them a style guide and examples or provide a linter and code formatter and let them know how to run it. They're getting better at making up for these human deficits as more and more of these common failure cases are recorded but you can get much better output now by simply putting some thought into how you use them.
View on HN · Topics
Have you tried using context7 or a similar MCP to have the agent automatically fetch up to date documentation?
View on HN · Topics
> > what happens now?
I'll tell you what happens now: LLMs continue to regurgitate and iterate and hallucinate on the questions and answers they ingested from S.O. - 90% of which are incorrect. LLM output continues to poison itself as more and more websites spring up recycling outdated or incorrect answers, and no new answers are given, since no one wants to waste the time to ask a human a question and wait for the response. The overall intellectual capacity sinks to the point where everything collaboratively built falls apart. The machines don't need AGI to take over; they just need to wait for us to disintegrate out of sheer laziness, sloth and self-righteousness.
Okay - there was always a needy component to Stack Overflow. "I have to pass an exam, what is the best way to write this algorithm?" and shit like that. A lazy component. But to be honest, it was the giving of information - which forced you to think, and research, and answer correctly - which made systems like S.O. worthwhile, even if the questioners were lazy idiots sometimes.
And now, the apocalypse. Babel. The total confusion of all language. No answer which can be trusted, no human in the loop, not even a smart AI, just a babbling set of LLMs repeating Stack Overflow answers from 10 years ago. That's the fucking future. "Things are gonna slide / in all directions / won't be nothin' you can measure anymore. The blizzard of the world has crossed the threshold and it's overturned the order of the soul."[0]
[0] https://www.youtube.com/watch?v=8WlbQRoz3o4
View on HN · Topics
> - I know I'm beating a dead horse here, but what happens now?
Despite the stratification I mentioned above, SO was by far the leading source of high-quality answers to technical questions. What do LLMs train off of now? I wonder if, 10 years from now, LLMs will still be answering questions that were answered in the halcyon 2014-2020 days of SO better than anything that came after? Or will we find new, better ways to find answers to technical questions? To me this shows just how limited LLMs are. Hopefully more people realize that LLMs aren't as useful as they seem, and in 10 years they're relegated to sending spam and generating marketing websites.
View on HN · Topics
The meta post describing the policy of banning AI-generated answers from the site ( https://meta.stackoverflow.com/questions/421831 ) is the most popular of all time. Company interference with moderator attempts to enforce that policy led to a moderator strike. The community is vehemently against the company's current repeated attempts to sneak AI into the system, which have repeatedly produced embarrassing results (see for example https://meta.stackoverflow.com/questions/425081 and https://meta.stackoverflow.com/questions/425162 ; https://meta.stackoverflow.com/questions/427807 ; https://meta.stackoverflow.com/questions/425766 etc.). What you propose is a complete non-starter.
View on HN · Topics
Your first example is a public announcement of an LLM-assisted ask-question form. A detailed request for feedback on an experiment isn't "sneaking", and the replies are a tire fire of stupidity. One of your top complaints about users in this thread is that they ask the wrong sort of questions, so AI review seems like it should be useful. The top voted answer asks why SO is even trying to improve anything when there's a moderator strike on. What is this, the 1930s? It's a voluntary role; if you don't like it, just don't do it. The second top voted answer says "I was able to do a prompt injection and make it write me SQL with an injection bug". So? It also complains that the LLM might fix people's bad English, meaning they ask the wrong question, lol. It seems clear these people started from a belief that AI is always bad, and worked backwards to invent reasons why this specific feature is bad. It's crazy that you are defending this group all over this HN thread, telling people that toxicity isn't a problem. I've not seen such a bitchy passive-aggressive thread in years. Those replies are embarrassing for the SO community, not AI.
View on HN · Topics
> will we find new, better ways to find answers to technical questions?
I honestly don't think they need to. As we've seen so far, for most jobs in this world, answers that sound correct are good enough. Is chasing more accuracy a good use of resources if your audience can't tell the difference anyway?
View on HN · Topics
How does that work if there's no new data for them to train on, only AI slurry?
View on HN · Topics
"A Human commented at ##:##pm" "An AI Bot commented at..." "A suspected AI Bot commented at..." "An unconfirmed Human commented at..."
View on HN · Topics
Fixing loads of LLM-generated content is neither easy nor fun. You'll have a very hard time getting people to do that.
View on HN · Topics
Hardly.
- A huge number of developers will want to use such a tool. Many of them are already using AI in a "single player" experience mode.
- 80% of the answers will be correct when one-shot for questions of moderate difficulty.
- The long tail of "corrector" / "wiki gardening" / pedantic types will fix the errors. Especially if you gamify it.
Just because someone doesn't like AI doesn't mean the majority share the same opinion. AI products are the fastest growing products in history. ChatGPT has over a billion MAUs. It's effectively won over all of humanity. I'm not some vibe coder. I've been programming since the 90's, including on extremely critical multi-billion dollar daily transaction volume infra, yet I absolutely love AI. The models have lots of flaws and shortcomings, but they're incredibly useful and growing in capability and scope -- I'll stand up and serve as your counter example.
View on HN · Topics
People answer on SO because it's fun. Why should they spend their time fixing AI answers? It's very tedious, as the kind of mistakes LLMs make can be rather subtle, and AI can generate a lot of text very fast. It's a Sisyphean task; I doubt enough people would do it.
View on HN · Topics
I just think you could save a lot of money and energy doing all this but skipping the LLM part? Like what is supposed to be gained? The moment/act of actual generation of lines of code or ideas, whether human or not, is a much smaller piece of the pie relative to ongoing correction, curation, etc (like you indicate). Focusing on it and saying it intrinsically must/should come from the LLM mistakes the intrinsically ephemeral utility of the LLMs and the arguably eternal nature of the wiki at the same time. As sibling says, it turns it into work vs the healthy sharing of ideas. The whole pitch here just feels like putting gold flakes on your pizza: expensive and would not be missed if it wasn't there. Just to say, I'm maybe not as experienced and wise I guess but this definitely sounds terrible to me. But whatever floats your boat I guess!
View on HN · Topics
LLMs don't experience the world, so they have no reason a priori to know what is or isn't truthful in the training data. (Not to mention the confabulation. Making up API method names is natural when your model of the world is that the method names you've seen are examples and you have no reason to consider them an exhaustive listing.)
View on HN · Topics
Direct enshittification is intentional and wouldn’t affect open models. Indirect pollution via AI slop in the input and the same content manipulation mechanisms as SEO hacking is still a threat for open models.
View on HN · Topics
Ironically they could probably do some really useful deduplication/normalization/search across questions and answers using AI/embeddings today, if only they’d actually allowed people to ask the same questions infinite different ways, and treated the result of that as a giant knowledge graph. I was into StackOverflow in the early 2010s but ultimately stopped being an active contributor because of the stupid moderation.
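The deduplication idea above is mechanically simple. A toy sketch - bag-of-words cosine similarity stands in for real embeddings here, and the 0.8 threshold is an invented placeholder, not a recommendation:

```python
from collections import Counter
from math import sqrt

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two questions."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def find_duplicates(new_q: str, existing: list[str],
                    threshold: float = 0.8) -> list[str]:
    """Return existing questions likely asking the same thing."""
    return [q for q in existing if similarity(new_q, q) >= threshold]
```

A real system would swap `similarity` for a sentence-embedding model plus a vector index, but the flow - embed the new question, retrieve nearest neighbors, link them into one knowledge graph node - is the same.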
View on HN · Topics
And everything is “fact checked” by the Grok LLM. Which… Yeah… https://en.wikipedia.org/wiki/Grok_(chatbot)#Controversies
View on HN · Topics
StackOverflow is famously obnoxious about questions badly asked, badly categorized, duplicated… It's actually a topic on which StackOverflow would benefit from AI A LOT. Imagine StackOverflow rebrands itself as the place where you can ask the LLM and it benefits the world, correctly rephrasing the question behind the scenes and creating public records for it.
View on HN · Topics
The company tried this. It fell through immediately. So they went away, and came back with a much improved version. It also fell through immediately. Turns out, this idea is just bad: LLMs can't rephrase questions accurately, when those questions are novel, which is precisely the case that Stack Overflow needs. For the pedantic: there were actually three attempts, all of which failed. The question title generator was positively received ( https://meta.stackexchange.com/q/388492/308065 ), but ultimately removed ( https://meta.stackoverflow.com/q/424638/5223757 ) because it didn't work properly, and interfered with curation. The question formatting assistant failed obviously and catastrophically ( https://meta.stackoverflow.com/a/425167/5223757 ). The new question assistant failed in much the same ways ( https://meta.stackoverflow.com/a/432638/5223757 ), despite over a year of improvements, but was pushed through anyway.
View on HN · Topics
This is an excellent piece of information that I didn't have. If the company with the most data can't succeed, then it seems like a really hard problem. As a side benefit, they can now understand why humans couldn't do it either.
View on HN · Topics
Seriously where will we get this info anymore? I’ve depended on it for decades. No matter how obscure, I could always find a community that was talking about something I needed solved. I feel like that’s getting harder and harder every year. The balkanization of the Internet + garbage AI slop blogs overwhelming the clearly declining Google is a huge problem.
View on HN · Topics
Has anyone tried building a modern Stack Overflow that's actually designed for AI-first developers? The core idea: question gets asked → immediately shows answers from 3 different AI models. Users get instant value. Then humans show up to verify, break it down, or add production context. But flip the reputation system: instead of reputation for answers, you get it for catching what's wrong or verifying what works. "This breaks with X" or "verified in production" becomes the valuable contribution. Keep federation in mind from day one (did:web, did:plc) so it's not another closed platform. Stack Overflow's magic was making experts feel needed. They still do—just differently now.
View on HN · Topics
Oh, so it wasn't bad enough to spot bad human answers as an expert on Stack Overflow... now humans should spend their time spotting bad AI answers? How about a model where you ask a human and no AI input is allowed, to make sure that everyone has everyone else's full attention?
View on HN · Topics
Why disallow AI input? Is it that poor? Surely it isn't.
View on HN · Topics
The entire purpose of answering questions as an "expert" on S.O. is/was to help educate people who were trying to learn how to solve problems mostly on their own. The goal isn't to solve the immediate problem, it's to teach people how to think about the problem so that they can solve it themselves the next time. The use of AI to solve problems for you completely undermines that ethos of doing it yourself with the minimum amount of targeted, careful questions possible.
View on HN · Topics
What's the point of AI on a site like that? Wouldn't you just ask an LLM directly if you were fine with AI answers?
View on HN · Topics
You're absolutely correct, but the scary thing is this: What happens when a whole generation grows up not knowing how to answer another person's question without consulting AI? [edit] It seems to me that this is a lot like the problem which bar trivia nights faced around the inception of the smartphone. Bar trivia nights did, sporadically and unevenly, learn how to evolve questions themselves which couldn't be quickly searched online. But it's still not a well-solved problem. When people ask "why do I need to remember history lessons - there is an encyclopedia", or "why do I need to learn long division - I have a calculator", I guess my response is: Why do we need you to suck oxygen? Why should I pay for your ignorance? I'm perfectly happy to be lazy in my own right, but at least I serve a purpose. My cat serves a purpose. If you vibe code and you talk to LLMs to answer your questions...I'm sorry, what purpose do you serve?
View on HN · Topics
I and many others already go the extra mile to ask multiple LLMs hard questions, or to get a diversity of AI opinions to then internalize and cross-check ourselves. There are apps that have built up a nice-sized user base on this small added convenience of getting 2 answers at once. REF https://lmarena.ai/ https://techcrunch.com/2025/05/21/lm-arena-the-organization-... All the major AI companies of course do not want to give you the answers from other AIs, so this service needs to be a third party. But then beyond that, there are hard/niche questions where the AIs are often wrong and humans also have a hard time getting it right, but with a larger discussion and multiple minds chewing the problem, one can often get to a more correct answer by process of elimination. I encountered this recently in a niche non-US insurance project, and I basically coded together the above as an internal tool. AI suggestions + human collaboration to find the best answer. Of course in this case everyone is getting paid to spend time with this thing, so it's more like an AI-first Stack Overflow Internal. I have no evidence that a public version would do well when people don't get paid to comment and rate.
View on HN · Topics
Am I reading an AI trying to trick me into becoming its subordinate?
View on HN · Topics
I had a conversation with a couple of accountants / tax-advisor types about them participating in something like this for their specialty. And the response was actually 100% positive, because they know that there is a part of their job that the AI can never take: 1) filings require you to have a human with a government-approved license; 2) there is hidden information about which tax optimizations are higher or lower risk, based on what they know from their other clients; 3) humans want another human to make them feel good that their tax situation is well taken care of. But many also said it would be better to wrap this in an agency, so the leads generated from the AI accounting questions only go to a few people instead of being fully public, StackExchange-like. So +1 point, -1 point for the idea of a public version.
View on HN · Topics
hehe, damn I did let an AI fix my grammer and they promptly put the classic tell of — U+2014 in there
View on HN · Topics
That seems like a horrible core idea. How is that different from data labeling or model evaluation? Human beings want to help out other human beings, spread knowledge, and might want to get recognition for it. Manually correcting (3 different) automation efforts seems like incredibly monotonous, unrewarding labour in a race to the bottom. Nobody should spend their time correcting AI models without compensation.
View on HN · Topics
I think this could be really cool, but the tricky thing would be knowing when to use it instead of just asking the question directly to whichever AI. It’s hard to know that you’ll benefit from the extra context and some human input unless you already have a pretty good idea about the topic.
View on HN · Topics
Presumably over time said AI could figure out if your question had already been answered, and in that case would just redirect you to the old thread instead.
View on HN · Topics
AI is generally set up to return the "best" answer, defined as the most common answer - not the most correct, efficient, or effective answer, unless the underlying data leans that way. It's why AI-based web search isn't behaving like Google-based search. People clicking on the best results really was a signal for Google about which solution was being sought. Generally, I don't know that LLMs are covering this type of feedback loop.
View on HN · Topics
Thanks for sharing that; it was simple, neat, elegant. This sent me down a rabbit hole -- I asked a few models to solve that same problem, then followed up with a request to optimize it so it runs more efficiently. ChatGPT's & Gemini's solutions were buggy, but Claude solved it, and actually found a solution that is even more efficient: it only needs to compute sqrt once per iteration. It's more complex, however.

                    yours    claude
    Time (ns/call)  40.5     38.3
    sqrt per iter   3        1
    Accuracy        4.8e-7   4.8e-7

Claude's trick: instead of calling sin/cos each iteration, it rotates the existing (cos, sin) pair by the small Newton step and renormalizes:

    // Rotate (c,s) by angle dt, then renormalize to unit circle
    float nc = c + dt*s, ns = s - dt*c;
    float len = sqrt(nc*nc + ns*ns);
    c = nc/len; s = ns/len;

See: https://gist.github.com/achille/d1eadf82aa54056b9ded7706e8f5...

P.S.: it seems like Gemini has disabled the ability to share chats -- can anyone else confirm this?
View on HN · Topics
Nice, that worked. It's even faster.

                  yours   yours+opt   claude
    Time (ns)     40.9    36.4        38.7
    sqrt/iter     3       2           1
    Instructions  207     187         241

Edit: it looks like the Claude algorithm fails at high eccentricities. Gave ChatGPT Pro more context and it worked for 30 min and only made a marginal improvement on yours, by doing 2 steps then taking a third local step. https://gist.github.com/achille/23680e9100db87565a8e67038797...
View on HN · Topics
Models are NOT search engines. Even if LLMs were trained on the answer, that doesn't mean they'll ever recommend it. Regardless of how accurate it may be. LLMs are black box next token predictors and that's part of the issue.
View on HN · Topics
Why did SO decide to do that to us? To not invest in AI and then, IIRC, claim ownership of our contributions. I sometimes go back to answers I gave, even ones where I answered my own question.
View on HN · Topics
Decide to do what? SO didn't claim contributions. They're still CC BY-SA https://stackoverflow.com/help/licensing AFAICT all they did was stop providing dumps. That doesn't change the license. I was very active. In fact, I'm actually upset at myself for spending so much time there. That said, I always thought I was getting fair value. They provided free hosting; I got answers and got to contribute answers for others.
View on HN · Topics
The new owners (well, not really new any more) are so focused on adding AI to SO because it's the current hotness, and on making other changes to try to extract more money, that they're completely ignoring the community's issues and objections to their changes, which tend to be half-assed and full of bugs.
View on HN · Topics
This is horrifying. When I need a question answered I usually refer to S.O., though more recently I've taken suggestions from LLM models that were obviously trained on S.O. data. And all other web results for "how do you change the scroll behavior on..." or "SCSS for media query on..." lead to a hundred fake websites with pages generated by LLMs based on old answers. Destroying S.O. as a question/answer source leaves only the LLMs to answer questions. That's why it's horrific.
View on HN · Topics
I do use Claude a lot, but I still regularly ask questions on https://bioinformatics.stackexchange.com/ . It's often just too niche, LLMs hallucinate stuff like an entire non-existent benchmarking feature in Snakemake, or can't explain how I should get transcriptome aligners to give me correct quantifications for a transcript. I guess it's too niche. And as a lonely Bioinformatician it can be nice to get confirmation from other bioinformaticians. Looking back at my Stack Exchange/Stack Overflow (never really got the difference) history, my earlier, more general programming questions from when I just started are all no-brainers for any LLM.
View on HN · Topics
So the question for me is: how important was SO to training LLMs? Because now that SO is basically no longer being updated, we've lost the new material to train on. Instead, we need to train on documentation and other LLM output. I'm no expert on this subject, but it seems like the quality of LLMs will degrade over time.
View on HN · Topics
It has often been claimed, and even shown, that training LLMs on their own outputs will degrade the quality over time. I myself find it likely that on well-measurable domains, RLVR improvements will dominate "slop" decreases in capability when training new models.
View on HN · Topics
Users could upvote whether Claude, Gemini or ChatGPT provided the best answer. The best of three is surfaced, the others are hidden behind a "show alternatives." However, I can see how this would be labelled "shoving AI into everything" and "I'm not on SO for AI."
View on HN · Topics
If by "body-slammed" you mean "trained on SO user data while violating the terms of the CC BY-SA license", then sure. In the best case scenario, LLMs might give you the same content you were able to find on SO. In the common scenario, they'll hallucinate an answer and waste your time. What should worry everyone is what system will come after LLMs. Data is being centralized and hoarded by giant corporations, and not shared publicly. And the data that is shared is generated by LLMs. We're poisoning the well of information with no fallback mechanism.
View on HN · Topics
> If by "body-slammed" you mean "trained on SO user data while violating the terms of the CC BY-SA license", then sure. You know that's not what they meant, but why bring up the license here? If they were over the top compliant, attributing every SO answer under every chat, and licensing the LLM output as CC BY-SA, I think we'd still have seen the same shift. > In the best case scenario, LLMs might give you the same content you were able to find on SO. In the common scenario, they'll hallucinate an answer and waste your time. Best case it gives you the same level of content, but more customized, and faster. SO being wrong and wasting your time is also common.
View on HN · Topics
It is indeed a shame. If you are doing anything remotely new and novel, which is essential if you want to make a difference in an increasingly competitive field, LLMs confidently leave you with non-working solutions, or sometimes worse, they set you on the wrong path. I had similar worries in the past about indexable forums being replaced by Discord servers. The current situation is even worse.
View on HN · Topics
When you see AI giving you back various coding snippets almost verbatim from SO, it really makes you wonder what will happen in the future with AI when it can't depend on actual humans doing the work first.
View on HN · Topics
This is a great example of how free content was exploited by LLMs and turned against its own creators, to their ultimate destruction. Every content creator should be terrified of leaving their content out for free, and I think it will bring on a new age of permanent paywalls and licensing agreements for Google and others, with particular ways of forcing page clicks through to the original content creators.
View on HN · Topics
StackOverflow was immediately dead for me the day they declared that AI sellout of theirs. Pathetic thieves, they won't even allow deleting my own answers after that. Not that it would make the models unlearn my data, of course, but I wanted to do so out of principle. https://meta.stackexchange.com/questions/399619/our-partners...
View on HN · Topics
Aren't a lot of projects using LLMs to generate documentation these days?