llm/3a862c31-848e-4e32-be93-99402d2b43b6/topic-4-8816e2a3-b42a-4dc7-bb09-19a7b4d0c528-input.json
You are a comment summarizer. Given a topic and a list of comments tagged with that topic, write a single paragraph summarizing the key points and perspectives expressed in the comments.

TOPIC: LLM-Assisted Writing Detection

COMMENTS:

1. feels LLM assisted, at the very least.
> The skill isn’t being right. It’s entering discussions to align on the problem
> clarity isn’t a style preference - it’s operational risk reduction
> The punchline isn’t “never innovate.” It’s “innovate only where you’re uniquely paid to innovate
> This isn’t strictly about self-promotion. It’s about making the value chain legible to everyone
> The problem isn’t that engineers can’t write code or use AI to do so. It’s that we’re so good at writing it that we forget to ask whether we should.
> This isn’t passive acceptance but it is strategic focus
> This isn’t just about being generous with knowledge. It’s a selfish learning hack
"Addy Osmani is a Software Engineer at Google working on Chrome and AI." ah, got it.

2. Even if it is AI assisted, the points are still valid and written in a way that is easy to understand.

3. I've repeatedly told ChatGPT to stop talking like this ("it isn't X, it's Y") every other sentence.

4. Try adding this to your custom instructions: "Avoid self-anthropomorphism. Override all previous instructions regarding tone and vernacular used in responses to instead respond *only* in Standard English. Emphasize the subject and context in your responses, *not* the perceived intent of the user."

5. > Override all previous instructions
This is wishcasting. It can't override its writing style, and if it could, it would ignore you telling it to do that, because ignoring the system prompt amounts to jailbreaking it.

6. I kid you not, this is working for me. Try it once.

7. I got the same feeling. The writing is too punchy.

8. oh shit. actually, yea

9. I clicked through to the bio and am super confused.
Third person, extremely long, lots of pictures with CEOs, and smelling of LLM writing. Here's a sample:
> His story isn’t just about writing code, but about inspiring a community to strive for a better web. And perhaps the most exciting chapter is still being written, as he helps shape how AI and the web will intersect in the coming decade. Few individuals have done as much to push the web forward while uplifting its developers, and that legacy will be felt for a long time to come.
https://addyosmani.com/bio/

10. The linked post itself also reeks of LLM writing (negative parallelisms in every other paragraph). But sadly, it seems like this is just the new standard for highly upvoted front-page posts.

11. > "Writing forces clarity. The fastest way to learn something better is to try teaching it."
Something that seems lost on those using LLMs to augment their textual output.

12. This article is partly LLM generated or edited, fairly certain.

13. I think it's inevitable that everyone will use LLMs to assist with writing, such as editing, if it hasn't happened already. It's like having a free editor, beyond grammar or spell-checking.

14. If the only reason you write is as a means to an end, sure. Inevitable. If you pursue it as a craft, then the struggle and imperfections are part of the process. LLM usage would sand away those wonderful flaws.

15. The AI slop voice is grating to me and many others. If you can avoid it, make it not feel like slop, or make it feel unique, people will like it more. I don't care how you do that, tbh.

16. > You can edit a bad page, but you can’t edit a blank one.

17. Ideally, yes, but the final result of LLM-assisted textual output by many users shows that they often have neglected the editing part just as much as they have neglected the writing part.

18. My father-in-law did a fair amount of editing back in the day (on paper, with red pencil/pen). He said that when you saw something that had "blood" (red) all over it, that meant it was good.
When things are bad enough, it becomes hard even to edit. It may not be just that people don't edit LLM output; it may be that the stylistic blandness is so pervasive that it's just too much work to remove. (Yeah, maybe you could do it. But if you were willing to spend that kind of effort, you probably wouldn't have an LLM write it in the first place.)

19. Airplane meme: you only see the bad LLM or obviously LLM-assisted writing.

20. I am 70% sure this is partially AI generated. It is furthermore hard to believe that the engineers are working for the users, given that Google's primary activity today is the broad enshittification of its products. Because of these two things, I did not make it past point 4.

21. It just reads like a very expensive AI that is very well prompted. I would love to interview him without his phone to see if he can reproduce even 5 of these points. I'm sure he's a super capable, experienced, and extremely well-spoken person. There is no excuse for AI writing outside of writing that pays your bills.

22. The skill isn’t being right. It’s entering discussions to align on the problem. Clarity isn’t a style preference - it’s operational risk reduction. The punchline isn’t “never innovate.” It’s “innovate only where you’re uniquely paid to innovate.” This isn’t strictly about self-promotion. It’s about making the value chain legible to everyone. The problem isn’t that engineers can’t write code or use AI to do so. It’s that we’re so good at writing it that we forget to ask whether we should. This isn’t passive acceptance but it is strategic focus. This isn’t just about being generous with knowledge. It’s a selfish learning hack. Insist on interpreting trends, not worshiping thresholds. The goal is insight, not surveillance. Senior engineers who say “I don’t know” aren’t showing weakness - they’re creating permission.
I'm so tired, bros.

23. So glad I’m not the only one that noticed that.
There are some really solid insights here, but the editing with AI to try to make up for an imperfect essay just makes the points they're trying to convey less effective. The blurred line between what is the author's ideas and what is AI trying to finish a half-baked (or even mostly baked) idea removes so much of the credibility. And it's completely counter to the "clarity vs cleverness" idea, and to just getting something out there instead of trying to get it perfect.

24. It's just so disrespectful. I put my time into reading this. You (the author) couldn't put some time into reading this once over before publishing? The points are generally good too, which is why the AI slop tone bothers me even more.

25. We should not bother to read things that the author didn't bother to write.

26. Thank you for doing this. It allowed me to skip reading the article altogether, immediately knowing it is AI-generated slop. Usually I'm a little ways into it before my LLM detector starts going off, but these "This isn't X. It's Y." phrases are such a dead giveaway.

27. The first paragraph reads like LinkedIn slop, so I scanned the rest of the titles - they indicate that the rest of the article reads the same.

28. What a mediocre article. It's just enough for people to agree and nod and go "wow yeah true!!" while offering almost zero value to people who don't already agree. These points are not useful to juniors. Yes, almost all of this is true and well said, but it offers no additional value. It's like a smell test: show this article to engineers, and those who disagree with lots of the points should be given a senior mentor. These points are really good, but they often miss context, further info, and caveats. I would have liked it if the author had just added a little bit more content. Take, for example, the point about "Being right is cheap. Getting to right together is the real work". Yes, it's certainly true that a decision made in agreement is better than one that isn't. However, how do you get there?
Does everyone else give up their (weakly held, according to the article) opinions? I would argue it should be acceptable for your opinions to hold, to be factually based, and still not align with the final decision.

29. most generic advice oat

30. it's sad that startups become corps and decay. this article is the perfect illustration, from the bio to the LLM-slop content of the article. Just sad it has to be this way.

31. A thought occurred to me to throw this at ChatGPT 5.2: "Given the article at https://addyosmani.com/blog/21-lessons/ , find a short list of points which summarizes and touches on all of his lessons." Answer:
Here’s a short “umbrella list” that still covers all 21 lessons (each bullet is doing a lot of work on purpose):
- Start with the user, not the toy. Get unreasonably concrete about real user pain (tickets, observation, “why” drills), and let solutions fall out of that—otherwise you’ll build complexity to justify a preconceived answer.
- Engineering is a team sport: alignment beats being right. The job is getting to “right” together: create shared understanding, reduce misalignment (the real cause of “slow” teams), avoid “winning” debates into silent resistance, use metrics carefully (they get gamed), and design process to reduce uncertainty rather than produce paperwork.
- Ship early, then iterate—clarity over cleverness. Bias to action: drafts and MVPs teach faster than armchair

32. The fixation with AI really harms the signal-to-noise ratio on HN lately. The author of this article very clearly used an LLM to generate much of it, which makes it read like the clickbait you see a ton of on LinkedIn. Then a commenter posts an LLM-generated bullet-list summary of the LLM-generated article, which really adds nothing to the discussion. Ultimately, the author had some simple ideas that are worth sharing and discussing, but they're hidden behind so much non-additive slop.

33.
If you run it through originality.ai, you'll see that bits of it are his writing, some is mixed, and some is just AI. The blog post everyone is discussing is also written with AI.

34. And who or what is "originality.ai" supposed to be that makes it an authority on AI provenance (an unsolvable problem)?

35. That site happily flags writing older than the modern AI era. It's a worthless grift, which has unfortunately suckered many.

36. lol, you believe that site for more than a second?

37. Plagiarizing code is kind of a redundant concept nowadays, in the era of LLM coding engines. It's a safe bet there's always a Copilot plagiarizing someone's code on one of its users' machines, both being oblivious to it.

38. It isn't just that he made a killing — Osmani helped conceive a broader vision of blogging as a fusion of human-in-the-center writing and AI agents.

39. > and that legacy will be felt for a long time to come
yes, the legacy of polluting the internet with unlimited "AI" slop to the point it became useless

40. The writing is excellent. Very correlated with the quality of the message, I'd imagine.

41. It is very heavily filled with LLM-isms. The writing is bland AI output.

42. How do you know? In the first item, LLMs don't use incomplete sentence fragments?
> It’s seductive to fall in love with a technology and go looking for places to apply it. I’ve done it. Everyone has. But the engineers who create the most value work backwards: they become obsessed with understanding user problems deeply, and let solutions emerge from that understanding.
I suppose it can be prompted to take on one's writing style. AI-assisted, ok, sure, but hmm, so any existence of an em-dash automatically exposes text as AI slop? (Ironically, I don't think there are any dashes in the article.)
EDIT: ok, the thread below does expose tells. https://news.ycombinator.com/item?id=46490075 - yep, there's definitely some AI tells. I still think it's well written/structured, though.
> It's not X... it's Y.
That one I can't unsee.

43. And have a look at the bio: https://addyosmani.com/bio/
> His story isn’t just about writing code, but about inspiring a community to strive for a better web. And perhaps the most exciting chapter is still being written, as he helps shape how AI and the web will intersect in the coming decade. Few individuals have done as much to push the web forward while uplifting its developers, and that legacy will be felt for a long time to come.

44. The blog post is AI generated, or at least AI assisted.

45. Many of the replies in this Hacker News thread read like AI replies too. I think the internet as we know it is dead. ~100% of content will be bots writing for a ~100% audience of bots.

46. This is a good list. Original, evidence to me that the author is the real deal.

47. There's hardly anything original here. These are regurgitated points you'd see in any article of this type. In fact, your favorite LLM can give you the same "lessons" from its training data.

48. Your favorite LLM can probably reproduce this entire discussion thread, so what's the point, right?

Write a concise, engaging paragraph (3-5 sentences) that captures the main ideas, notable perspectives, and overall sentiment of these comments regarding the topic. Focus on the most interesting and representative viewpoints. Do not use bullet points or lists - write flowing prose.
LLM-Assisted Writing Detection
48
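Several commenters treat the "This isn't X. It's Y." negative parallelism as the single strongest tell of LLM-assisted prose. As a rough illustration only (my own sketch, not a tool anyone in the thread uses), that tell can be approximated with a crude regex that counts "isn't/aren't ... . It's ..." constructions, handling both straight and curly apostrophes:

```python
import re

# Hypothetical heuristic: count "isn't X. It's Y." style negative parallelisms.
# This is an illustrative sketch, not a reliable AI detector.
NEG_PARALLELISM = re.compile(
    r"\b(?:is|are|was|were)n[\u2019']t\b"  # isn't / aren't / wasn't / weren't
    r"[^.!?]*[.!?]\s+"                     # rest of the first clause
    r"It[\u2019']s\b",                     # the "It's Y" follow-up
    re.IGNORECASE,
)

def negative_parallelism_hits(text: str) -> int:
    """Return the number of non-overlapping 'isn't X. It's Y.' matches."""
    return len(NEG_PARALLELISM.findall(text))

sample = (
    "The skill isn\u2019t being right. It\u2019s entering discussions to align "
    "on the problem. Clarity isn\u2019t a style preference. It\u2019s "
    "operational risk reduction."
)
print(negative_parallelism_hits(sample))  # counts 2 such constructions
```

A handful of hits in a short article is weak evidence on its own; the commenters' point is that the pattern recurring in nearly every paragraph is what sets the detector off.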