Summarizer

LLM Input

llm/3fd5f01c-dce0-45f5-821d-a9c655fbe87c/topic-0-f1bb5f56-22ca-4df7-bffd-6a0f32909e96-input.json

prompt

The following is content for you to summarize. Do not respond to the comments—summarize them.

<topic>
AI-Generated Writing Detection # Extensive debate about whether the article was written by AI, with discussion of telltale signs like repetitiveness, fluff language, lack of benchmarks, and 'schmoozing salesman feel'. Some defend calling out AI writing while others find accusations obnoxious.
</topic>

<comments_about_topic>
1. This shows the downside of using AI to write up your project. I see the eloquent sentences, but don't get the message.

> This works, but the actual execution happened outside the model. The model specified the computation, then waited for an external system to carry it out.
> Our transformer also emits a program, but instead of pausing for an external tool, it executes that program itself, step by step, within the same transformer.

What's the benefit? Is it speed? Where are the benchmarks? Is it that you can backprop through this computation? Do you do so?

Why is it good that it's "inside" the model? Just making it more elegant and nice? The tool was already "inside" the overall hybrid system. What's the actual problem?

2. > If you have something to say about the text then say it.

I could point out the individual phrases and describe the overall impression in detail, or I can just compactly communicate the same thing with the word "AI". If it bothers you, read it as "AI-like", so there is no pretension of certainty.

I have no problem with using AI for writing. I do it too, especially for documentation. But you need to read the output, iterate on it, and give the model enough raw input context. If you don't give it info about your actual goals, intentions, judgments, etc., the AI will substitute washed-out, averaged-out, no-meat-on-the-bone fluff. It may sound good at first read and give you a warm wow-effect that makes you hit publish, but that's because you read into it all the context in your head, context your readers don't have.

Formatting and language are cheap now. We need a new culture of calling out sloppy work. Five years ago you would have had no problem calling out a badly composed, rambling article. But today you can easily slap an AI filter on it that makes it look grammatical and feel narratively engaging, so now it's all about the deeper content. And if one points that out, replies can always fall back on "oh, you can't prove that, can you?"

3. It's very hard to discuss this. To some people it's obvious; to others it isn't. To me, every single paragraph is obvious fluff AI writing. One problem with it is the repetitiveness and the schmoozing-salesman feel. The other is the lack of benchmarks and concrete detail. It's both. The two are connected, because the AI has to lean into its bullshitter persona when it isn't given enough raw material to write up something strong. But whenever an AI writes in its default voice like this, it also indicates that the context was not well curated.

But anyway, yes, I can also just move on to the next article. Most of the time I indeed do that.

4. For what it’s worth, I agree with you; the article is LLM-written, though without the usual gotchas, so the tells are more subtle.

The subtle ones like this I don’t mind too much, as long as they get the content correct, which in this case leaves quite a bit to be desired.

I’m also noticing that some people around me appear to just be oblivious to some LLM signals that bother me a lot, so people consume media differently.

I absolutely do believe that AI generated content needs to be called out, although at this point it’s safe to say that pretty much all online content is LLM written.

5. I'm glad they shared too! I just wish they had shared it without letting the LLM process it so heavily. That makes it hard to read: it gives monotone importance to every piece of text, mostly by inflating everything to a slight over-importance with tone and fluff language, and by turning everything into dry statements of fact.

As to why people call this out without going into great detail about the problems with the actual text, it's because this is happening all over the place and it's very disrespectful to readers, who dig into an article that looks very well written on the surface, only to discover it's a lot of labor to decode and often (but not always) a total waste of time. Asking for a critical report of the text is asking even more of a reader who already feels duped.

6. I got the same impression as the parent post. Even if it's not AI-generated, the text reads like a politician's speech in a lot of places. It talks a lot and says little.

The idea itself was very cool, so I endured it. But it was not a pleasant read.

7. What are the AI tells? The only one I found is redundancy, but that makes sense, because the article is trying to be approachable to laymen.

Like, you have a great point (the benefit of this approach isn't explained), but that's a mistake humans frequently make.
</comments_about_topic>

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.

topic

AI-Generated Writing Detection # Extensive debate about whether the article was written by AI, with discussion of telltale signs like repetitiveness, fluff language, lack of benchmarks, and 'schmoozing salesman feel'. Some defend calling out AI writing while others find accusations obnoxious.

commentCount

7
