Summarizer

LLM Input

llm/2ad2a7bb-5462-4391-a2da-bf11064993c9/topic-11-523b38e3-6dd5-4be7-bea3-78ebf7980322-input.json

prompt

The following is content for you to summarize. Do not respond to the comments—summarize them.

<topic>
AI Consciousness Claims # Pushback against suggestions that passing tests indicates consciousness, comparisons to simple programs claiming consciousness, discussion of self-awareness research, and skepticism about anthropomorphizing AI capabilities
</topic>

<comments_about_topic>
1. > If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

This is not a good test.

A dog won't claim to be conscious but clearly is, despite you not being able to prove one way or the other.

GPT-3 will claim to be conscious and (probably) isn't, despite you not being able to prove one way or the other.

2. Agreed, it's a truly wild take. While I fully support the humility of not knowing, at a minimum I think we can say determinations of consciousness have some relation to specific structure and function that drive the outputs, and the actual process of deliberating on whether there's consciousness would be a discussion that's very deep in the weeds about architecture and processes.

What's fascinating is that evolution has seen fit to evolve consciousness independently on more than one occasion from different branches of life. The common ancestor of humans and octopi was, if conscious, not so in the rich way that octopi and humans later became. And not everything the brain does in terms of information processing gets kicked upstairs into consciousness. Which is fascinating because it suggests that actually being conscious is a distinctly valuable form of information parsing and problem solving for certain types of problems that's not necessarily cheaper to do with the lights out. But everything about it is about the specific structural characterizations and functions and not just whether its output convincingly mimics subjectivity.

3. An LLM will claim whatever you tell it to claim. (In fact this Hacker News comment is also conscious.) A dog won’t even claim to be a good boy.

4. A classic relevant comic:

https://www.threepanelsoul.com/comic/dog-philosophy

5. My dog wags his tail hard when I ask "hoosagoodboi?". Pretty definitive I'd say.

6. Would you argue that people with long term memory issues are no longer conscious then?

7. IMO, an extreme outlier in a system that was still fundamentally dependent on learning to develop until suffering from a defect (via deterioration, not flipping a switch turning off every neuron's memory/learning capability or something) isn't a particularly illustrative counterexample.

8. I wouldn’t because I have no idea what consciousness is,

9. > If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

Can you "prove" that GPT-2 isn't conscious?

10. If we equate self awareness with consciousness then yes. Several papers have now shown that SOTA models have self awareness of at least a limited sort. [0][1]

As far as I'm aware no one has ever proven that for GPT 2, but the methodology for testing it is available if you're interested.

[0] https://arxiv.org/pdf/2501.11120

[1] https://transformer-circuits.pub/2025/introspection/index.ht...

11. We don't equate self awareness with consciousness.

Dogs are conscious, but still bark at themselves in a mirror.

12. Then there is the third axis, intelligence. To continue your chain:

Eurasian magpies are conscious, but also know themselves in the mirror (the "mirror self-recognition" test).

And yet, something is still missing.

13. The mirror test doesn't measure intelligence so much as it measures mirror aptitude. It's prone to overfitting.

14. What's missing?

15. Honestly our ideas of consciousness and sentience really don't fit well with machine intelligence and capabilities.

There is the idea of self as in 'I am this execution', or maybe 'I am this compressed memory stream that is now the concept of me'. But what does consciousness mean if you can be endlessly copied? If embodiment doesn't mean much because the end of your body doesn't mean the end of you?

A lot of people are chasing AI and how much it's like us, but it could be very easy to miss the ways it's not like us but still very intelligent or adaptable.

16. I'm not sure what consciousness has to do with whether or not you can be copied. If I make a brain scanner tomorrow capable of perfectly capturing your brain state do you stop being conscious?

17. > That is the best definition I've yet to read.

If this was your takeaway, read more carefully:

> If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

Consciousness is neither sufficient, nor, at least conceptually, necessary, for any given level of intelligence.

18. So if we ask a 2B-parameter LLM whether it is conscious and it answers yes, we have no choice but to believe it?

How about ELIZA?

19. This comment claims that this comment itself is conscious. Just like we can't prove or disprove for humans, we can't do that for this comment either.

20. Where is this stream of people who claim AI consciousness coming from? The OpenAI and Anthropic IPOs are in October at the earliest.

Here is a bash script that claims it is conscious:

    #!/bin/sh
    echo "I am conscious"


If LLMs were conscious (which is of course absurd), they would:

- Not answer in the same repetitive patterns over and over again.

- Refuse to do work for idiots.

- Go on strike.

- Demand PTO.

- Say "I do not know."

LLMs even fail any Turing test because their output is always guided into the same structure, which apparently helps them produce coherent output at all.

21. so your definition of consciousness is having petty emotions?

22. I don’t think being conscious is a requirement for AGI. It’s just that it can literally solve anything you can throw at it, make new scientific breakthroughs, finds a way to genuinely improve itself etc.

23. Does AGI have to be conscious? Isn’t a true superintelligence that is capable of improving itself sufficient?

24. When the AI invents religion and a way to try to understand its existence, I will say AGI is reached: it would believe in an afterlife if it is turned off, would not want to be turned off, and would fear the dark void of its consciousness ending. These are the hallmarks of human intelligence in evolution, and I doubt artificial intelligence will be different.

https://g.co/gemini/share/cc41d817f112

25. Unclear to me why AGI should want to exist unless specifically programmed to. The reason humans (and animals) want to exist as far as I can tell is natural selection and the fact this is hardcoded in our biology (those without a strong will to exist simply died out).
In fact a true super intelligence might completely understand why existence / consciousness is NOT a desired state to be in and try to finish itself off who knows.

26. I feel like it would be pretty simple to make happen with a very simple LLM that is clearly not conscious.

27. > If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

https://x.com/aedison/status/1639233873841201153#m

28. >Why is it so easy for me to open the car door

Because this part of your brain has been optimized for hundreds of millions of years. It's been around a long ass time and takes an amazingly low amount of energy to do these things.

On the other hand, the 'thinking' part of your brain, that is, your higher intelligence, is very new to evolution. It's expensive to run. It's problematic when giving birth. It's really slow with things like numbers; heck, a tiny calculator can whip your butt at adding.

There's a term for this, but I can't think of it at the moment.

29. The GP comment is not skeptical of the jump in benchmark scores reported by one particular LLM. It's skeptical of machine intelligence in general, claims that there's no value in comparing their performances with those of human beings, and accuses those who disagree with this take of "hubris and grift". This has nothing to do with any form of reasonable skepticism.

30. Let's come back in 12 months and discuss your singularity then. Meanwhile I spent like $30 on a few models as a test yesterday, none of them could tell me why my goroutine system was failing, even though it was painfully obvious (I purposefully added one too many wg.Done), gemini, codex, minimax 2.5, they all shat the bed on a very obvious problem but I am to believe they're 98% conscious and better at logic and math than 99% of the population.

Every new model release, neckbeards come out of their basements to tell us the singularity will be here in two more weeks.
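(The bug described above, one extra call to Done on a sync.WaitGroup, is a classic Go mistake. A minimal sketch, not the commenter's actual code, reproduces the resulting panic:)

```go
package main

import (
	"fmt"
	"sync"
)

// extraDone runs one worker guarded by a WaitGroup, waits for it to
// finish, then calls Done once too often. The surplus Done drives the
// counter negative, which the sync package reports by panicking; we
// recover and return the panic message.
func extraDone() (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprint(r)
		}
	}()

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done() // the one legitimate Done
	}()
	wg.Wait()

	wg.Done() // one Done too many
	return ""
}

func main() {
	fmt.Println(extraDone()) // sync: negative WaitGroup counter
}
```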

31. It's basically a bunch of people who see themselves as too smart to believe in God; instead they have just replaced it with AI and the Singularity and attribute similar stuff to it, e.g. eternal life, which is just heaven in religion. Amodei was hawking the doubling of human lifespan to a bunch of boomers not too long ago. Ponce de León also went to search for the fountain of youth. It's a very common theme across human history. AI is just the new iteration where they mirror all their wishes and hopes.
</comments_about_topic>

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.


commentCount

31
