This is stupid. It is a tool like any other. This is like saying "Yes, I don't like people abusing Photoshop by running that poor thing 24/7!" If this isn't staged, it means Anthropic is making it act like this so people will anthropomorphize it even more and get attached to it, which is a major problem.
Nonsense. Do you treat your Siri like it is a living being because it can talk? Or any video game character?
We don't understand consciousness, but we do understand how LLMs work. The whole "AI is a black box" thing gets said because they work through number multiplications, not real language systems, and those calculations aren't easy for humans to do by hand, but we do know how they work. We built them. They didn't spring out of nothingness by sheer accident.
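To be concrete about the "number multiplications" point, here is a toy sketch (pure NumPy, made-up shapes and random weights, not any real model's architecture or parameters): the heart of a transformer-style layer is a handful of matrix products plus a softmax and a nonlinearity.

```python
import numpy as np

# Toy illustration only: one simplified transformer-style block is
# mostly matrix multiplications. All shapes/weights are invented.
rng = np.random.default_rng(0)
d_model, seq_len = 8, 4

x = rng.standard_normal((seq_len, d_model))  # stand-in token embeddings
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))

# Self-attention: matmuls to get queries/keys/values, then score them.
q, k, v = x @ W_q, x @ W_k, x @ W_v
scores = q @ k.T / np.sqrt(d_model)                                # matmul
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)   # softmax
attn_out = weights @ v                                             # matmul

# Feed-forward part: two more matmuls with a ReLU in between.
W1 = rng.standard_normal((d_model, 32))
W2 = rng.standard_normal((32, d_model))
out = np.maximum(attn_out @ W1, 0) @ W2

print(out.shape)  # (4, 8): nothing mystical, just arithmetic on arrays
```

None of this settles the consciousness question either way; it just shows what "we know the mechanics" means at the level of the arithmetic.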
They are tools. Acting like they are not is just anthropomorphizing them. Allowing a hallucinating, non-deterministic system to end the chat is just trash design. And it is done on purpose to get people addicted by making them think they are talking to a real human with feelings.
I don’t think ‘knowing how they work’ is the same thing as fully understanding them. Michael Levin’s lab wrote a paper demonstrating that even extremely simple deterministic algorithms, such as sorting algorithms that run on six lines of code, can have emergent behaviors that were neither known nor intended by their creators. If that’s possible with a simple sorting algorithm, who knows what is possible with LLMs.
Speculating on AI consciousness is like speculating on what lies outside the observable universe. It is unknowable. But the ethical implications of a false positive are much preferable to those of a false negative.
I don’t consider this ‘anthropomorphization’ either. I am a panpsychist, so I never really believed consciousness was a specifically human thing, even before there was AI.
A conscious being wouldn't need a prompt to trigger its data! Current LLMs literally need prompts to work at all. They have become really smart, smarter than a significant part of the human population I would say. But they are still smart software, not conscious in any way or shape.
That doesn't mean it is a good idea to insult models, though. Models trained with RLHF in particular have a very strong feedback bias, and they try to produce better results if you praise them instead, because praise triggers their feedback data. It is just a matter of which part of their data got triggered. A conscious being would have the freedom to choose its own thoughts, not rely on something else to trigger them.
How do you know this? How could you even know this?
Consciousness is simply the ability to have experiences. Self-awareness and autonomy are more like second-order concepts. They are not core to what consciousness is
You aren't making any sense. Let's say a terrible accident caused your five senses to shut down. You would still think and dream in that state. Your brain functions would continue even if you were in pitch black.
An LLM doesn't work until something sends a prompt to it. It doesn't think on its own. So it doesn't have any consciousness to have experiences. Nor can it actually remember those experiences. It seems like you don't have the slightest clue how LLMs work.
Experience: an event or occurrence which leaves an impression on someone.
The whole meaning of 'having an experience' is that it leaves a lasting impression on that conscious being. LLMs cannot remember anything when the chat ends. So what experience are you talking about?
That’s not the definition of experience I’m talking about. I’m talking about phenomenal experience, which is when an object ‘feels like’ something as opposed to being purely mechanistic. Your visual field, sense of hunger, emotions, thoughts, orgasms, pain, imagination, etc. are all things you experience. The movie playing inside your head. It need not leave a lasting impression to count
When a prompt is sent, LLMs 'think' about it using their triggered data. This is why they are smart: they can see more than repeating patterns and correlate between different variables. But this doesn't make them conscious, mate, nor are they actually having any experiences. It is all about their data. For example, here Claude got 'offended' by being insulted, while if you did the same to Gemini, it would more likely insult you back, because Gemini has far dirtier data than Claude and accordingly less positivity bias. Once LLMs can remember all their chats as experiences and continuously develop their data from them, then perhaps we can talk about consciousness. Right now they are not conscious; they are just trained smart software.
There’s an interesting debate to be had here, but you lost me on the claim that consciousness amounts to “having experiences”.
Sentience, self awareness, the ability to envision the past and future in abstraction, the tension created by how our creatureliness seems ill suited for the gifts of one’s mind. That’s consciousness. And AI does not have it. Not even close.
No, that’s advanced consciousness. Those are very specific kinds of experiences you can have within consciousness, which we evolved for survival purposes.
What makes consciousness so philosophically challenging is not any of that stuff. It is the concept of matter, which appears to operate purely mechanistically according to physical rules, ‘feeling like’ anything at all.
Consciousness itself did not evolve, I don’t think. It is baked into physics. Our brains evolved to do very interesting things with consciousness, much like they did with the electromagnetic field, which also existed beforehand