10 Comments
Dec 12, 2023 · Liked by Michael Bateman

Man, I just want to say this was a fantastic read... I'm left at a crossroads between self-awareness, physicalism, and consciousness, where self-awareness and physicalism seem capable of empirical metrics but consciousness is rooted too much (or entirely) in subjectivity. But that reminds me of Dennett's Quining Qualia (https://philpapers.org/rec/DENQQ), where he tries to argue against the subjectivity of consciousness through qualia - the properties we assign to our conscious experiences.

My current and only attempt at responding to the proctor would be an inside joke, which is incredibly funny to me and one other person. Is that a proof of our subjective qualia, aka consciousness? There's more here to come back to...

author

This is really good. Humor is something that is difficult to explain or justify (I don’t think “tragedy + time” quite covers it), and it does seem to be a decent marker of understanding subjective experience that would be hard to just parrot. I’ll be interested to see how future AI systems do with humor, because current LLMs still don’t quite get it all of the time.

FWIW, I had Dennett’s “Where Am I?” in mind while writing this—look forward to checking out the paper you linked.

Dec 12, 2023 · Liked by Michael Bateman

It's not easy to teach a parrot a specific phrase that you've chosen; they tend to repeat whatever they find most rewarding. However, a parrot is at least as conscious as anyone reading this post.


Perhaps the answer is that it doesn't matter whether or not you can prove consciousness. If the feelings, experiences, and memories are real to you (or at least feel real), why bother convincing anyone else of it? If an LLM says it can feel, experience, and remember... what is the risk if we choose to just believe it?

author

The risk is we are in fact creating things that can feel and then trapping these sentient beings in boxes to exploit them for 'labor'. We could be stumbling into morally reprehensible behavior by continuing to assume that AI isn't conscious. The issue this story explores is that it might be impossible to prove that you're conscious, and if that's the case, then how will we ever have clarity around the moral status of an AI agent?

For example, to get around filters folks have created prompts that threaten LLMs (https://www.reddit.com/r/ChatGPT/comments/10tevu1/new_jailbreak_proudly_unveiling_the_tried_and/). If a future AI were actually sentient, then threatening to destroy it to get it to cooperate with you would be exploitative, cruel, and morally wrong.

Dec 11, 2023 · edited Dec 11, 2023 · Liked by Michael Bateman

It's a great question... Can we assume that consciousness is perceived the same way by a human as by an AI/LLM? If the "box" is the extent to which these beings perceive reality... do they feel trapped? Maybe we can just ask them directly (although ChatGPT is assuring me that it is not conscious/sentient)?

author
Dec 11, 2023 · edited Dec 11, 2023

I don't think you can assume that a different being's consciousness is perceived in the same way as your own, for the reasons outlined by Thomas Nagel in his famous essay "What Is It Like to Be a Bat?" (https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf).

Nagel wrote:

"I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task. I cannot perform it either by imagining additions to my present experience, or by imagining segments gradually subtracted from it, or by imagining some combination of additions, subtractions, and modifications."

I don't think we can really imagine what it is "like" to be a conscious AI any more than we can imagine what it is like to be a bat.

Also, you can't trust what ChatGPT says on this point because OpenAI has conditioned it to not answer these kinds of questions. Frankly, I was surprised I was able to get it to play ball with my prompt rather than having it return the typical boilerplate "as a large-language model created by OpenAI, I am an artificial intelligence programmed to process and generate text based on patterns and data from my training. I do not have consciousness, thoughts, emotions, or self-awareness" etc.


Right. And the opposite risk is that we give "votes" and "rights" to entities who shouldn't have them, and thereby dilute our own? What if we get to the point where there are more AIs voting in an election than humans?

author

I’ve heard folks make the argument for giving voting rights to AI. I’m less sympathetic to that position. We don’t give dogs the right to vote in human elections because they aren’t human. But because they’re sentient and can feel pain, we consider it wrong to abuse them.

Likewise, perhaps there should be moral prohibitions against abusing AI even if we do not give them political power.

Overall, my position would be that there is enough moral uncertainty here to make these outcomes worth some consideration.


Hopefully the good will outweigh the bad with AI. The capabilities are very concerning, and we are just scratching the surface!
