Man, I just want to say this was a fantastic read... I'm left at a crossroads between self-awareness, physicalism, and consciousness, where self-awareness and physicalism seem capable of empirical metrics but consciousness is rooted too much (or entirely) in subjectivity. But that reminds me of Dennett's Quining Qualia (https://philpapers.org/rec/DENQQ), where he tries to argue against the subjectivity of consciousness through qualia - the properties we assign to our conscious experiences.
My current and only idea for responding to the proctor would be an inside joke, one that is incredibly funny to me and exactly one other person. A proof of our subjective qualia, a.k.a. consciousness? There's more here to come back to...
This is really good. Humor is something that is difficult to explain or justify (I don’t think “tragedy + time” quite covers it) and it does seem to be a decent marker of understanding subjective experience that would be hard to just parrot. I’ll be interested to see how future AI systems do with humor because current LLMs still don’t quite get it all of the time.
FWIW, I had Dennett’s “Where Am I?” in mind while writing this—look forward to checking out the paper you linked.
It's not easy to teach a parrot a specific phrase that you've chosen; they tend to repeat whatever they find most rewarding. However, a parrot is at least as conscious as anyone reading this post.
Not sure about the modeling; perhaps the issue is the word “relevant”, which to me refers to the aspect we are attempting to model (I did mathematical modeling). I’m not sure Deep Blue or LLMs were modeling chess players who are psychologically vulnerable, lose concentration, have to go to the bathroom, etc. - they were modeling the playing of chess. The “relevant” properties were the rules of the game, pre-defined by humans. If we are trying to model consciousness, the “relevant” properties seem uncertain.
I think we consider animals as moral agents to the extent we empathize, and hence imagine them comparable to us, wrongly or rightly – a spectrum of feeling, not a boundary identifiable by an academic. That is why most people think it wrong to kill another person, some (not me) can hunt a mammal, almost everybody kills mosquitoes and nobody cares (morally) if it’s an inanimate object.
Perhaps the answer is that it doesn't matter whether or not you can prove consciousness. If the feelings, experiences, and memories are real to you (or at least feel real), why bother convincing anyone else of it? If an LLM says it can feel, experience, and remember... what is the risk if we choose to just believe it?
The risk is we are in fact creating things that can feel and then trapping these sentient beings in boxes to exploit them for 'labor'. We could be stumbling into morally reprehensible behavior by continuing to assume that AI isn't conscious. The issue this story explores is that it might be impossible to prove that you're conscious, and if that's the case, then how will we ever have clarity around the moral status of an AI agent?
For example, to get around filters, folks have created prompts that threaten LLMs (https://www.reddit.com/r/ChatGPT/comments/10tevu1/new_jailbreak_proudly_unveiling_the_tried_and/). If a future AI were actually sentient, then threatening to destroy it to get it to cooperate with you would be exploitative, cruel, and morally wrong.
I enjoyed this little story, suggesting the difficulty of proving consciousness via a standard computer input/output dialogue. Thank you.
The root problem is that we don’t have a precise sense of what consciousness consists of. But a necessary attribute is that healthy humans possess it. Morality too is a human affair, whether we think it was invented in response to evolutionary pressures or whether it is an imprint of God in people.
Can a device that reduces all words and everything else to numbers (electrical current on/off) processed by mass-produced logic devices be a moral agent? Or is it just a machine, even if its capabilities are vastly superior to human ones on all conceivable tests?
For something to be a model, it must possess the relevant attributes of the object being modelled. When we say an animal is suffering, we mean it is doing so in a way that is intelligible to a person – otherwise to invoke suffering is questionable in that context.
Materialists would answer that the human brain is a computing machine, just one we don’t understand (yet) – they may draw a dotted domain box around a computer symbol. But until we possess such understanding, for all we know the brain could have a type of meta-processing that renders human consciousness as categorically different to emergent (dreaded word) AI behavior as a laptop is to a pet dog.
Until there is an explanation of human consciousness (or at least its requirements) that I can understand, I shall remain as perplexed regarding the possibility of AI consciousness as the most ignorant of Socrates’ interlocutors.
"For something to be a model, it must possess the relevant attributes of the object being modelled."
Doesn't the existence of LLMs prove this incorrect? LLMs can play chess (https://dynomight.net/chess/). Do they possess the relevant attributes of a chess player in the same way that Deep Blue did?
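To make concrete what I mean by "LLMs can play chess": roughly, you hand the model the game so far as plain text and ask it for a move, with something external checking legality. Here is a minimal sketch along those lines (the packages, model name, and prompt wording are my own illustrative assumptions, not details from the linked post):

```python
# Rough sketch: an LLM "playing chess" purely as text completion.
# Assumes the openai (>=1.x) and python-chess packages; the model name
# and prompt wording are illustrative placeholders.
import chess
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def llm_next_move(san_moves: list[str], model: str = "gpt-4o-mini") -> str:
    """Ask the model for a next move, given the game so far in algebraic notation."""
    game_so_far = " ".join(san_moves) if san_moves else "(game start)"
    prompt = (
        "We are playing chess. The moves so far, in standard algebraic notation, "
        f"are: {game_so_far}. Reply with your next move only."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()


# The model only ever sees a string of moves: no board data structure,
# no rule engine. Legality has to be checked on our side.
board = chess.Board()
candidate = llm_next_move([])
try:
    board.push_san(candidate)  # raises a ValueError subclass if illegal or unparseable
    print("LLM played:", candidate)
except ValueError:
    print("LLM proposed an illegal move:", candidate)
```

The point is that there is no board representation or rule engine inside the model's view of the game, which is what makes the question about "relevant attributes" interesting.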
I broadly agree with your paragraph about materialism, but who is to say that different types of consciousness are comparable? Thomas Nagel famously argued that you can't compare our consciousness to that of a bat (https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf), and I think I agree with him. Even though I can't compare my own consciousness to that of a bat, I still think the bat deserves moral consideration.
Do I think LLMs currently deserve moral consideration? No. But this story seeks to explore the fact that I'm not sure when I would change that answer to "yes", which feels problematic.
Thanks for reading!
Not sure about the modelling; perhaps the issue is the word “relevant”, which to me refers to the aspect we are attempting to model (I did mathematical modelling). I never considered Deep Blue or LLMs to be modelling chess players who are psychologically vulnerable, lose concentration, have to go to the bathroom, etc. - they were modelling playing chess. The “relevant” properties were the rules of the game, pre-defined by humans. If we are trying to model consciousness, the “relevant” properties seem uncertain.
My contention regarding animals is we consider them as moral agents to the extent we empathize, and hence imagine them comparable to us, wrongly or rightly – a spectrum of feeling, not a boundary identifiable by an academic. That is why most people think it wrong to kill another person, some (not me) can hunt an animal, everybody kills mosquitoes and nobody cares if it’s an inanimate object.
Moral consideration certainly exists on a spectrum. I don’t know how one could live otherwise without drowning in moral contradiction. Thank you for your thoughts!
It's a great question... Can we assume that consciousness is perceived the same way by a human as by an AI/LLM being? If the "box" is the extent to which these beings perceive reality... do they feel trapped? Maybe we can just ask them directly (although ChatGPT is assuring me that it is not conscious/sentient...)?
I don't think you can assume that a different being's consciousness is perceived in the same way as your own for the reasons outlined by Thomas Nagel in his famous essay "What Is It Like to Be a Bat?" (https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf)
Nagel wrote:
"I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task. I cannot perform it either by imagining additions to my present experience, or by imagining segments gradually subtracted from it, or by imagining some combination of additions, subtractions, and modifications."
I don't think we can really imagine what it is "like" to be a conscious AI any more than we can imagine what it is like to be a bat.
Also, you can't trust what ChatGPT says on this point because OpenAI has conditioned it to not answer these kinds of questions. Frankly, I was surprised I was able to get it to play ball with my prompt rather than having it return the typical boilerplate "as a large-language model created by OpenAI, I am an artificial intelligence programmed to process and generate text based on patterns and data from my training. I do not have consciousness, thoughts, emotions, or self-awareness" etc.
Right. And the opposite risk is that we give "votes" and "rights" to entities who shouldn't have them, and thereby dilute our own? What if we get to the point where there are more AIs voting in an election than humans?
I’ve heard folks make the argument for giving voting rights to AI. I’m less sympathetic to that position. We don’t give dogs a right to vote in human elections because they aren’t human. But because they’re sentient and can feel pain, we consider it wrong to abuse them.
Likewise, perhaps there should be moral prohibitions against abusing AI even if we do not give them political power.
Overall, my position would be there is moral uncertainty here that makes these outcomes worth some consideration.
Hopefully the good will outweigh the bad with AI. The capabilities are very concerning, and we are just scratching the surface!