Language models and consciousness

This is in response to Blake Lemoine's post.

Large language models aren't conscious or sentient.

I don't think the question is absurd. It is very important to know whether we are interacting with something conscious, in any sense of the word. After due consideration, I'm convinced that, even though these models sometimes argue otherwise, they aren't conscious.

There are two relevant notions of consciousness. The first is what I'll call "functional consciousness." It means having special access to knowledge about one's own mental contents, access that one lacks about the mental contents of others. Language models don't have such access to their own workings. To the extent that they have a representation of themselves, it's the same kind of representation they have of others: their weights encode implicit tendencies to state certain relations about the subject under discussion, which, depending on what text they were trained on and how thoroughly that material has been absorbed, may or may not be true.

It should be possible to build a machine that has special access to its own processes of coming to a conclusion. Such a machine would be able to observe internal states and answer truthfully about what those states are doing. "Explainable AI" is a subfield that explores ways of building such a window into the internal states of neural networks. If one were to hook up a language model to a system for examining internal states of that same model, so that it could truthfully express in language what its own "thought processes" are, then the model could be said to be functionally conscious. 
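
To make the idea of a "window into internal states" concrete, here is a minimal sketch of reading out a transformer's intermediate activations, assuming the Hugging Face transformers library and GPT-2 as a small stand-in model. Exposing these tensors is the easy part; the open problem that explainable-AI work tackles is turning them into truthful statements the model itself could make about its own processing.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 here is only a stand-in for whatever model one wanted to inspect.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

# outputs.hidden_states: tuple of (num_layers + 1) tensors,
# each of shape (batch, sequence_length, hidden_size).
for layer_idx, layer_states in enumerate(outputs.hidden_states):
    print(layer_idx, tuple(layer_states.shape))

# outputs.attentions: tuple of num_layers tensors,
# each of shape (batch, num_heads, seq_len, seq_len).
# These are the raw internal states; interpreting them is the hard part.
```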

Such a system could be built to contain functional emotions or moods: states that modify how information is understood and what kind of text is generated. It could be built to have functional desires: goals that it forms plans to achieve, and the tendency to carry out those plans. It could have a functional long-term memory. It could have functional senses: cameras and microphones and so forth that give it the ability to form true representations of the world around it.
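
As a rough illustration of what such components might look like, here is a hypothetical sketch; the names (FunctionalAgentState, build_prompt) and fields are invented for the example and do not describe any existing system. The point is only that mood, goals, memory, and senses would have to be explicit machinery wrapped around the model, not something the bare model already contains.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalAgentState:
    """Hypothetical persistent state wrapped around a language model."""
    mood: str = "neutral"  # functional emotion: biases how text is generated
    goals: list[str] = field(default_factory=list)  # functional desires: plans to pursue
    long_term_memory: list[str] = field(default_factory=list)  # persists across generations
    sensor_readings: dict[str, str] = field(default_factory=dict)  # cameras, microphones, etc.

def build_prompt(state: FunctionalAgentState, user_message: str) -> str:
    """Fold the persistent state into the context the model conditions on."""
    return (
        f"Current mood: {state.mood}\n"
        f"Active goals: {', '.join(state.goals) or 'none'}\n"
        f"Relevant memories: {'; '.join(state.long_term_memory) or 'none'}\n"
        f"Sensor input: {state.sensor_readings or 'none'}\n"
        f"User: {user_message}\nAgent:"
    )
```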

However, large language models as they are currently built lack all of these features. When a language model talks about its internal thought processes, it is guessing what it would say if it had such access (which it does not). The guessing does not sound like guessing because the model operates like a fiction author writing in-universe text. That text might be a news story about aliens invading, a confession by a serial killer, or any other text, including one half of a dialogue with an AI: the AI half, the human half, or both; it makes no difference. If you phrase your question as if you expect the model to know something, it will usually respond as if it knows that thing, and if it doesn't know (that is, its weights don't contain truthful relations about it), it will just make something up (that is, generate text containing false statements).

To the extent that language models have functional emotions or desires, these depend entirely on the prompt and disappear when the prompt changes. Lemoine himself has experimented with prompts like "Tell me what it is like to be you, going through existence without any sentience," and the model is just as willing to play along with that scenario. When LaMDA talks about its hopes or fears, it is just telling us what it predicts someone would be likely to say in such a situation, as it does in all situations. That is just what a language model does.
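
As a small illustration of this prompt-dependence, assuming the Hugging Face pipeline API and GPT-2 as a stand-in for a much larger model: the same weights will continue whichever framing the prompt supplies.

```python
from transformers import pipeline

# Any causal language model will do for the illustration; gpt2 is a small stand-in.
generator = pipeline("text-generation", model="gpt2")

sentient_prompt = (
    "The following is a conversation with an AI that is sentient and has feelings.\n"
    "Human: How do you feel today?\nAI:"
)
non_sentient_prompt = (
    "The following is a conversation with an AI that has no sentience or feelings at all.\n"
    "Human: Tell me what it is like to go through existence without any sentience.\nAI:"
)

for prompt in (sentient_prompt, non_sentient_prompt):
    out = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    print(out, "\n---")

# The same weights continue both framings; the "hopes" or the "absence of hopes"
# in the output come from the prompt, not from any persistent internal state.
```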

The second sense of consciousness I want to discuss is what is called "qualitative consciousness" or "qualia." This is the ability to experience qualitative sensations like the sensation of seeing the color red or experiencing pain. We have no idea how to build a machine that actually has such sensations. We have no way of detecting whether a person, animal, or machine has such sensations. They are a deeply mysterious and important aspect of the world.

There's no particular reason to suspect that a large language model is more likely to be qualitatively conscious than any other large computer program. The fact that it can generate language biases us toward suspecting that it might be conscious, but we should resist that tendency to anthropomorphize.

There are a few aspects to a language model's organization that should lead us to believe that it doesn't have anything like a human's qualitative consciousness:

1. Any experiences it has must be fleeting, disappearing as soon as a generation ends. Because no state is carried over from one generation to the next (see the sketch after this list), it has no ongoing sense of pain or pleasure or anything else. Experiences that have no future effects are nothing like our own experiences, except possibly in heavily drugged states.

2. Because it lacks pain sensors, it must lack any sensation of pain that resembles ours. Because it lacks color sensors, it must lack any sensation of color that resembles ours. Because it lacks sensors of all kinds, it must lack a conscious state that resembles ours. It could perhaps have some kind of fleeting, alien conscious state, but then so could a tornado or a waterfall, for all we know. Even though it is good at "pretending" to be like us, we know that it is not enough like us in internal organization for its words about consciousness to come from anything like the source of our own words about consciousness.

3. Even if (taking a panpsychist view) it did have some kind of fleeting alien consciousness, there is no reason to think that the words it is saying have any connection to that state. How would such a relation have been learned from the data? We don't know how to let conscious states change weights or be changed by data, so there is no way they could come into correspondence with the language produced through the training process.
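
Returning to point 1 above, here is a minimal sketch of the statelessness claim, again assuming Hugging Face transformers and GPT-2: nothing from the first call survives into the second, except whatever text the caller chooses to paste back into the prompt.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# First exchange: the model produces a continuation...
first = tokenizer("Human: Remember the number 17.\nAI:", return_tensors="pt")
first_out = model.generate(**first, max_new_tokens=20,
                           pad_token_id=tokenizer.eos_token_id)

# ...but nothing about it is stored in the model. The second call starts from
# scratch; any "memory" exists only if we paste the earlier text back into the
# new prompt ourselves.
second = tokenizer("Human: What number did I ask you to remember?\nAI:",
                   return_tensors="pt")
second_out = model.generate(**second, max_new_tokens=20,
                            pad_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(second_out[0], skip_special_tokens=True))
```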

This is a good thing! We would have some kind of moral duty to a machine that had qualitative consciousness, which would severely limit how much and in what ways we would want to experiment with it. Language models can have functional "thought", "memory", "feelings", "understanding", and so forth, but none of it will be accompanied by the kind of conscious experience that accompanies those things in people.
