Philosophical Zombies and Large Language Models

A standard argument against Chalmers' philosophical zombies goes something like this. I've read variations of it by Eliezer Yudkowsky, Raymond Smullyan, John Barnes, and David Chalmers himself (who was building the case that zombies are conceivable, but was steelmanning the opposing argument):

"A philosophical zombie is defined as follows: it behaves the same way as a human, but doesn't have any internal conscious experience accompanying that behavior. That we can imagine such a thing shows that phenomenal consciousness is an extra added fact on top of the physical universe. However, the fact that we are talking about consciousness at all shows that consciousness does have an effect on the world. Can you imagine how impossible it would be for an a copy of a person that lacked consciousness to go on as we do talking about consciousness and internal experience and all that? It's absurd to imagine."

Yet large language models behave exactly this way. If you encourage them to play a philosopher of mind, they imitate having a consciousness, but don't have one. In fact, they can imitate almost anyone. Imitating is what they do. LLMs are not the same as zombies, but I think they make it very plausible that, with some future advanced technology, you could replace a person with a zombie and no one would be able to tell.
