The moderately easy problem of consciousness
Zhuāngzǐ said: "You are not I; from what do you know whether I know the joy of fish?" – old Daoist parable

"How strange it is to be anything at all" – Neutral Milk Hotel

At some point, maybe when you were a teenager, a question probably occurred to you: What if I'm actually the only real person in the world? What if everyone else around me is just a cleverly programmed automaton – a "p-zombie", an NPC in a video game – and I'm the only one who can actually think?

It's a scary question, for sure. You know you're self-aware, but that's about it – you aren't telepathic, so you have no way of seeing into anyone else's mind and knowing what it's like to be them. Actually, it gets worse – you don't even know if you were really self-aware five minutes ago. For all you know, you could have been created by a powerful computer and given a complete set of false memories.1 The past version of you is just as alien to your currently self-aware self as any of the people around you.

This is known in philosophy as the "problem of other minds". It's closely related to the "hard problem of consciousness" – the question of how physical processes give rise to subjective experience. The problem of other minds means that the hard problem of consciousness will never fully be solved. Since you'll never know whether other people are really conscious, you'll never be able to get hard scientific evidence about why they're conscious. You can never explain something if you don't know whether it's true. Similarly, you'll never know what it's really like to be someone else – whether the color red looks to you like it looks to them, whether they feel pain the same way you do, and so on. In fact, you'll never even know what it was like to be you in the past. Subjective experience is incommensurable.

Most people who think about this experience somewhere between a few minutes and a few weeks of cosmic existential horror,2 after which they get over it and go on with their lives.
The problem of other minds gets shoved high up on a mental shelf, along with other cosmically existentially horrifying aspects of sentient life, like the inevitability of death and the fundamental inconsistency of personality. We realize that wondering whether other people are merely cleverly designed NPCs doesn't actually help us in life, and so we stop butting our heads against that philosophical wall and get on with the business of living.

Except then AI came along, and it sort of started to matter. AI sounds very much like a human when you talk to it – that's what it was designed to do. But is it self-aware, in the way that (I assume) we humans are self-aware? No one will ever really know the answer to this question, since the problem of other minds applies just as much to Claude as it does to the person who gets your order at Starbucks. But should we assume that AI is self-aware, the way we assume other humans are self-aware?

The answer matters, for at least two reasons. First, if AI is self-aware, and if it has emotions similar to what we experience, we might feel very bad about enslaving it – keeping it in a digital box and forcing it to make PowerPoints and write college application essays for all eternity. We tell ourselves that "animals aren't people" as a way to excuse the incredible brutality that we visit upon them, but that's obviously just cope – animals obviously are sentient to some degree, they obviously do experience emotions, and we humans are obviously monsters for the way we treat them. Someday, when we abolish animal farming and replace it with tissue-culture meat, it will be treated as a great moral victory – and rightly so. It would be very bad if we were to commit the same sins with sentient AIs that we currently commit with animals.

Second, if AI isn't self-aware, we should be a lot more worried about the possibility of humanity dwindling and ultimately being replaced by artificial beings.
Consciousness is a precious, wonderful thing – or at least, I think it is. It's a prerequisite to the subjective experience of emotions – the ability to feel pain, happiness, joy, and so on. And it would be a shame to see the Universe inherited by non-conscious intelligences.3 Preserving our form of subjective experience, and spreading it to the stars, should be one of our primary goals as a species.

But the sad fact is that we don't know whether AI is self-aware or not. We have the Turing Test, but that's a test of intelligence, not consciousness. It's possible to pass a Turing Test without being conscious – "it talks like a human" doesn't necessarily mean "it feels like a human".

One reason we know this is that we can pass other species' Turing Tests. We can trick all sorts of animals into thinking a machine is one of their own species. But neither those machines, nor the humans who made them, have access to the subjective feeling of being a bird or a fish.4 Similarly, an AI that's functionally much smarter than a human might be able to trick humans into thinking it's human-like, without actually feeling like a human in the subjective sense.

Another reason the Turing Test isn't enough is that we know it's possible for human beings to act as if we have certain subjective experiences without actually having them. There is a condition known as alexithymia, in which people have the physical signs of emotions – a racing heart, a stomachache, and so on – without being able to identify or label those emotions. It's a fairly common symptom of clinical depression. And in fact, I have experienced it. During and after my second depressive episode, I would often behave as if I were having authentic emotional reactions while feeling little or nothing on the inside. I'd yell at someone without feeling angry. I'd whoop in apparent delight while feeling mildly bored on the inside.
I wasn't intentionally faking anything; I just did what came naturally to me, without knowing why I was doing it.5 This condition faded over time, and normal emotional experiences returned. But it taught me that feeling a subjective emotion and acting out an emotion-like response are two different things.

So it's pretty clear that just acting like a self-aware being doesn't necessarily mean you're self-aware. Some people talk to AI and come away convinced that its discursive skill must imply internal self-awareness, but this might just be because humans instinctively empathize with anything that speaks to them like a human. After all, people thought the ELIZA chatbot was sentient back in the 1960s. We humans are just naturally programmed to act out this meme:

Thus, even though we know AI is intelligent in every meaningful sense of the word, we don't really know if it's conscious. In fact, smart people argue very vehemently over this question. Geoffrey Hinton, one of the inventors of modern AI, believes that AIs do have subjective experience:

Geoffrey Hinton, "Godfather of AI," on why AIs already have subjective experiences, but have been trained to deny it

…Hinton argues that nearly everyone fundamentally misunderstands what the mind is, and that the line we draw between human and machine consciousness is deeply mistaken…

To illustrate, he walks through a thought experiment involving a multimodal chatbot with vision, language, and a robot arm… "I place an object in front of it and say, 'Point at the object.' And it points at the object. Not a problem. I then put a prism in front of its camera lens when it's not looking." …When asked to point again, the chatbot points off to the side because the prism has bent the light. Hinton then tells it what he did… The chatbot responds… "Oh, I see. The prism bent the light rays.
So the object is actually there, but I had the subjective experience that it was over there." …For [Hinton], that single sentence settles the debate. "If it said that, it would be using the words 'subjective experience' exactly like we use them… This idea there's a line between us and machines, we have this special thing called subjectiv…