Infinite midwit
The better AI has gotten, the less anxious I've become. A few years ago, when the computers first started talking, it was reasonable to believe that we would soon be in the presence of omnipotent machines. For someone like me, whose job is to produce words on the internet, it seemed like only a matter of time before I would have to fill my pockets with stones and wade into the sea. But we've gotten a closer look at our electric god as it has slouched toward San Francisco to be born, and it isn't quite like I feared. I don't feel like I have access to an on-demand omnipotence. Instead, I can talk to an infinite midwit: a stooge who is always available and very knowledgeable, but smart? Well, yes and no, in weird ways. Even as it has learned to count the number of "r"s in the word "strawberry", even as it has stopped telling people to put glue on their pizza, there's still a hole in the center of its capabilities that's as big as it was in 2022, a hole that shows no signs of shrinking. I only know this because that hole is where I live.

Some problems have clear boundaries and verifiable solutions, like "What's the cube root of 38,126?". These problems require objective intelligence. Other problems are vague and squishy, and it's not clear whether you've solved them, or whether they exist at all, like "How do I live a good life?". These problems require subjective intelligence. Objective intelligence can be trained, reinforced, and validated. Subjective intelligence cannot.

It's unfortunate that people use one word to refer to both of these capabilities, when in fact they have nothing to do with each other. It is also, ironically, a case of objective intelligence overshadowing subjective intelligence: these skills are obviously and intuitively different, but a century of psychological research has "proven" that only one of them exists.
Over and over again, psychologists have found that all intelligence tests correlate with one another, even when you ostensibly try to test for "multiple intelligences". Numbers don't lie, and they all say that there's only one intelligence, the so-called g-factor. The problem is that any test of intelligence is only ever a test of objective intelligence. "How do I live a good life?" is not a multiple-choice question. "Discovering" the g-factor again and again is like being surprised that you find the same patch of sidewalk every time you look under the same streetlight.

AI is pure objective intelligence. That's why each new model comes with a report card instead of a birth certificate. The promise of artificial superintelligence is based on the idea that objective intelligence is the only intelligence. Or, even if there are multiple forms of intelligence out there, that they are fungible. To be an AI maximalist is to believe we are playing under Settlers of Catan rules, where if you have enough of any one resource, you can trade it for any other resource. If you have infinite objective intelligence, then you have infinite everything. So we ought to ask: how well is this bit of magical thinking working out so far?

It's hard to judge the subjective intelligence of a machine, both because it's hard to judge subjective intelligence in general, and because LLMs occupy such a small slice of existence. When you meet a human who can do quadratic equations in their head but can't hold onto a job or a relationship, you know they're missing something upstairs. But machines don't have lives they can ruin, so all we can do is look at the things they say. And as soon as they string a few sentences together, it's clear there's something wrong.

Writing is a task that takes both objective and subjective intelligence. LLMs ace the objective parts the same way they ace every test; you can't fault their grammar, semantics, or syntax.
But good writing requires an additional bit of juju that makes the prose live and breathe, a light on the inside that can't be quantified or checklisted. And even though AI can now produce A+ five-paragraph essays, that light has never come on.

It's remarkable how much consensus there is about this fact among people who care about words. Sun, Hoel, and Kriss are all very different kinds of writers—Sun is a tech journalist/anthropologist, Hoel is a neuroscientist/novelist, and Kriss is...well, his bio says he's "a writer and your enemy"—and yet all three of them have recently published pieces with the unanimous conclusion that LLMs make crummy writers. (Sun in The Atlantic, Hoel on his Substack, and Kriss in the NYT.)

I agree with them. It's cool that AI can fold proteins, create websites, fact-check journal articles, etc., but it can't write anything that I am interested in reading. The problem isn't that it hallucinates or makes mistakes. It's that everything it writes vaguely sucks. I drag my eyes across the words and I feel nothing. That's not quite right, actually—I feel like, "I would like this to be over as soon as possible." When I see the ideas that the machines think are insightful, I wince. Talking to the computer is like taking a sip of scalding hot coffee: keep doing it and you'll lose your sense of taste.

It's hard to describe exactly what the machines are missing. Have you ever loved someone who once loved you back, then didn't anymore? Did you notice how their eyes dimmed? Did you note the disappearance of that subtle wrinkle in the temples that distinguishes a real smile from a fake one? Did you catch it when you stopped being cared for and started being humored? The moment you realize what's happening, you age out of your enchantment—one day you're crawling through a wardrobe to Narnia, and the next day you open up the wardrobe and there's nothing but hangers. Talking to an AI feels a bit like that, except without the nice part at the beginning.
Of course, that comparison is literally nonsense. Despite what the ancient scholastics might have claimed, there are no actual lights behind anyone's eyes. Despite what your psych 101 professor might have told you, some people can fake their smiles just fine. I don't have a wardrobe, and I've never met a lion or a witch. And yet any human can understand the analogy, because they know what it feels like to be dumped, or at least what it feels like to be rejected. The words themselves don't contain that feeling—they are a recipe for creating that feeling inside your own head, for assembling the right set of emotions out of the experiences you have at hand. If I do a good job, the subjective experience that results inside you might resemble the one that originated inside me, but it will never be identical, because we're working with different ingredients.1

The computer doesn't know any of this. It can't know any of this. It can only read the cookbook; it can't taste the meal. Objective knowledge can make your sentences true, but it can't make them alive. Without access to subjective knowledge, you quickly hit a wall. And unlike all previous walls that AI has surmounted, you can't overcome this one by scaling—either in the literal or metaphorical sense—because it's a wall with a width you cannot describe and a height you cannot see.

That wall is the only reason I'm still here. I would rather die than let a computer write my posts, but I would certainly like to know if it could, in case I need to start gathering pocket-stones and locating the nearest sea. And so I check, from time to time, whether the leading AI models can do me better than I can. The result sounds like a version of me that has sustained blunt force trauma to the back of the head and spent years recovering in a hospital where the Wi-Fi, for whatever reason, only lets you log onto LinkedIn.
I won't repost the prose here because it's not even bad enough to be interesting, and because you've already seen it all over the internet: metaphors that don't quite congeal, turns of phrase that sound insightful as long as you don't actually think about them, breathless insistence that every sentence is a revelation. If a student submi…