The Secret to Understanding AI
In the before times, before machines could hallucinate, before compute was a noun, it was not uncommon to go several weeks without someone telling me the world was about to end. Similarly, a whole season might pass without anyone assuring me that it was also, simultaneously, about to become perfect.

That particular luxury died on November 30, 2022, when OpenAI released ChatGPT to the public. What followed was less a news cycle than a weather event: a tropical depression that would not budge. Within weeks, millions of people had their first experience with generative AI. Within months, every major technology company had announced its own version of a large language model, or a partnership, or a pivot. Venture capital arrived drooling. Most people in tech think about money, but AI-profit projections are different: like CFO fan fiction, written in Excel. In 2023, the McKinsey Global Institute estimated that $4.4 trillion in annual corporate profits could be up for grabs from generative AI alone. Morgan Stanley estimated $40 trillion more in operational efficiencies. The words artificial intelligence went from obscurity to a constant hum, present in every earnings call, every school-board meeting, and far too many arguments at dinner tables.

Yet for all of the noise, a simple question stayed unanswered: What exactly was this new technology going to do for people? Not for corporations or the billionaires who aspired to become trillionaires, but for people with mortgages and sick parents and children struggling to learn things. Answers, when they came, were either so enormous as to be meaningless or so specific as to seem beside the point: AI would cure cancer and write your text messages. AI would create deadly superviruses and drain all meaning from our existence. I got to know some of the people delivering these competing prophecies, and they had a lot of overlapping traits. Brilliance, certainty, delight at being players in a turbulent drama. A hairball of motives.
Accelerationists, the cure-cancer people, were often in charge of, or funded by, or praying to be funded by, the companies whose products they were predicting would save civilization. Doomers, the extinction people, were at the time led by Elon Musk, who sued OpenAI to try to reclaim its founding mission as a nonprofit serving humanity. (Although a more plausible read was that he wanted to hobble his archnemesis and former partner, Sam Altman, long enough for his own AI start-up, xAI, to catch up.)

Geniuses, rivalries, clashing ideologies: all lovely ingredients for a writer like me to work with. But documenting a state of confusion isn't the same as providing clarity, and after months of talking with the assorted zealots, I was getting a little loopy myself. I needed someone who could see the technology clearly, not as a salvation or a catastrophe or a Powerball ticket, but as a tool.

Danny Hillis was one of the first people on the internet, back when it was still called the ARPANET and the community of users was so small that he knew all of the other Dannys online. His work on parallel processing led to the creation of cloud computing, which laid a foundation for the rise of artificial intelligence. Danny listened to me rant about the AI industry with sympathy and bemusement. He's seen every gold rush in Silicon Valley, and his heart rate is as steady as the Buddha's. When I arrived at my exasperated coda ("Danny, what is AI actually good for?"), he was ready. "Try to imagine the tech without the tech companies," he told me. To my embarrassment, it had not previously occurred to me that one could do that. Danny was certain that an AI counterculture had to be out there, beyond the tech megalopolises, full of people experimenting with AI in ways more meaningful than the latest chatbot-calendar integration. Why not write about them?
Not long after, I discovered whole tribes of people who were tinkering with artificial intelligence to make things that matter (education, health care, government, human connection) work better. A Cleveland Clinic cardiologist was using AI to make lifesaving heart scans available to everyone; teachers in an Indiana school district were finding new ways to engage with students; technocrats were bringing their deeply unglamorous government agencies into modernity; a former physicist was racing to build AI-powered translation for nonverbal autistic kids, including her son. Like the accelerationists, these people are plenty frustrated with bureaucracies and ideas that have aged into obsolescence. But they don't believe in the techno-optimist philosophy known as "Move fast and break things," because they don't want to break things; they want to fix things. They had run into a problem that defied conventional solutions, and were stubborn or desperate enough, or just cared enough, to keep going, even if it meant having to learn more about technology than they had ever wanted to.

The downsides of AI are real: misuse, malfunction, the temptation to replace people instead of teaching them new skills. It's easy to understand why some people would prefer that AI just go away; no one is in the market for another existential risk. But here's the thing about defensive crouches: They don't actually stop anything. They just ensure that you get whacked in the back of the head. The people in the AI counterculture have figured out that the only effective response to a transformative technology is not to hide from it but to get your hands dirty and make it work to preserve and improve the things you care about. That's not naive optimism; it's enlightened self-interest.

A week before the 2024 presidential election, I went to Washington, D.C., for the least sexy reason: I'd heard that the IRS was up to something. Let me rephrase.
People who work inside the tight circle of government information technology kept whispering the equivalent of Psst. Y'know what's going on at the IRS? When I would answer that I did not, they'd smile and tease me with rumors of some secret AI Fight Club inside the federal government that may or may not exist. Who could say? It seemed unlikely the IRS was working on a supercool, supersecret AI project, because the IRS runs on ancient tech and has never once flirted with being cool. As for secrecy, I had entered its headquarters to meet then-Commissioner Danny Werfel within two weeks of requesting an interview. But after a few minutes in Werfel's waiting room, I began to wonder. Dull-blue carpet. Walls the color of cafeteria pudding. The room's center of antigravity, its un-focal point, was a faux-mahogany cabinet displaying unloved plaques and seasonal gourds. I had never been in a place so perfectly optimized to kill all curiosity. If a diabolical genius were hiding an incredible AI project, this is the anteroom he'd build.

Werfel is trim and boyish, and he welcomed me into his office with the slightly besieged air of someone used to getting kneecapped whenever he stands. Werfel knew what I wanted to discuss, and cautiously allowed that "there's a trajectory for artificial intelligence that has a net positive impact on society and government." But he raised a hand to indicate he would go no further: complications first. The IRS is bound by rules about "inherently governmental" functions and cannot simply replace its employees with AI. It has a duty to serve all taxpayers equally, whether they file on smartphones or with pencil and paper, so imposing chatbots on them isn't an option. In any case, the IRS has some of the strictest privacy and cybersecurity requirements in the world, and many AI products don't meet them.
Werfel sidestepped politics (commissioners are appointed to a five-year term that is intended to span presidencies) while acknowledging that the IRS is inherently political. From 2010 to 2021, as the annual flow of tax returns increased by 15 million, its budget was slashed by more…