FAQ about the book and our writing process
The AI Snake Oil book was published last week. We're grateful for the level of interest: it's sold about 8,000 copies so far. We've received many questions about the book, both its substance and the writing process. Here are the most common ones.

We do! The book is not an anti-technology screed. If our point was that all AI is useless, we wouldn't need a whole book to say it. It's precisely because of AI's usefulness in many areas that hype and snake oil have been successful: it's hard for people to tell these apart, and we hope our book can help. We also recognize that the harms we describe are usually not solely due to tech, and much more often due to AI being an amplifier of existing problems in our society. A recurring pattern we point out in the book is that "broken AI is appealing to broken institutions" (Chapter 8).

There's a humorous definition of AI that says "AI is whatever hasn't been done yet". When an AI application starts working reliably, it disappears into the background of our digital or physical world. We take it for granted. And we stop calling it AI. When a technology is new, doesn't work reliably, and has double-edged societal implications, we're more likely to call it AI. So it's easy to miss that AI already plays a huge positive role in our lives. There's a long list of applications that would have been called AI at one point but probably wouldn't be today: robot vacuum cleaners, web search, autopilot in planes, autocomplete, handwriting recognition, speech recognition, spam filtering, and even spell check. These are the kinds of AI we want more of: reliable tools that quietly make our lives better.

Many AI applications that make the news for the wrong reasons today, such as self-driving cars due to occasional crashes, are undergoing this transition (although, as we point out in the book, it has taken far longer than developers and CEOs anticipated).
We think people will eventually take self-driving cars for granted as part of our physical environment. Adapting to these changes won't be straightforward. It will lead to job loss, require changes to transportation infrastructure and urban planning, and have various ripple effects. But it will have been a good thing, because the safety impact of reliable self-driving tech can't be overstated.

AI is an umbrella term for a set of loosely related technologies and applications. To answer questions about the benefits or risks of AI, its societal impact, or how we should approach the tech, we need to break it down. And that's what we do in the book.

We're broadly negative about predictive AI, a term we use to refer to AI that's used to make decisions about people based on predictions about their future behavior or outcomes. It's used in criminal risk prediction, hiring, healthcare, and many other consequential domains. Our chapters on predictive AI have many horror stories of people denied life opportunities because of algorithmic predictions. It's hard to predict the future, and AI doesn't change that. This is not a limitation of the technology but an inherent limit to predicting human behavior, one grounded in sociology. (The book owes a huge debt to Princeton sociologist Matt Salganik; our collaboration with him informed and inspired the book.)

Generative AI, on the other hand, is a double-edged technology. We are broadly positive about it in the long run, and emphasize that it is useful to essentially every knowledge worker. But its rollout has been chaotic, and misuses have been prevalent. It's as if everyone in the world has simultaneously been given the equivalent of a free buzzsaw. As we say in the book:

See the overview of the chapters here. We know that book publishing moves at a slower timescale than AI. So the book is about the foundational knowledge needed to separate real advances from hype, rather than commentary on breaking developments.
In writing every chapter, and every paragraph, we asked ourselves: will this be relevant in five years? This also means that there's very little overlap between the newsletter and the book.

The AI discourse is polarized because of differing opinions about which AI risks matter, how serious and urgent they are, and what to do about them. In broad strokes:

- The AI safety community considers catastrophic AI risks a major societal concern, and supports government intervention. It has strong ties to the effective altruism movement.
- e/acc is short for effective accelerationism, a play on effective altruism. It is a libertarian movement that sees tech as the solution and rejects government intervention.
- The AI ethics community focuses on materialized harms from AI such as discrimination and labor exploitation, and sees the focus on AI safety as a distraction from those priorities.

In the past, the two of us worked on AI ethics and saw ourselves as part of that community. But we no longer identify with any of these labels. We view the polarization as counterproductive. We used to subscribe to the "distraction" view but no longer do. The fact that safety concerns have made AI policy a priority has increased, not decreased, policymakers' attention to issues of AI and civil rights. These two communities both want AI regulation, and should focus on their common ground rather than their differences.

These days, much of our technical and policy work is on AI safety, but we have explained how we have a different perspective from the mainstream of the AI safety community. We see our role as engaging seriously with safety concerns and presenting an evidence-based vision of the future of advanced AI that rejects both apocalyptic and utopian narratives.

It depends on what one means by writing the book. The book is not just an explainer, and developing a book's worth of genuinely new, scholarly ideas takes a long time.
Here's a brief timeline:

- 2019: Arvind developed an early version of the high-level thesis of the book
- 2020: We started doing research and publishing papers that informed the book
- mid-2022: Started writing the book and launched this newsletter
- Sep 2023: Submitted the initial author manuscript
- Jan 2024: Submitted the final author manuscript after addressing peer reviewers' feedback
- May 2024: Final proofs done
- Sep 2024: Publication

Doing the bulk of the writing in a year required a lot of things to go right. Here's the process we used.

We figured out the structure up front. Changes that affect multiple chapters are much harder to pull off than changes within a chapter. Since we'd been thinking about the topics of the book for years before we started writing, we already knew at a high level what we wanted to say.

Throughout, we had periodic check-ins with our editor, Hallie Stebbins. Early on, Hallie helped us sanity-check our decisions about structure, and sharing our progress with her gave us something to look forward to. In the later stages, her input was critical.

We divided up the chapters between us. Of course, we were both involved in every chapter, but it's way less messy if one person takes the lead on each one. For this to work well, we had to both use the same "voice". Can you tell who took the lead on which chapter?

We sent Hallie our drafts of each chapter as we completed them (after a couple of rounds of internal editing), instead of waiting till the end. We're glad we did! Although we're decent writers, Hallie had, on average, a couple of edits or suggestions per paragraph, mostly to fix awkward wording or point out something that was confusing. While the line edits made the book dramatically more readable, even more important was her high-level feedback. Notably, she repeatedly asked us "how does this relate to the AI Snake Oil theme?", which helped keep us focused. Oh, and Hallie couldn't tell who took the lead on which chapter, which was a big relief!
We wrote the introductory chapter last. We know far more people will read the intro than the rest of the book, in part because it's available online, so we reall…