AI's big messaging pivot
Something big happened in the world of AI the other day: Sam Altman, founder and CEO of OpenAI, and probably the person most commonly regarded as the face of the industry, declared that the purpose of AI is not to take people's jobs. He also recently called AI CEOs "tone-deaf" for declaring that AI is going to take people's jobs.

In fact, this shift represents more evolution than revolution. Years ago, Altman did seem to broadly agree with the folk consensus that AI's purpose is to make most or all humans obsolete; in 2014 he warned that we could be faced with "a new idle class", and explored the idea of Universal Basic Income as a remedy. In 2021 he wrote that "The price of many kinds of labor…will fall toward zero." But in recent years, Altman has consistently stated that although AI will destroy many occupations, it will create new tasks for humans to do. In 2024 he wrote that "I have no fear that we'll run out of things to do (even if they don't look like 'real jobs' to us today)", and in 2025 he declared that "We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today." He has reiterated that prediction in interviews.

OpenAI's mission statement, meanwhile, continues to define the company's goal as the creation of Artificial General Intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". That "most" does leave some wiggle room. But perhaps more importantly, the company is talking about AGI less and less: its 2026 statement of principles mentions the term only twice, compared with 12 times in the 2018 version. OpenAI also removed a clause about AGI from its agreement with Microsoft, meaning that the term no longer defines its contractual business obligations.
So although Altman has never been quite as doomer-ish as some of his colleagues when it comes to AI and jobs, you can definitely feel the winds shifting. There has always been a contingent of tech leaders who are broadly optimistic about AI and jobs, and they are now speaking up more vociferously. Nvidia's Jensen Huang has consistently predicted that AI will create more jobs than it destroys, and recently he has harshly criticized AI CEOs who go around saying that their technology is a job-killer. Venture capital titan Marc Andreessen, meanwhile, has come out swinging against the AI job loss narrative.

Cynical observers will see this all as just a messaging pivot in response to the AI industry's deteriorating popularity. Back in March I wrote about how the AI industry's sales pitch was basically "Our product's purpose is to put you and your descendants on welfare forever, and it may also wipe out your whole species". That was a bad sales pitch, to put it mildly, and it's not surprising that voters have reacted negatively to this message. Basically every recent poll shows the American public turning very strongly against AI; Pew offers a representative example. In fact, the anti-AI turn seems especially strong among Independents.

This raises the possibility that AI will become the focus of populist rage, and that politicians from both parties will compete to win swing voters over by promising to take action against the industry. This may already be happening.
Bernie Sanders has moved past traditional progressive concerns about data center water use and copyright infringement, and has instead been warning about catastrophic AI risk. Meanwhile, Donald Trump is reportedly considering a policy of having the White House vet AI models before they're released, due to concerns about new models' cyber capabilities:

President Trump, who promoted a hands-off approach to artificial intelligence and gave Silicon Valley free rein to roll out the technology, is considering the introduction of government oversight over new A.I. models, according to U.S. officials and people briefed on the deliberations…The administration is discussing an executive order to create an A.I. working group that would bring together tech executives and government officials to examine potential oversight procedures…Among the potential plans is a formal government review process for new A.I. models…The discussions signal a stark reversal in the Trump administration's approach to A.I…[Trump's] noninterventionist policy began changing last month after the start-up Anthropic announced a new A.I. model called Mythos. Mythos is so powerful at identifying security vulnerabilities in software that it could lead to a cybersecurity "reckoning," said Anthropic[.] [emphasis mine]

Neither Bernie's concern nor Trump's is explicitly about protecting jobs; both are about the risk of misuse. But it's hard not to see the generally souring mood on AI, especially among Independents, as an invitation to populists like Trump and Bernie to make political hay by reining in the industry.

Meanwhile, some politicians and industry figures are starting to talk openly about the possibility of nationalizing the big AI labs. Matteo Wong and Lila Shroff report:

Washington is getting antsy about the power imbalance [between AI companies and the government].
Over the past year, multiple senators have proposed legislation that would order federal agencies to explore "potential nationalization" of AI…In recent weeks, Elon Musk, OpenAI's CEO Sam Altman, and Palantir's CEO Alex Karp have publicly spoken about the possibility of nationalization…The government could regulate AI companies like it does utilities…[S]hould AI models displace large swaths of the labor market, such that a handful of companies run most of the economy, "then some kind of nationalization becomes potentially imperative," Samuel Hammond [of FAI] told us—to distribute wealth and simply ensure the proper functioning of society. Both Anthropic and OpenAI have already suggested possible versions of such redistributive measures…Perhaps the most likely fate for American AI companies is a future of soft nationalization—a world in which the government doesn't fully control AI labs and their models, but instead enacts an escalating series of policies and establishe[s] close partnerships with private companies to shape the technology.

Different figures in the industry want quasi-nationalization to different degrees. Jensen Huang, who has fought hard against export controls, is probably more anti-nationalization, as is Marc Andreessen, who makes his living funding startups (and would thus probably not like to see government ties entrench the market position of incumbent players). But even folks like Altman and Anthropic's Dario Amodei, who might be inclined to accept quasi-nationalization, would certainly like to negotiate favorable terms for that partnership. To that end, it helps to have the government not view your industry as a dangerous job-killer.

So basically, it makes sense for leading figures in the industry to alter the basic sales pitch and reassure anxious humans that they'll still have jobs. In Altman's case, there might also be some element of competitive positioning here.
The loudest voice predicting human obsolescence has certainly been Anthropic founder and CEO Dario Amodei, who has been shouting from the rooftops about a coming job-pocalypse.

To a seasoned observer, Anthropic's perspective here is pretty clear. They basically think AI progress is inevitable, and that AGI is eventually going to put most human beings on the welfare rolls. Thus, they see themselves as sounding the alarm: warning society to beef up its welfare state and its redistributive mechanisms before the inevitable coming of job-annihilating AGI.

If you accept that AI progress is as inevitable as the tides, then this is an eminently reasonable position. But most people probably do not accept this. They probably see AI progress as something that we, human society, choose to do or not to do. And so to them, Dario isn't sounding a warning — he's m…