Your AI Use Is Breaking My Brain
A few years ago, while I was covering the rise of AI slop on Facebook, I asked my friends and family whether AI spam was being fed into their timelines and, if so, to send me examples. A handful of them responded with obviously AI-generated science fiction scenescapes, shrimp Jesus, and forlorn, starving children begging for sympathy. But a few of my friends sent me images that they thought were AI but were not. Their mental guard was up to the point where they looked at human-made art and photos and thought it safer to dismiss them as AI than to be fooled by them.

To browse the internet today, to consume any sort of content at all, is to be bombarded with AI of all sorts. People think things that are fake are real, and things that are real are fake. Much has been written about “AI psychosis,” the nonspecific, nonscientific diagnosis given to people who have lost themselves to AI. Less has been said about the cognitive load that other people’s AI use imposes on the rest of us, and the insidious nature of having to navigate an internet, and a world, where lazy AI has infiltrated everything. Our brains are now performing untold numbers of calculations per day: Is this AI? Do I care if it’s AI? Why does this sound or look or read so weird? Does this person just write like this? Is this a person at all?

I see AI content where I’m conditioned to expect and ignore it: in Google’s “AI Overviews,” which famously told us to put glue on pizza; in engagement-bait LinkedIn posts; and throughout our Facebook and Instagram feeds. But increasingly I have the feeling that it’s everywhere, coming from all directions, completely unavoidable. It’s not exactly that I have a revulsion to AI-assisted content or don’t want to get fooled by it. It’s that my brain has become the AI police, because everything feels incredibly uncanny.
I will be going about my day reading, watching, or listening to something when, suddenly, I notice that something is wildly off. Quite simply, I feel like I’m going nuts.

An example: Last week, in a desperate attempt to avoid yet another take on the White House Correspondents Dinner shooting, I was listening to an episode of Everyone’s Talkin’ Money, a podcast about taxes (yikes) that I’ve been listening to off and on for years. The show has a human host named Shari Rash and hundreds of episodes. Rash started reading the intro script: “The shift I want you to make today—and this is the shift that changes everything—is starting to see your tax return as information—not a bill, not a badge of shame, but information.” The script went on and on like this, AI writing trope after AI writing trope. My brain shut down, stopped paying attention to the script, and started wondering: Was Rash using AI just for the intro? What about the research? Did she edit the script at all? I turned the podcast off.

Later that day, I was scrolling the Orioles Hangout forums, a small community of diehards obsessed with the Baltimore Orioles that I have been lurking on for decades. Until recently, it had been one of the few places on the internet I could safely assume was not full of AI. Now it is. The site’s administrator has started using AI to analyze player performance and to help him write some of his posts. To his credit, he explains how he’s using AI and prefaces these posts by noting they are AI-assisted analysis. Some of them are interesting. But now, most days I browse the forums, I see arguments between posters who have been there for years that seem overly generic or don’t quite make sense. One recent post arguing about the timetable for an injured player’s return suggested a ludicrously long recovery.
Another poster pointed this out: “You said 10-18 months and I said it won’t take that long for a position player.” The original poster responded: “You’re right I did. The 10-18 months was an AI generated answer … consider it a small cautionary tale about trusting AI and another on the benefits of seeking out actual medical research on questions like this.” Every day I now scroll the forum and see people noting that they plugged something into ChatGPT or Gemini and have copy-pasted the answers for other people to see. In this 30-year-old community of human beings discussing sports, AI is unavoidable.

It is, of course, not just me. Friends send me screenshots of texts they’ve gotten from people they’ve started dating, wondering if they’re using ChatGPT to flirt. I’ve gotten obviously AI-generated apologies and excuses from people trying to bail on a social engagement. I’ve been to weddings where the speeches felt—and were—partially AI-generated.

A recent Pew poll showed that people believe it is important to be able to tell whether an image, video, or piece of writing was AI-generated, AI-assisted, or made by a human, and that a majority of people do not believe they can tell the difference between AI-generated works and human-made ones. Studies have repeatedly shown that humans judge AI-generated art and writing more harshly than human works, and a study published in the Journal of Experimental Psychology found that the bias people hold against writing they know or perceive to be AI-generated is “stubbornly difficult to mitigate” and “remarkably persistent, holding across the time period of our study; across different evaluation metrics, contexts, and different types of written content.” Put simply, it is not just me who hates AI writing or finds it annoying. Even when AI writing is “fine,” it very often feels bland, weird, formulaic.
The writer Eve Fairbanks wrote a thread the other day that I thought more or less nailed it: “The tell for AI isn’t rhythm, wording, or fact errors. It’s that problems with *all these elements* exist equally & at once.”