I swear the UFO is coming any minute
This is the quarterly links 'n' updates post, a selection of things I've been reading and doing for the past few months.

First up, a series of unfortunate events in science. When Prophecy Fails is supposed to be a classic case study of cognitive dissonance: a UFO cult predicts an apocalypse, and when the world doesn't end, they double down and start proselytizing even harder: "I swear the UFO is coming any minute!" A new paper finds a different story in the archives of the lead author, Leon Festinger. Up to half of the attendees at cult meetings may have been undercover researchers. One of them became a leader in the cult and encouraged other members to make statements that would look good in the book. After the failed prediction, rather than doubling down, some of the cultists walked back their statements or left altogether. Between this, the impossible numbers in the original laboratory study of cognitive dissonance, and a recent failure to replicate a basic dissonance effect, things aren't looking great for the phenomenon.1 But that only makes me believe in it harder!

Another classic sadly struck from the canon of behavioral/brain sciences: the neurologist Oliver Sacks appears to have greatly embellished or even invented his case studies. In a letter to his brother, Sacks described his blockbuster The Man Who Mistook His Wife for a Hat as a book of "fairy tales [...] half-report, half-imagined, half-science, half-fable". This is exactly how the Stanford Prison Experiment and the Rosenhan experiment got debunked: someone started rooting around in the archives and found a bunch of damning notes. I'm confused: back in the day, why was everybody meticulously documenting their research malfeasance?

If you ever took PSY 101, you've probably heard of this study from 1974. You show people a video of a car crash, and then you ask them to estimate how fast the cars were going, and their answer depends on what verb you use.
For example, if you ask "How fast were the cars going when they smashed into each other?" people give higher speed estimates than if you ask, "How fast were the cars going when they hit each other?" (Emphasis mine.) This study has been cited nearly 4,000 times, and its first author became a much sought-after expert witness who testifies about the faultiness of memory.

A blogger named Croissanthology re-ran the study with nearly 10x as many participants (446 vs. 45 in the original). The effect did not replicate. No replication is perfect, but no original study is either. And remember, this kind of effect is supposed to be so robust and generalizable that we can deploy it in court. I think the underlying point of this research is still correct: memory is reconstructed, not simply recalled, so what we remember is not exactly what we saw. But our memories are not so fragile that a single word can overwrite them. Otherwise, if you ever got pulled over for speeding, you could just be like, "Officer, how fast was I going when my car crawled past you?"

In one study from 1995, physicians who were shown multiple treatment options were more likely to recommend no treatment at all. The researchers thought this was a "choice overload" effect, like "ahhh, there's too many choices, so I'll just choose nothing at all". In contrast, a new study from 2025 found that when physicians were shown multiple treatment options, they were somewhat more likely to recommend a treatment. I think "choice overload" is like many effects we discover in psychology: can it happen? Yes. Can the opposite also happen? Also yes. When does it go one way, and when does it go the other? Ahhh, you're showing me too many options, I don't know.

Okay, enough dumping on other people's research. It's my turn in the hot seat. In 2022, my colleague Jason Dana and I published a paper showing that people don't know how public opinion has changed.
A new paper by Irina Vartanova, Kimmo Eriksson, and Pontus Strimling reanalyzes our data and finds that, actually, people are great at knowing how public opinion has changed. What gives?

We come to different conclusions because we ask different questions. Jason and I ask, "When people estimate change, how far off are they from the right answer?" Vartanova et al. ask, "Are people's estimates correlated with the right answer?" These approaches seem like they should give you the same results, but they don't, and I'll show you why.

Imagine you ask people to estimate the size of a house, a dog, and a stapler. Vartanova's correlation approach would say: "People know that a house is bigger than a dog, and that a dog is bigger than a stapler. Therefore, people are good at estimating the sizes of things." Our approach would say: "People think a house is three miles long, a dog is two inches, and a stapler is 1.5 centimeters. Therefore, people are not good at estimating the sizes of things."

I think our approach is the right one, for two reasons. First, ours is more useful. As the name implies, a correlation can only tell you about the relationships between things. So it can't tell you whether people are good at estimating the size of a house; it can only tell you whether people think houses are bigger than dogs. Second, I think our approach is much closer to the way people actually make these judgments in their lives. If I asked you to estimate the size of a house, you wouldn't spontaneously be like, "Well, it's bigger than a dog." You'd just eyeball it. I think people do the same thing with public opinion: they eyeball it based on headlines they see, conversations they have, and vibes they remember.
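You can see the house/dog/stapler gap between the two approaches with toy numbers (all made up for illustration): estimates that merely preserve the ordering of the true sizes correlate almost perfectly with the truth, even while being wildly wrong in absolute terms.

```python
# Toy numbers (made up) for the house/dog/stapler example:
# the estimates get the ordering right, so they correlate almost
# perfectly with the truth, yet each one is wildly off in absolute terms.
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

truth     = [10.0, 1.0, 0.15]      # plausible real sizes in meters
estimates = [4828.0, 0.05, 0.015]  # "three miles", "two inches", "1.5 cm"

print(pearson_r(truth, estimates))    # nearly 1: the ordering is right
print(abs(estimates[0] - truth[0]))   # but the house is off by ~4.8 km
```

The correlation approach looks at the first number and says people are great at this; the error approach looks at the second and says they're terrible.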
If I asked you, "How have attitudes toward gun control changed?" you wouldn't be like, "Well, they've changed more than attitudes toward gender equality."2 While these reanalyses don't shift my opinion, I'm glad people are looking into shifts in opinions at all, and that they found our data interesting enough to dig into.

THE LOOP is an online magazine produced by my friends Slime Mold Time Mold. The newest issue includes:

- a study showing that people maybe like orange juice more when you add potassium to it
- a pseudonymous piece by me
- scientific skepticism of the effectiveness of the Squatty Potty

This issue of THE LOOP was assembled at Inkhaven, a blogging residency that is currently open for applications. I visited the first round of this program and was very impressed.

Also at Inkhaven, I interviewed the pseudonymous blogger Gwern about his writing process. Gwern is kind of hard to explain. He's famous on some parts of the internet for predicting the "scaling hypothesis": the fact that progress in AI would come from dumping way more data into the models. But he also writes poetry, does self-experiments, and sustains himself on $12,000 a year. He reads 10 hours a day, every day, and then occasionally writes for 30 minutes. Here's what he said when I was like, "Very few people do experiments and post them on the internet. Why do you do it?"

"I did it just because it seemed obviously correct and because… Yeah."

I mean, it does seem obviously correct. For more on what I learned by interviewing a bunch of bloggers, see I Know Your Secret.

I really like this article by the artist known as fnnch: How to Make a Living as an Artist. It's super practical and clear-headed writing on a subject that is usually more stressed about than thought about. Here's a challenge: which of these seven images became successful, allowing fnnch to do art full time? I'll give the answer at the bottom of the post.
Anyone who grew up in the pre-internet days probably heard the myth that "you swallow eight spiders every year in your sleep", and back then, we just had to believe whatever we heard. Post-internet, anyone can quickly discover that this "fact" was actually a deliberate lie spread by a journalist named Lisa Birgit Holst. Holst included the "eight spiders" myth in a 1993 article in a magazine called PC Insider, using it as an example of exactly the kind of…