Against "Brain Damage"
I increasingly find people asking me "does AI damage your brain?" It's a revealing question. Not because AI causes literal brain damage (it doesn't) but because the question itself shows how deeply we fear what AI might do to our ability to think. So, in this post, I want to discuss ways of using AI to help, rather than hurt, your mind.

But why the obsession over AI damaging our brains? Part of this is due to misinterpretation of a much-publicized paper out of the MIT Media Lab (with authors from other institutions as well), titled "Your Brain on ChatGPT." The actual study is much less dramatic than the press coverage. It involved a small group of college students who were assigned to write essays alone, with Google, or with ChatGPT (and no other tools). The students who used ChatGPT were less engaged and remembered less about their essays than the group without AI. Four months later, nine of the ChatGPT users were asked to write the essay again without ChatGPT. They performed worse, and showed less EEG activity while writing, than the students who had not used AI initially (and who were now asked to use it). There was, of course, no brain damage.

Yet the more dramatic interpretation has captured our imagination because we have always feared that new technologies would ruin our ability to think: Plato thought writing would undermine our wisdom, and when cellphones came out, some people worried that not having to remember telephone numbers would make us dumber.

But that doesn't mean we shouldn't worry about how AI impacts our thinking. After all, a key purpose of technology is to let us outsource work to machines. That includes intellectual work, like letting calculators do math or our cellphones record our phone numbers. And when we outsource our thinking, we really do lose something; we can't actually remember phone numbers as well, for example. Given that AI is such a general-purpose intellectual technology, we can outsource a lot of our thinking to it. So how do we use AI to help, rather than hurt, us?

The least surprising place where AI use can clearly hurt your mental growth is when you are trying to learn or synthesize new knowledge. If you outsource your thinking to the AI instead of doing the work yourself, you will miss the opportunity to learn. We have evidence to back up this intuition: my colleagues at Penn conducted an experiment at a high school in Turkey where some students were given access to GPT-4 to help with homework. When they were told to use ChatGPT without guidance or special prompting, they took a shortcut and simply got answers. So even though students thought they learned a lot from ChatGPT's help, they actually learned less, scoring 17% worse on their final exam than students who didn't use ChatGPT.

What makes this particularly insidious is that the harm happens even when students have good intentions. The AI is trained to be helpful and to answer questions for you. Like the students, you may just want guidance on how to approach your homework, but the AI will often just give you the answer instead. As the MIT Media Lab study showed, this short-circuits the (sometimes unpleasant) mental effort that creates learning. The problem is not just cheating, though AI certainly makes that easier. The problem is that even honest attempts to use AI for help can backfire, because the default mode of AI is to do the work for you, not with you.

Does that mean that AI always hurts learning? Not at all!
While it is still early, we have increasing evidence that, when used with teacher guidance and good prompting based on sound pedagogical principles, AI can greatly improve learning outcomes. For example, a randomized, controlled World Bank study found that using a GPT-4 tutor with teacher guidance in a six-week after-school program in Nigeria had "more than twice the effect of some of the most effective interventions in education" at very low cost. While no study is perfect (in this case, the control was no intervention at all, so it is impossible to fully isolate the effects of AI, though the authors do try), it joins a growing number of similar findings. A Harvard experiment in a large physics class found that a well-prompted AI tutor outperformed active classes in learning outcomes; a study of a massive programming class at Stanford found that use of ChatGPT led to higher exam grades; a Malaysian study found that AI used in conjunction with teacher guidance and solid pedagogy led to more learning; and even the experiment in Turkey that I mentioned earlier found that a better tutor prompt eliminated the drop in test scores from plain ChatGPT use.

Ultimately, it is how you use AI, rather than whether you use it at all, that determines whether it helps or hurts your brain when learning. Moving from asking the AI to do your homework to having it act as a tutor is a useful step. Unfortunately, the default version of most AI models wants to give you the answer rather than tutor you on a topic, so you might want to use a specialized prompt. While no one has developed the perfect tutor prompt, we have one that has been used in some education studies and which may be useful to you; you can find more in the Wharton Generative AI Lab prompt library. Feel free to modify it (it is licensed under Creative Commons). If you are a parent, you can also act as the tutor yourself, prompting the AI: "explain the answer to this question in a way I can teach my child, who is in X grade."
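If you want to build the tutor approach into your own tools rather than paste a prompt into a chat window, the same idea works at the API level. Here is a minimal sketch using the OpenAI Python SDK; the system prompt below is a simplified stand-in I wrote for illustration, not the tested prompt from the Wharton library, and the model name is just an example:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Simplified tutor-style system prompt (illustrative only; see the Wharton
# Generative AI Lab prompt library for tested versions).
TUTOR_SYSTEM_PROMPT = (
    "You are a patient tutor. Never give the final answer directly. "
    "Ask what the student has tried, offer one hint at a time, and check "
    "understanding with a follow-up question before moving on."
)

def tutor_reply(student_message: str, model: str = "gpt-4o") -> str:
    """Send one student message through the tutor persona and return the reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("How do I solve 3x + 5 = 20?"))
```

The point is not the specific wording but the structure: the system prompt commits the model to hints and questions before the student says anything, counteracting its default urge to hand over the answer.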
None of these approaches are perfect, and the challenges AI poses for education are very real, but there is reason to hope that education will adjust to AI in ways that help, not hurt, our ability to think. That will involve instructor guidance, well-built prompts, and careful choices about when to use AI and when to avoid it.

Just like in education, AI can help or hurt your creativity, depending on how you use it. On many measures of creativity, AI beats most humans. To be clear, there is no single definition of creativity, but researchers have developed a number of flawed tests that are widely used to measure people's ability to come up with diverse and meaningful ideas. The fact that these tests were flawed wasn't a big deal until, suddenly, AIs were able to pass all of them. The old GPT-4 beat 91% of humans on a variation of the Alternative Uses Test for creativity and exceeds 99% of people on the Torrance Tests of Creative Thinking. And we know these ideas are not just theoretically interesting. My colleagues at Wharton staged an idea generation contest, pitting ChatGPT-4 against the students in a popular innovation class that has historically led to many startups. Human judges rating the ideas found that ChatGPT-4 generated more, cheaper, and better ideas than the students. Purchase intent from these outside judges was higher for the AI-generated ideas as well.

And yet, anyone who has used AI for idea generation will notice something these numbers don't capture. AI tends to act like a single creative person with predictable patterns. You'll see the same themes over and over: ideas involving VR, blockchain, the environment, and (of course) AI itself. This is a problem because in idea generation you actually want a diverse set of ideas to pick from, not variations on a theme. Thus, there is a paradox: while AI is more creative than most individuals, it lacks the diversity that comes from multiple perspectives. Yet studies also show that people often generate better ideas when using AI than when working alone, and sometimes AI alone even outperforms humans working with AI. But, without caution, those ideas look very similar to each other once you see enough of them.

Part of this can be solved with better prompting. In a paper I worked on with Lennart Meincke and Christian Terwiesch, we found that better prompting can generate much more diverse ideas, if not quite as good as a group of humans.
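To make that concrete, here is one way a diversity-seeking prompt can look in practice. This is a simplified illustration of the general approach, not the exact protocol from our paper; the product domain, model name, and temperature are all assumptions for the sketch:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A diversity-seeking prompt: it bans the model's default themes and forces
# each idea to differ from the ones before it along stated dimensions.
DIVERSITY_PROMPT = """Generate 10 ideas for a new physical product for college students.
Rules:
- Each idea must target a different customer need and use a different mechanism.
- No two ideas may share a core technology; avoid the default themes of
  VR, blockchain, and generic AI apps.
- Before each idea, state in one line how it differs from every previous idea.
"""

response = client.chat.completions.create(
    model="gpt-4o",    # illustrative model choice
    temperature=1.0,   # higher temperature nudges sampling toward more variety
    messages=[{"role": "user", "content": DIVERSITY_PROMPT}],
)
print(response.choices[0].message.content)
```

The rule requiring each idea to explain how it differs from the previous ones does most of the work: it pushes the model off the well-worn themes that a single unconstrained request tends to produce.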