Your Brain's Learning Rate
TLDR: Your brain has 86 billion neurons. An advanced AI has over a trillion parameters. Both learn the same way: prediction, feedback, and reward. The difference is you can control your brain's "learning rate," and that dial is called curiosity. Science shows it triggers the same dopamine reward circuitry used by AI reinforcement learning. Lose it, and your neural network stops updating. Here's how to crank it back up.

I was watching a conversation between David Brooks and the Yale Jackson School of Global Affairs recently, and one data point stopped me cold. Developmental psychologist Susan Engel at Williams College tracked how many questions children ask per hour.

At age five, the average kid asks 107 questions per hour. They're relentless. They want to know why the sky is blue, why dogs have tails, why grandma's hair is white. Their brains are running at full throttle, pulling in data from every direction.

Then school starts. By first grade, the entire class asks 2.3 questions per hour combined. By fifth grade? 0.48 questions per hour. Less than one question every two hours from a room full of eleven-year-olds.

Engel sat in the back of a science classroom watching kids discover an old-fashioned balance scale. They were experimenting with it, testing weights, genuinely doing science. The teacher shut it down: "Enough of that. I'll give you time to experiment at recess. There's no time for experiments now. We're doing science."

Read that again. No time for experiments... during science class.

Engel's conclusion is brutal: if you lose your curiosity by age 11, you probably don't get it back.

Source: Susan Engel, Williams College, "Children's Need to Know: Curiosity in Schools" (Harvard Educational Review, 2011)

I disagree with Engel on one thing. I think you CAN get it back. But you have to understand what curiosity actually is, neurologically. And that's where it gets interesting.

I spend a lot of time with AI companies.
I've watched frontier models go from party tricks to systems that can reason, code, and hold complex conversations. And the more I learn about how LLMs work, the more I realize: your brain is running the same algorithm.

Consider the parallels. Your brain has roughly 86 billion neurons connected by an estimated 100 trillion synapses. GPT-4 has approximately 1.8 trillion parameters across its mixture-of-experts architecture. Both are massive pattern-recognition networks. Both learn by prediction.

Here's how an LLM trains: it reads a sentence, predicts the next word, checks whether it was right, and adjusts its internal weights. Right answer? Strengthen that pathway. Wrong answer? Weaken it and try again. Billions of repetitions, trillions of adjustments.

Your brain does the same thing. Every experience is a prediction. You reach for a coffee cup and predict its weight. You start a sentence and predict how the other person will react. When reality matches your prediction, your synapses strengthen. When it doesn't, your brain recalibrates. Neuroscientists call this predictive coding, and a 2024 study in Nature Machine Intelligence by Gavin Mischler and colleagues at Columbia University found that as LLMs become more advanced, their internal representations actually become more similar to human brain activity during speech processing.

"Your brain is the original foundation model, pre-trained by evolution, fine-tuned by experience."

But here's the critical difference. An LLM's learning rate is set by engineers. They decide how aggressively the model updates its weights in response to new data. Too high and it's unstable. Too low and it stops learning. In your brain, that learning rate has a name. It's called curiosity. And unlike an LLM, you can adjust it yourself.

In 2014, neuroscientist Matthias Gruber and his team at UC Davis put people in an fMRI scanner and asked them trivia questions.
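To make the learning-rate analogy concrete, here's a deliberately tiny sketch, in plain Python rather than anything resembling a real LLM: a single weight nudged toward a target by gradient descent, where the learning rate decides how hard each prediction error pushes. The function and numbers are illustrative, not from any actual model.

```python
# Illustrative sketch only: one "weight" learning a target value.
# The learning_rate dial controls how aggressively each prediction
# error updates the weight.

def train(learning_rate, steps=100, target=1.0):
    w = 0.0  # the model's initial belief
    for _ in range(steps):
        error = w - target            # prediction vs. reality
        w -= learning_rate * error    # update proportional to the surprise
    return w

# A healthy learning rate converges on reality; a near-zero one
# barely updates -- the "frozen model" state.
print(round(train(0.1), 3))    # ≈ 1.0
print(round(train(0.001), 3))  # ≈ 0.095, still far from the target
```

Too high a rate is just as bad: try `learning_rate=2.5` and the weight overshoots further on every step. That instability-versus-stagnation trade-off is exactly the dial engineers tune.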
Some questions triggered intense curiosity ("How many miles of blood vessels are in the human body?"). Others didn't ("What is the state bird of Delaware?"). What they found, published in the journal Neuron, changed our understanding of how curiosity works.

When participants were highly curious, their ventral tegmental area (VTA) and nucleus accumbens lit up. These are the same brain regions activated by food, sex, and addictive drugs. Curiosity hijacks your reward circuitry. It is not a nice-to-have personality trait. It's a neurochemical event.

But that wasn't even the most interesting finding. During the curiosity state, participants were shown random faces, completely unrelated to the trivia. Later, they remembered those faces significantly better than faces shown during low-curiosity moments. Curiosity didn't just help them learn the answer they wanted. It supercharged their memory for everything happening at that moment.

This is exactly how reinforcement learning works in AI. When an LLM gets a reward signal through RLHF (Reinforcement Learning from Human Feedback), it does more than strengthen the specific output. It also adjusts the surrounding weights. The reward ripples through the network.

"Curiosity is your brain's RLHF. It's the reward signal that tells 86 billion neurons: pay attention, something important is happening, encode everything."

Without that signal, your brain does what a model with frozen weights does: it defaults to cached responses. You stop updating. You become, in AI terms, a frozen model.

And this is about much more than learning faster. In 1996, researchers Gary Swan and Dorit Carmelli at SRI International followed 1,118 older men over five years as part of the Western Collaborative Group Study. They measured curiosity at baseline and then tracked who survived. The result: highly curious people had significantly higher survival rates, even after controlling for age, smoking, cardiovascular disease, and other risk factors.
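The "reward ripples outward" idea can be sketched the same way. Everything below is illustrative (the names are mine, not from any real RL library): a reward signal strengthens the rewarded item most, but a fraction of the update spills over into everything else active at that moment, like the incidental faces in Gruber's experiment.

```python
# Illustrative sketch: a reward update that "ripples" beyond its target.
# The rewarded item gets the full update; co-active items get a fraction.

def reward_update(strengths, target, reward, lr=0.1, spillover=0.5):
    """Return new memory strengths after a reward signal fires."""
    return {
        item: s + lr * reward * (1.0 if item == target else spillover)
        for item, s in strengths.items()
    }

memories = {"trivia answer": 1.0, "random face": 1.0}

high = reward_update(memories, "trivia answer", reward=2.0)  # high curiosity
low = reward_update(memories, "trivia answer", reward=0.5)   # low curiosity

# The incidental "random face" ends up encoded more strongly under the
# stronger reward signal, even though it was never the target.
print(high["random face"] > low["random face"])  # True
```

Set `reward=0` and nothing updates at all, which is the frozen-model failure mode in miniature.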
They replicated the finding in 1,035 older women. A 2025 study published in Scientific Reports confirmed the mechanism: higher trait curiosity was directly associated with greater cognitive reserve, the brain's buffer against age-related decline. Curious brains keep building new connections. Incurious ones atrophy.

When I started Fountain Life, we focused on early detection through full-body MRI, AI-powered diagnostics, and advanced blood work. But the data keeps pointing to something we can't put in a scanner: mindset is a biological variable. Curious people don't merely think differently. Their brains physically maintain themselves better.

At 64, I track my biological age markers obsessively. I'm not going to pretend supplements and sleep don't matter. But I've become convinced that the relentless drive to learn new things is doing as much for my neurons as any peptide in my medicine cabinet.

Source: Comparative analysis based on Mischler et al., Nature Machine Intelligence (2024); Gruber et al., Neuron (2014)

Here's where I disagree with the pessimists. A 2025 study from UC Santa Barbara, led by Madeleine Gross and Jonathan Schooler and published in the journal Mindfulness, showed that curiosity is trainable. They built a smartphone app that gave participants daily "curiosity challenges": listen to a podcast instead of your usual playlist, ask a friend what they learned this week, try a new recipe.

After just three weeks, users showed significant increases in trait-level curiosity across three dimensions: epistemic curiosity (desire to learn), perceptual curiosity (interest in new sensory experiences), and mindful curiosity (deeper awareness of the world). Curiosity wasn't fixed. It was a muscle they hadn't been using.

Based on the research and over a decade of running Abundance360, here are five concrete strategies:

1. Create information gaps on purpose.
Carnegie Mellon psychologist George Loewenstein identified this mechanism in 1994: curiosity fires when you know enough to realize what you DON'T know, but not enough to close the gap. Before any meeting, read one article about the topic an…