OpenAI Just Killed Its Flashiest Product. The $600 Billion Reason Why Should Terrify You — Or Make You Rich.
Last Tuesday, OpenAI posted five words on X: “We’re saying goodbye to Sora.” No buildup. No transition plan. Just — gone.

Sora was supposed to be the future of video. The product that would make Hollywood sweat. Disney had committed $1 billion to a partnership around it. A billion dollars. And Disney found out it was dead less than an hour before the rest of us.

Here’s what actually happened: Sora peaked at about one million users and then collapsed to under 500,000. It was burning through $1 million per day in compute costs. Every time someone generated a 10-second clip of themselves flying over Paris, OpenAI was lighting money on fire. Meanwhile, across town, Anthropic’s Claude Code was quietly stealing OpenAI’s lunch with enterprise clients. Real revenue. Real retention. Real ROI.

So Sam Altman made the call. Kill the spectacle. Redirect the GPUs. Win the war that actually matters. And that war? It’s being fought with a number so large it’s almost meaningless: $600 billion.

In this edition:

01 — The $600B GPU War: Why Big Tech is spending more on AI infrastructure than the entire US energy sector (free)
02 — The Sora Autopsy: What OpenAI’s biggest product kill reveals about AI’s real economics (free)
03 — The Three Investment Layers of the AI Infrastructure Boom 🔒
04 — Layer 1: The Power Bottleneck — 3 companies solving AI’s #1 constraint 🔒
05 — Layer 2: The Cooling Crisis — the overlooked $89B market that keeps GPUs alive 🔒
06 — Layer 3: The Picks & Shovels — semiconductor supply chain plays hiding in plain sight 🔒
07 — Entry Zones, Catalysts & Risk Matrix for Each Pick 🔒

Let me put this number in context. Amazon, Microsoft, Google, Meta, and Oracle will collectively spend over $600 billion on infrastructure this year. That’s a 36% increase from 2025. Roughly 75% of it — $450 billion — goes directly into AI infrastructure: GPUs, servers, data centers, networking.
For perspective: Amazon’s capital expenditure alone — $200 billion — is larger than the entire US energy sector’s combined spending on drilling, extraction, refining, and distribution. This is not speculative. This is not “AI hype.” This is cash leaving balance sheets at a pace that makes the dot-com era look quaint.

Goldman Sachs projects total hyperscaler capex from 2025 through 2027 will reach $1.15 trillion. That’s more than double the $477 billion spent from 2022 through 2024.

And here’s the detail that should make you sit up: these companies are spending faster than they can generate cash. For the first time in Big Tech history, aggregate capex now exceeds internal free cash flow. The hyperscalers raised $108 billion in debt in 2025 alone. Morgan Stanley projects the sector will need $1.5 trillion in new debt over the next few years just to fund the buildout.

Google co-founder Larry Page was quoted saying: “I’m willing to go bankrupt rather than lose this race.” He wasn’t joking.

Microsoft has $80 billion in unfulfilled Azure orders that it physically cannot deliver — not because demand is soft, but because it can’t get enough electricity to the data centers fast enough. Every hyperscaler reports the same thing: they are supply-constrained, not demand-constrained.

This is the most important sentence in this entire newsletter: the bottleneck is not demand. The bottleneck is physical infrastructure. Power. Cooling. Chips. Cables. Concrete. Land.

The companies that solve those bottlenecks are the ones that will capture the overflow from a $600 billion annual spending frenzy. Not the ones building chatbots. Not the ones making demo videos. The ones pouring concrete, pulling copper, and keeping servers from melting.

OpenAI didn’t kill Sora because the technology failed. It killed Sora because the economics failed. The math was simple and brutal: Sora generated video at roughly $1 million per day in compute cost. Its user base was cratering.
And every GPU cycle spent rendering a 10-second fantasy clip was a GPU cycle not training GPT-5.4 or powering the enterprise products that actually generate revenue.

The Sora shutdown wasn’t an isolated incident. In March alone, OpenAI quietly shut down or curtailed several other products and features. The company is consolidating around two things: ChatGPT and the API. Everything else is being sacrificed.

This is the template for 2026. The “launch everything and see what sticks” era of AI is over. What’s replacing it is a ruthless capital allocation discipline driven by one question: does this product generate enough revenue to justify the GPU hours it consumes?

The answer, increasingly, splits the AI world into two categories:

Category 1: The Compute Consumers. AI model companies, content generators, video tools. They burn GPU cycles. They need ever-more infrastructure. Their margins are thin or negative.

Category 2: The Compute Enablers. The companies that build, power, cool, and connect the data centers where everything runs. They sell into a market that is supply-constrained and growing at 36% annually. Their pricing power is increasing.

If you’re an investor, the lesson from Sora’s death is not “AI video is dead.” It’s that the real money in AI is not in the applications. It’s in the infrastructure underneath them. Every AI company — whether they build chatbots, generate video, write code, or design drugs — needs the same thing: more compute, more power, more cooling, more bandwidth. The applications come and go. Sora proved that in six months. But the infrastructure demand only compounds.

The question is: which infrastructure companies are best positioned to capture a disproportionate share of this $600 billion annual spend? That’s exactly what I mapped out below.
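The GPU-hour question above is, at bottom, a unit-economics test. Here is a minimal sketch of that break-even math, using only the figures reported in this piece (roughly $1 million per day in compute, a user base falling from about one million to under 500,000); the revenue-per-user figure is a hypothetical placeholder, not a number OpenAI has disclosed:

```python
def daily_margin(users: int, revenue_per_user: float, compute_cost: float) -> float:
    """Daily revenue minus daily compute burn for a single product."""
    return users * revenue_per_user - compute_cost

COMPUTE_COST = 1_000_000  # ~$1M/day in compute, as reported above
REV_PER_USER = 1.00       # hypothetical $1/user/day; not a disclosed figure

# At the ~1M-user peak, a $1/user/day product only breaks even.
print(daily_margin(1_000_000, REV_PER_USER, COMPUTE_COST))  # 0.0

# After the collapse to 500K users, it bleeds half a million dollars a day.
print(daily_margin(500_000, REV_PER_USER, COMPUTE_COST))    # -500000.0
```

By this rule, a product that merely breaks even at its peak and bleeds at half that scale is an obvious candidate for the kill list — which is exactly the discipline the shutdown signals.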
I spent the last two weeks reverse-engineering the capex breakdowns from all five hyperscalers’ earnings calls, cross-referencing them with supply chain data from Omdia, CreditSights, and Goldman Sachs Research. What emerged is a clear three-layer framework for the AI infrastructure investment stack:

Layer 1 — Power: The single biggest constraint. Microsoft can’t fulfill $80B in Azure orders because it can’t get electricity to its data centers. This layer alone represents a...

Layer 2 — Cooling: The overlooked crisis. A single AI GPU rack generates 10x the heat of a traditional server rack. Liquid cooling is no longer optional — it’s...

Layer 3 — Semiconductor Supply Chain: Everyone knows Nvidia. But Nvidia captures roughly 90% of the $180B GPU market. The real opportunity is in the Tier 2 and Tier 3 suppliers that Nvidia itself depends on — the companies making...

The full edition includes:

✓ 9 specific companies across three infrastructure layers — with ticker symbols, current valuations, and revenue exposure to AI capex
✓ Entry zones for each pick based on technical levels and upcoming catalysts
✓ The risk matrix — what could go wrong, and which positions have the widest margin of safety
✓ The “capex reversal” scenario — what happens to these picks if hyperscalers pull back, and which ones survive regardless
✓ The contrarian play — one company that benefits whether AI capex goes up OR down

This edition alone is worth the annual subscription. Not because I say so — because the math does.