Import AI 445: Timing superintelligence; AIs solve frontier math proofs; a new ML research benchmark
Welcome to Import AI, a newsletter about AI research. Import AI runs on arXiv and feedback from readers. If you'd like to support this, please subscribe.

Economist: Don't worry about AI-driven unemployment, because people like paying for the "human touch":
…Even when you have the technology to automate something, you might still pick a human…
Adam Ozimek, chief economist at the Economic Innovation Group, has written a blog post noting that even if AI gets much, much better and becomes capable of doing all the work that people do, there will still be some jobs for humans, because people seem to have a preference for humans over machines in certain domains. "There are many jobs and tasks that easily could have been automated by now - the technology to automate them has long existed - and yet we humans continue to do them," he writes. "The reason is that demand will always exist for certain jobs that offer what I call 'the human touch.'" Some examples here: live music, actors, waiters, travel agents, and many types of sales job. And it seems that as you spend more and more on a given good or experience, you may want more contact with people: "the human touch also appears to be what economists call a 'normal good,' which means the demand for it goes up as income goes up," he writes. Some examples here might include fancy restaurants and other concierge-like experiences.

Why this matters - one path through the AI revolution could be a rise in human-to-human work: My assumption is that 'people like people', and there is a high chance that even if AI automates huge chunks of the current economy there will be a boom in demand for 'human artisans' across a range of new jobs we can't yet imagine, as well as for refinement of existing human professions. There's also a chance that, through a combination of economic growth and progressive policy work from governments, wages for these jobs could go up massively.
Read more: AI and the Economics of the Human Touch (Agglomerations, Substack).

***

Facebook makes a better recommender system, and figures out some recommender scaling laws:
…Kunlun is another nice example of what industrial AI looks like…
Facebook has published details on Kunlun, a recommendation system that is more efficient than previous ones developed by the ad behemoth. Along with this, Facebook has also figured out a predictable 'scaling law' for Kunlun models, making it easier for the company to invest hitherto unprecedented amounts of compute in these models for a more predictable return. This is a big deal because recommendation systems are what companies like Facebook use for advertising, which is a) how they make the vast majority of their money, and b) something that has a tremendous impact on the buying and attention habits of the billions of people that use Facebook and other social platforms.

Recommenders are different to LLMs: We've had scaling laws for LLMs like Claude and ChatGPT for a while, but it's been harder to develop the same scaling laws for recommender models. This is because recommender models work quite differently to LLMs, so building scaling laws here is "an open challenge for systems that jointly model both sequential user behaviors and non-sequential context features".

Recommender models also tend to be a lot less efficient than LLMs: Recommendation systems achieve only 3-15% Model FLOPs Utilization (MFU), compared to 40-60% for LLMs, due to heterogeneous feature spaces that produce small embedding dimensions, irregular tensor shapes, and memory-bound operations.
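For readers unfamiliar with the metric, here's a minimal sketch of how MFU is typically computed - the useful FLOP/s a training run actually performs, divided by the hardware's aggregate peak FLOP/s. Every number in the example (FLOPs per step, step rate, GPU count, per-GPU peak) is a made-up illustration, not a figure from the Kunlun paper.

```python
# Minimal sketch of Model FLOPs Utilization (MFU): useful model FLOP/s
# achieved during training, divided by aggregate peak hardware FLOP/s.
# All numbers below are hypothetical illustrations, not figures from the paper.

def mfu(model_flops_per_step: float, steps_per_second: float,
        num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Achieved useful FLOP/s divided by aggregate peak FLOP/s."""
    achieved = model_flops_per_step * steps_per_second
    peak = num_gpus * peak_flops_per_gpu
    return achieved / peak

# Hypothetical recommender job: 8e14 FLOPs per training step, 2 steps/s,
# 8 GPUs at an assumed ~2e15 dense peak FLOP/s each.
print(f"MFU ~ {mfu(8e14, 2.0, 8, 2e15):.1%}")  # ~10%, in the 3-15% range cited above
```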
Kunlun: The bulk of the paper is a discussion of the design of Kunlun, which is basically a well-optimized recommender system with a correspondingly better MFU. Kunlun contains a Kunlun Transformer Block for context-aware sequence modeling via GDPA-enhanced personalized feed-forward networks and multi-head self-attention, as well as a Kunlun Interaction Block "for bidirectional information exchange through personalized weight generation, hierarchical sequence summarization, and global feature interaction". There are a bunch of other tricks Facebook used to build Kunlun and you can read the paper to learn more. Ultimately, Kunlun improves MFU from 17% to 37% on NVIDIA B200 GPUs.

Why this matters - a scaling law for money: The key insight in the paper is that Kunlun models scale predictably, exhibiting the kind of power-law scaling behavior that language models exhibit. But where LLM scaling laws are typically assessed via a reduction in loss on an underlying dataset, here the metric is normalized entropy (NE). In Facebook's experiments, the company discovered reliable scaling laws both for NE gains as a function of the gigaflops dumped into training the model, and for improvements in NE as a function of the number of layers used. The Kunlun models have been "deployed across major Meta Ads models, delivering a 1.2% improvement in topline metrics". What we're seeing here is the optimization of some of the most societally significant AI systems in the world - ones which direct billions of eyeballs towards a variety of products and online information - colliding with a greater degree of performance predictability; by developing these scaling laws, Meta has made it easier for itself to spend even more compute on making these models even better, because the return on that capital investment is now more predictable.
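As an illustration of what fitting such a compute scaling law can look like, here's a minimal sketch that fits a simple power law, NE ≈ a * C^(-b), to some invented (compute, NE) points. Both the data and the functional form are assumptions for illustration only; the paper's actual fits may use a different parameterization (for example, with an irreducible-loss term).

```python
# Illustrative power-law fit of normalized entropy (NE) against training
# compute. The data points are invented; real scaling-law fits often include
# an irreducible-loss term, which this sketch deliberately omits.
import numpy as np

compute_gflops = np.array([1e6, 1e7, 1e8, 1e9, 1e10])           # hypothetical budgets
normalized_entropy = np.array([0.82, 0.79, 0.765, 0.745, 0.73])  # hypothetical NE values

# Fit log(NE) = log(a) - b * log(C), i.e. a straight line in log-log space.
slope, intercept = np.polyfit(np.log(compute_gflops), np.log(normalized_entropy), 1)
a, b = np.exp(intercept), -slope
print(f"fitted curve: NE ~ {a:.3f} * C^(-{b:.4f})")

# The point of a scaling law: extrapolate the return on a 10x larger budget.
print("predicted NE at 1e11 GFLOPs:", round(a * 1e11 ** -b, 4))
```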
Read more: Kunlun: Establishing Scaling Laws for Massive-Scale Recommendation Systems through Unified Architecture Design (arXiv).

***

Superintelligence could save and extend lives, so we should go for it:
…Pausing or slowing down might make sense at the very end of the exponential, but it's risky…
Nick Bostrom, an academic who introduced many people to the notion of superintelligence and AI risk, has written a paper laying out the idea that if superintelligence can improve human health, then it's worth pursuing even if there's a non-zero chance of it causing the death of the species. "Yudkowsky and Soares maintain that if anyone builds AGI, everyone dies. One could equally maintain that if nobody builds it, everyone dies", Bostrom writes in Optimal Timing for Superintelligence. "If the transition to the era of superintelligence goes well, there is tremendous upside both for saving the lives of currently existing individuals and for safeguarding the long-term survival and flourishing of Earth-originating intelligent life. The choice before us, therefore, is not between a risk-free baseline and a risky AI venture. It is between different risky trajectories, each exposing us to a different set of hazards."

Why we should pursue superintelligence, even with a chance of doom: If you think about all the humans alive today and the different life expectancies they experience - especially those in the developing world - then you're drawn to the view that every moment you waste in deploying superintelligence, you increase human suffering. "When we take both sides of the ledger into account, it becomes clear that our individual life expectancy is higher if superintelligence is developed reasonably soon. Moreover, the life we stand to gain would plausibly be of immensely higher quality than the life we risk forfeiting," Bostrom writes.

Key variables: The key variables here are, of course, the risk of a superintelligence killing us all, and the rate at which safety research can reduce this chance. Under this view, developing superintelligence becomes a favorable thing to do under most circumstances. The speed of progress and maturity of AI safety research may have some impact on the timeline: "When the initial risk is low, the optimal strategy is to launch AGI as soon as possible - unless safety progress is exceptionally rapid, in which case a brief delay of a couple of months may be warranted. As the initial risk increases, optimal wait times become longer. But unless the starting risk is very high and safety progress is sluggish, the preferred d…
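To make the shape of that tradeoff concrete, here's a toy expected-value sketch - emphatically not Bostrom's actual model, just an illustration of the logic described above, in which waiting lowers catastrophe risk while baseline mortality keeps accruing. Every number (mortality rate, initial risks, safety-progress rate) is an assumption chosen for the example.

```python
# Toy illustration (not Bostrom's model) of the timing tradeoff: waiting lets
# safety research lower the risk of catastrophe at launch, but every year of
# delay costs lives that an earlier transition might have saved. All numbers
# below are made up.
import numpy as np

baseline_mortality = 0.008          # assumed annual death rate during the wait
p0_values = [0.02, 0.10, 0.30]      # assumed initial catastrophe risk at launch
safety_decay = 0.10                 # assumed fractional risk reduction per year of delay

delays = np.arange(0, 51)           # candidate delays, in years

for p0 in p0_values:
    risk_at_launch = p0 * np.exp(-safety_decay * delays)
    survive_the_wait = (1 - baseline_mortality) ** delays
    # Expected fraction of people alive today who make it through both the
    # delay and the transition itself.
    expected_survivors = survive_the_wait * (1 - risk_at_launch)
    best = delays[np.argmax(expected_survivors)]
    print(f"initial risk {p0:.0%}: toy-optimal delay ~ {best} years")
```

Under these invented parameters the toy model reproduces the qualitative pattern quoted above: near-zero delay when the initial risk is low, and longer optimal waits as the initial risk rises.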