Import AI 456: RSI and economic growth; radical optionality for AI regulation; and a neural computer
Welcome to Import AI, a newsletter about AI research. Import AI runs on arXiv, cappuccinos, and feedback from readers. If you'd like to support this, please subscribe.

Regulate? Don't regulate. There's a third way: Radical Optionality:
…Governments should invest now in the tools they might need in a future crisis…

Researchers with the Institute for Law & AI have written about "radical optionality", an approach whereby governments give themselves the tools they may need in the future if powerful AI starts to massively disrupt the world. "At its core, radical optionality is about preserving democratic governments' ability to make good decisions about how to govern transformative AI systems as circumstances evolve. In the short term, this means avoiding overregulation while rapidly building the institutions, information channels and legal authorities needed to respond competently to a broad range of scenarios."

The key idea - invest now for an uncertain future: Given the immense stakes of AI development, "governments should be willing to spend an extraordinary amount of money, effort, and political capital on preserving optionality", they write. In other words: it's such a big deal that you should be fine spending a bunch of money now with an uncertain return. "Governments should be wary of counterproductive interventions, but not much concerned with the actual pecuniary cost of any realistic measure that seems likely to have net-positive results".

Specifics: They also recommend several specific interventions in a few categories:

Information-gathering authorities: Transparency requirements, where companies need to publish information about their AI systems. Reporting requirements, where companies are compelled to share certain information with a government agency. Once these are in place, establish an auditing regime so a third party can verify the veracity of what the transparency and reporting rules target.
Whistleblower protections: Ensure that employees at frontier labs can report information about risks.

Information-sharing within and between governments: Ensure that governments can effectively coordinate and facilitate discussions, especially those dealing with sensitive information about the progress of AI. This may be especially important for strengthening and protecting supply chains deemed critical to AI development.

Flexible rules and definitions: Avoid premature regulation, potentially by making conditional "if-then" regulatory commitments, or by setting a high-level target (e.g., mitigating risk) and leaving companies free to define the specifics of how they meet it. This is bound up with the need for flexible definitions - definitions that can evolve over time.

Assessments and evaluations: Develop government and third-party capacity to assess the capabilities and safety properties of AI systems.

Improve security of model weights and algorithmic secrets: Invest more in locking down the weights of neural nets as well as the algorithmic secrets behind some of the best systems. This can be achieved by promulgating voluntary standards for physical security and cybersecurity.

Hiring and talent: A meta-investment that would help with all of the above is investing more in the kind of technical talent needed to effectively pull off any of these interventions. Core to this is increasing the funding of AISI (UK) and CAISI (US) and their counterparts in other countries.

Arguments and counterarguments: The authors go through some of the more obvious counterarguments to these ideas and provide some responses:

Encouraging dramatic regulatory action: The above ideas "aren't weighty substantive authorities that lend themselves to abuse", they claim.
(I might push back on this, noting that a sufficiently motivated government can tend to come up with a far more forceful version of an authority than its original drafters might have conceived.)

Democratic legitimacy: Optimizing for flexibility might mean de-emphasizing some things that relate more to democratic legitimacy, e.g., empowering agencies to waive notice-and-comment periods for some kinds of rulemaking.

Concentration of power and government abuse: The authors are "basically convinced" that there's a significant risk of governments asserting control over the development of AI systems - for this reason, they don't recommend things like massively expanding the scope of emergency authorities such as the Defense Production Act. One way of mitigating this might be to get governments to "use only law-following AI systems".

What's wrong with private governance? Why not just do that: While the authors are supportive of ideas in the "regulatory markets" vein, they also think any governance that relies primarily on a bunch of private sector actors (e.g., independent verification organizations) will still come back to relying on some basic pocket of technical competence within the government.

Why this matters - setting the world up for success: I agree with all the recommendations here and have advocated for many of them in recent years. It seems to me like there are a multitude of things we could be doing to better prepare as a society for the potentially absolutely massive changes to come. "The cost of implementing these policies is modest, relative to the potential benefits. The cost of failing to act, by contrast, is potentially catastrophic," the authors write. I agree.

Read more: Radical Optionality (official paper website).

***

A Schmidhuber Special - neural computers:
…Maybe an operating system is just a passing fad…
Here's a fun paper, Neural Computers, from Meta and KAIST which asks the question: "can a neural network act as a traditional computer? The Neural Computer (NC) is a neural system that unifies computation, memory, and I/O in a learned runtime state."

The paper is interesting for a couple of reasons: 1) it's from Juergen Schmidhuber, who is something of a legend in the AI community and conceptualized many important things early (e.g., generative models, world models, aspects of generative adversarial networks, early thoughts about benchmarking on video games), and 2) the idea is so outrageous and simple that it might just work (albeit requiring a lot more computation and data than today's models have).

The big idea: As one of the authors put it, with today's AI, "a new machine form is starting to emerge". They then ask: "If agents are getting better at real work, world models are getting better at internal simulation, and conventional computers are already rebuilding their substrate for AI, could there be a new runtime that brings execution, rollout, and capability retention into the same learning machine?... my own guess is that a mature [neural computer] points toward a different substrate: something more like a 10T-1000T machine that is sparser, more addressable, and a little more circuit-like".

Two experiments: This is mostly a conceptual paper which does some early prototyping, exploring whether you can use a powerful generative video model (Wan 2.1) and some well-curated training data to create neural computers based on a command-line interface (CLI) and a graphical user interface (GUI). Both approaches work, albeit in a very "Wright brothers before takeoff" sense - just barely gesturing at a much larger future.

CLI: "The NC learns to render and execute basic command-line workflows.
It often stays aligned with the terminal buffer and captures common "physics" of everyday CLI use (e.g., fast scrollback, prompt wrapping, window resizing), though symbolic stability remains limited."

GUI: "We evaluate standard world-model designs across data quality, cursor supervision, action injection, and action encoding, using global fidelity, post-action responsiveness, and cursor-accuracy measurements."

The prototype works: "Our experimental insights indicate that current NCs can already learn to realize elementary runtime primitives…
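To make the CLI setup concrete: the system is framed as an action-conditioned world model - it takes the current screen frame plus an input event and predicts the next frame, rolled out autoregressively with no real OS behind it. Here's a minimal sketch of that runtime loop; note the paper's actual predictor is the Wan 2.1 video model, and everything below (the framebuffer state, the deterministic stand-in "model", the scrollback rule) is purely illustrative:

```python
# Illustrative sketch, NOT the paper's code: a Neural Computer's runtime loop
# as action-conditioned next-frame prediction, rolled out autoregressively.
# A trivial deterministic stand-in replaces the learned video model so the
# loop is runnable; a real NC would sample each frame from the model.

from dataclasses import dataclass, field

ROWS = 4  # toy terminal height

@dataclass
class RuntimeState:
    """The runtime state. In a real NC this is latent and learned;
    here it's just a list of text rows standing in for screen pixels."""
    frame: list = field(default_factory=lambda: [""] * ROWS)

def predict_next_frame(state: RuntimeState, action: str) -> RuntimeState:
    """Stand-in for the model: map (current frame, input event) -> next frame.
    Encodes a crumb of the 'physics' of CLI use the paper mentions:
    scrollback on newline, echoing keystrokes at the prompt."""
    rows = list(state.frame)
    if action == "\n":
        rows = rows[1:] + ["$ "]  # scrollback: oldest line falls off the top
    else:
        rows[-1] += action        # echo the keystroke on the prompt line
    return RuntimeState(frame=rows)

def rollout(actions: str) -> RuntimeState:
    """Autoregressive rollout: each prediction conditions only on the
    previously predicted frame - there is no external OS state."""
    state = RuntimeState(frame=[""] * (ROWS - 1) + ["$ "])
    for a in actions:
        state = predict_next_frame(state, a)
    return state

final = rollout("ls\n")
print(final.frame)  # → ['', '', '$ ls', '$ ']
```

The point of the sketch is the shape of the loop, not the stand-in logic: the "computer" is whatever transition function the model has learned, which is why the paper's failure mode is exactly "symbolic stability remains limited" - a learned predictor can drift where this hard-coded one cannot.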