Import AI 441: My agents are working. Are yours?
Welcome to Import AI, a newsletter about AI research. Import AI runs on arXiv and feedback from readers. If you'd like to support this, please subscribe.

Import A-Idea
An occasional essay series: My agents are working. Are yours?

As I walked into the hills at dawn I knew that there was a synthetic mind working on my behalf. Multiple minds, in fact. Because before I'd started my hike I had sat in a coffee shop and set a bunch of research agents to work. And now while I hiked I knew that machines were reading literally thousands of research papers on my behalf and diligently compiling data, cross-referencing it, double-checking their work, and assembling analytic reports.

What an unsteady truce we have with the night, I thought, as I looked at the stars and the dark and the extremely faint glow that told me the sun would arrive soon. And many miles away, the machines continued to work for me, while the earth turned and the heavens moved.

Later, feet aching and belly full of a foil-wrapped cheese sandwich, I got back to cell reception and accessed the reports. A breakdown of scores and trendlines for the arrival of machine intelligence. Charts on solar panel prices over time. Analysis of the forces that pushed for and against seatbelts being installed in cars. I stared at all this and knew that if I had done this myself it would've taken me perhaps a week of sustained work for each report. I am well calibrated about how much work this is, because besides working at Anthropic my weekly "hobby" is reading and summarizing and analyzing research papers - exactly the kind of work that these agents had done for me. But they'd read more papers than I could read, and done a better job of holding them all in their head concurrently, and they had generated insights that I might have struggled with. And they had done it so, so quickly, never tiring.
I imagined them like special operations ghosts who hadn't had a job in a while, bouncing up and down on their disembodied feet in the ethereal world, waiting to get the API call and go out on a mission. These agents that work for me are multiplying me significantly. And this is the dumbest they'll ever be.

This palpable sense of potential work - of having a literal army of hyper-intelligent loyal colleagues at my command - gnaws at me. It's common now for me to feel like I'm being lazy when I'm with my family. Not because I feel as though I should be working, but rather that I feel guilty that I haven't tasked some AI system to do work for me while I play with Magna-Tiles with my toddler.

At my company, people are going through the same thing - figuring out how to scale themselves with this, how to manage a fleet of minds. And to do so before the next AI systems arrive, which will be more capable and more independent still. All of us watch the METR time horizon graph and see in it the same massive future that we saw years ago with the AI & Compute graph, or before that in the ImageNet 2012 result when those numbers began their above-trend climb, courtesy of a few bold Canadians.

I sleep in the back of an Uber, going down to give a talk at Stanford. Before I get in the car I set my agents to work, so while I sleep, they work. And when we get to the campus I stop the car early so I can walk and look at the eucalyptus trees - a massive and dangerous invasive species which irrevocably changed the forest ecology of California. And as I walk through these great organic machines I look at my phone and study the analysis my agents did while I slept.

The next day, I sit in a library with two laptops open. On one, I make notes for this essay.
On the other, I ask Claude Cowork to do a task I've been asking Claude to do for several years - scrape my newsletter archives at jack-clark.net and help me implement a local vector search system, so I can more easily access my now vast archive of almost a decade of writing. And while I write this essay, Claude does it. I watch it occasionally as it chains together things that it could do as discrete skills last year, but wasn't able to do together. This is a task I've tried to get Claude to help me with for years, but every time I've run into some friction or "ugh-factor" that meant I put it down and spent my time elsewhere. But this time, in the space of under an hour, it does it all. Maps and scrapes my site. Downloads all the software. Creates embeddings. Implements a vector search system. Builds me a nice GUI I can run on my own machine. And then I am staring at a new interface to my own brain, built for me by my agent, while I write this essay and try to capture the weirdness of what is happening.

My agents are working for me. Every day, I am trying to come up with more ways for them to work for me. Next, I will likely build some lieutenant agents to task out work while I sleep, ensuring I waste no time. And pretty soon, in the pace of a normal workday, I will be surrounded by digital djinn, working increasingly of their own free will, guided by some ever higher-level impression of my personality and goals, working on my behalf for my ends and theirs.

The implications of all of this for the world - for life as people, for inequality between people, for what the sudden multiplication of everyone's effective labor does for the economy - are vast. And so I plan out my pre-dawn hikes, walking in the same ink-black our ancestors have done, thinking about the gods which now fill the air as fog, billowing and flowing around me and bending the world in turn.
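For readers curious what the scrape-embed-index-search pipeline looks like in miniature, here is a toy sketch. To be clear, this is not Claude's actual implementation: it stands in a hashed bag-of-words vector for a learned embedding model, keeps everything in memory, and all the names (`VectorIndex`, `embed`) are invented for illustration.

```python
import math
import re
import zlib

DIM = 256  # tiny fixed dimension; a real system would use a learned embedding model


def embed(text: str, dim: int = DIM) -> list[float]:
    """Toy hashed bag-of-words 'embedding', L2-normalized.

    Each token is hashed (crc32 for run-to-run determinism) into one of
    `dim` buckets; the bucket counts form the vector.
    """
    vec = [0.0] * dim
    for token in re.findall(r"[a-z0-9]+", text.lower()):
        vec[zlib.crc32(token.encode()) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity; a plain dot product since inputs are unit-length."""
    return sum(x * y for x, y in zip(a, b))


class VectorIndex:
    """In-memory index over (doc_id, embedding) pairs, searched brute-force."""

    def __init__(self) -> None:
        self.docs: list[tuple[str, list[float]]] = []

    def add(self, doc_id: str, text: str) -> None:
        self.docs.append((doc_id, embed(text)))

    def search(self, query: str, k: int = 3) -> list[tuple[str, float]]:
        q = embed(query)
        scored = [(doc_id, cosine(q, vec)) for doc_id, vec in self.docs]
        return sorted(scored, key=lambda t: t[1], reverse=True)[:k]
```

The real version would swap `embed` for calls to an embedding API and persist the vectors to disk, but the shape of the system - vectors in, nearest-neighbors out - is the same.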
***

Anti-AI rebels make a tool to poison AI systems:
…Poison Fountain is how to take the fight to the machines…
Anti-AI activists have built a useful technical weapon with which to corrupt AI systems - Poison Fountain, a service that feeds junk data to crawlers hoovering up data for AI training.

How it works: Poison Fountain appears to generate correct-seeming but subtly incorrect blobs of text. It's unclear exactly how much poisoned training data there is, but you can refresh a URL to see a seemingly limitless amount of garbage.

Motivation: "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species. In response to this threat we want to inflict damage on machine intelligence systems," the authors write. "Small quantities of poisoned training data can significantly damage a language model. The URLs listed above provide a practically endless stream of poisoned training data. Assist the war effort by caching and retransmitting this poisoned training data. Assist the war effort by feeding this poisoned training data to web crawlers."

Why this matters - the internet will become a predator-prey ecology: The rise of AI, and increasingly of AI agents, means that the internet is going to become an ecology full of a larger range of lifeforms than before - scrapers, humans, AI agents, and so on. Things like Poison Fountain represent how people might try to tip the balance in this precarious ecology, seeking to inject things into this environment which make it more hospitable for some types of life and less hospitable for others.

Read more: Poison Fountain (RNSAFFN).

***

If we want good outcomes from AI, think about the institutions we need to direct intelligence:
…Nanotechnology pioneer reframes AI away from singular systems to an ecology…
Eric Drexler, one of the godfathers of nanotechnology, has spent the past decades thinking about the arrival of superintelligence.
One of his most useful contributions was intuiting, before ChatGPT, that humanity's first contact with truly powerful AI wouldn't be some inscrutable independent agent, but rather a bunch of AI services that start to get really good and interact in a bunch of ways - you can check out his 2018 talk on "Reframing Superintelligence" to learn more. Now, he has published a short paper, "Framework for a Hypercapable World", on how to get good outcomes for humanity from a world replete with many useful AI services. Don't think of AI as a singular entit…