WE TELL OURSELVES STORIES about the future of AI. “Stories” is the word. Spare me the Kurzweil curves. We have before us something wildly complex and wildly unprecedented. The terms “chaotic” and “emergent,” in their technical senses, are not out of place. For all we know, we’re the well-fed turkey the day before Thanksgiving. We’re not dealing in estimates or projections. We’re dealing in stories.
Still, some of the stories are more informed than others. And Sam Hammond, an economist at the Foundation for American Innovation, is a very informed man. In his telling, we’re likely heading toward techno-feudalism. The logic is this: AI will make individuals and small groups more capable and powerful; central governments will not keep pace with the rate of change; and so most of us will wind up living like demigods in largely autonomous city states. Sam offers up a bonkers timeline on his Substack, Second Best. Here he is on what things might look like around 2045:
The World Wide Web is a wild west of deepfakes and intelligent malware, reminiscent of the early days when one mis-click would unleash a flood of popups and .exe downloads. This has forced the development of new protocols, certificate authorities, and access lists that use AI to monitor network traffic for security threats and deny routing to unvetted users and algorithms. Telecom providers contract directly with neighborhoods and private cities, using geofencing and network firewalls to strictly control the traffic in and out of each local area network. . . .
There are now individuals as powerful as today’s large corporation, and large corporations as powerful as today’s nation-states. Many city governments thus abandon their historic charters and reincorporate as Singapore-esque company towns. The corporate structure provides a means for cities to pool investors’ capital and finance public goods through land rents, the most important of which is security. AVs entering city limits must pass through checkpoints that automatically scan for contraband and log their passenger’s identity; waste water is continuously monitored for genetically engineered pathogens; and EM pulse guns scan the airspace for unauthorized drone swarms. Unless you’re rich enough to afford private security, few venture beyond city limits except for travel between secure zones. The agglomeration externalities to AI are simply too great, as rural holdouts run the risk of being ransacked by roving militias or the agents of the synthetic drug cartels.
It’s an increasingly post-scarcity world in everything except land and capital. Yet between fusion, solar and advanced geothermal, energy is not only cheap and abundant but also locally generated. Paired with robotic labor, this enables a radical re-localization of supply chains, putting globalization in reverse.
In this story, the last three hundred years—the march toward large liberal democratic nations, the administrative state, mass culture, and a flat, legible world—were a historical aberration. In this story, it’s dangerous beyond the city walls, governing ideologies abound, information is siloed, and the outside world is mysterious. “Globalization in reverse”—in every way.
This world sounds pretty scary. There are the drone swarms to think about. In general, fragmentation comes with huge drawbacks. For good reasons, we equate division with conflict, isolation with darkness.
I have to say, though, that it sounds like a very interesting world, too. In this space, I recently stuck up for the vibrancy of our cultural production. (Compare 2024 Shōgun with 1980 Shōgun. We’re fine.) But if you want a full-blown renaissance, some fragmentation might be a healthy thing.
Genetic drift is higher in small populations. The physicist Freeman Dyson once asked a provocative question: what if this dynamic drives cultural evolution? Normally, we think of big cities as the great incubators of ideas. But for bold leaps of imagination, Dyson proposed, a city of ten million people might be no match for a thousand isolated villages of ten thousand people each. One lucky village might, in its idiosyncrasy, hit on stunning cultural advances. Think of Classical Athens or the Republic of Florence. By modern standards, these were small populations, divided into even smaller social classes. Yet each produced a “starburst of geniuses” that “changed our ways of thinking irreversibly.”
Perhaps some cyberpunk city state will give us the next cultural milestone on a par with monotheistic religion or the Scientific Revolution. Dyson will smile down on it.
Sam was my guest on the latest Tech Policy Podcast. The conversation was organized around the question of AI and state capacity. Can government institutions cope with the coming technological disruption of AI? (Clearly not, if Sam’s bonkers timeline pans out.) We also ranged into a bunch of other topics. It got mind-bending. This is a choice episode. Please check it out.
CITY JOURNAL ASKED ME for my thoughts on the Google Gemini thing—recap: Google built an AI image generator that played out the paperclip maximizer scenario, but for diversity—and I obliged. Thought number one: releasing a race-obsessed chatbot is bad! That part of the article is not so original. But I went on to find more positive stuff in the wreckage than most. For instance, I think this makes it less likely that Congress passes a demented law that stifles AI competition. Some on the left point to the Gemini episode as grounds for precisely such regulation. See, they exclaim, AI can go haywire. Not wrong. But now when someone says “AI safety,” many conservatives will (correctly) hear, “Make politically correct AI the only option.”
Inspired by Dyson (and Sam!), I closed the piece with some speculation about how AI could prompt cultural innovation:
While the “anti-white chatbot” affair will pass, fights over AI outputs will continue. An AI’s responses can be improved, but they can’t be “fixed.” When the topic is sensitive or value-laden, an AI’s answers will always disappoint or offend someone. Consequently, no company will succeed at building the one AI chatbot. These products will continue to proliferate, with distinct ones catering to specific needs, tastes, and worldviews. This raises the possibility of balkanization, the demise of shared reality, and rising civil strife—a danger we should take seriously.
Here again, though, there is a positive angle worth considering. AI could, somewhat paradoxically, serve both tradition and progress. It could enable parents to teach children at home, in keeping with their social or religious values. (Picture an AI tutor devoted to math, chemistry, and Augustine, Aquinas, and Dante.) This trend could allow distinct communities to prosper—a thousand semi-isolated (but AI-empowered) ideological villages blooming. Over time, heterodox thinking could grow. Which could lead to something truly interesting: an explosion of ideas that are genuinely, startlingly new.
As Joan Didion said, we tell ourselves stories in order to live.
Tech Policy Podcast #369: AI and State Capacity
Startlingly New, City Journal (Mar. 2024)