The Thought Leaders Come for AI
New podcast episode with Adam Thierer; some of my past work on AI and existential risk.
This week I had the flu. The flu causes Americans to miss something like 75 million days of work a year. Fortunately my recent case was a mild one, and I was down only about a day all told. Still, a day in bed is a day lost. (A lost day, in this instance, that almost derailed the writing of this piece!) Plus, being that sick . . . sucks. Who likes having a headache, nausea, and fatigue?
What if AI could detect the flu early—by video, say, during a telehealth appointment—thereby reducing its spread? For that matter, what if AI could nearly eliminate the flu altogether? As the technology masters protein analysis, it could help researchers develop dramatically more effective flu vaccines. Sign me up.
And of course the flu is one of the lesser maladies at which AI is being aimed. Adam Thierer, a technology and innovation expert at the R Street Institute, recently joined me on the Tech Policy Podcast. Before we waded into the maelstrom of new demands for AI regulation—the main topic of our conversation—I asked Adam to sum up AI’s near-term potential. What benefits are we likely to see in the next decade or so? Sure, Adam responded, AI is going to drive economic growth. But then he said:
What’s really important is the way that AI has the potential to improve the individual health and happiness of every single human being. That’s what really matters most about all technological innovation. And probably most important there is just the quality of our lives, and our health in particular. AI and [machine learning] tools are already helping with early heart-attack detection, stroke detection, with cancer treatments. It’s addressing things like super-bugs, and how to monitor for sepsis and mental addictions. These are things that are happening in real time.
Check out the full episode.
Adam thinks that firms working on AI should be free to engage in what he calls “permissionless innovation.” He believes in the value of trial and error, in the power of emergence, in the magic of human creativity. Mistakes will be made, to be sure. But Adam trusts in the common law, and in the large body of legal rules that already exists, to discourage negligence and punish misbehavior.
Adam opposes what he calls a “Mother may I?” approach to regulation. Many experts, policymakers, and even private firms want to impose a licensing and surveillance regime on (in Adam’s words) “the entire AI production stack.” When it comes to AI, in other words, the government would restrict and monitor retail-level applications, the models themselves (e.g., large language models), their training data, data centers, and more. As Adam relayed on the show, one commentator has even proposed creating an “AI island”—a “single, high[ly] secure facility” that conducts all of the world’s “frontier [AI] research.”
It’s the same old story. A bunch of self-appointed moral guardians want technological progress to occur according to a plan. A plan that everyone will follow. A plan, needless to say, that they—the activists, the intellectuals, and the regulators; the so-called thought leaders—will draw up and impose. There must be order. There must be direction. There must be a neat little plan! And the Very Smart People are just the folks to give us one.
Please, Very Smart People, just don’t.
Adam and I kept coming back to Oxford University, that nest of AI doomers. One such doomer is Toby Ord, a moral philosopher and senior research fellow at Oxford’s Future of Humanity Institute. There is a one-in-ten chance, Ord proclaims, that in the next hundred years AI will kill us all. The claim is nonsense. It’s incoherent—the kind of proposition that real scientists deride as “not even wrong.” I get into this subject in a review of Ord’s 2020 book The Precipice: Existential Risk and the Future of Humanity.
That book could have been titled Very Smart Person Crafts Neat Little Plan. From my review:
Ord … formulat[es] a “grand strategy for humanity.” It has three steps. The third, to “achieve our potential,” is the only desirable, obtainable, or coherent one. We will check it off (or not) regardless of what Ord cares to say about it. Scientists, researchers, and entrepreneurs are not all following some philosopher’s program.
At any rate, that third step must wait, Ord insists. The two others must precede it. In the first step, we will obtain “existential security.” We will “reach a place of safety—a place where existential risk is low and stays low.” We will do this by giving more money and power to government agencies, such as the World Health Organization; by creating new government mandates and entitlements, such as legislative “representation” for future generations; and by creating new international governing institutions, such as a court that considers the safeness of scientific experiments.
Then, in the second step, we will undertake what Ord calls “the Long Reflection.” We will think and talk our way to “a final answer to the question of which is the best kind of future for humanity.” Moral philosophy will “play a central role” in this process. “The conversation should be courteous and respectful to all perspectives,” Ord writes; but it also must be “robust,” because it is to “deliver a verdict that stands the test of eternity.”
The first step can be described as the precautionary principle run amok. Scaremongers excel at political debate. Cries for more safety lend themselves to slogans; warnings about the dangers of too much safety do not. And harms that arise from action (say, deaths from a novel drug the FDA approves) are usually more visible than harms that arise from inaction (deaths from the absence of a drug the FDA delays). It is in the nature of government to say no.
. . .
“Precautionary principle” is just a polite way to say “sclerosis by design.” Rent-seekers and entrenched interests benefit. It can’t be assumed that anyone else does. Letting people try new things creates hazards, but so does letting the government get in people’s way. Moving is risky. Standing still is risky. There is no risk-free default. As Michael Crichton observed, the precautionary principle, properly applied, forbids the precautionary principle. (And therefore, added Crichton, the principle “cannot be spoken of in terms that are too harsh.”)
No one, not even a government of Toby Ords, can deliver “existential security.” We cannot know what we would need to know. In fact, the great threat might lie in ennobling the really smart people who assume otherwise. The specialists who offer an answer when “I don’t know” is the only plausible response. Perhaps the surest way to get us all killed is to ask a panel of experts to save us. Like the servant fleeing to Samarra, they’ll blindly rush to an appointment with Death.
If the flaw in the first step is that experts aren’t wizards, the flaw in the second is that professional moral philosophers aren’t experts. Professors of moral theory specialize in arguing about moral theory with other professors of moral theory. Their main talent is lobbing meaningless abstractions at one another. What qualifies these insular theologians to guide the world is unclear, although their own conviction that they can do so is remarkably persistent. Ord’s “Long Reflection” taps into an abiding conceit that the wise philosophers can form the virtuous plan that produces the beautiful society. Not even philosophy departments run like that.
Did you see that one-line statement on AI risk a couple weeks ago? The one signed by all those Very Smart People? (Ord is in there, as you could have guessed.) It reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The comparison to nuclear weapons catches the eye. Adam and I discussed it. In a recent interview, Astro Teller, CEO of X (Google’s “moonshot factory”), offers an astute observation about it:
The nuclear bomb makes a good headline. The mushroom cloud. We all have emotions attached to that. But the emotions that we’ve attached to our fears, our frustrations—understandably—about nuclear bombs translated, in the ’60s and the ’70s, into such a negative narrative about nuclear energy that we as a society completely missed the boat. The disaster which is climate change right now would not be happening if we as a society had not let our fears about the first thing translate into an inability to use the upside of nuclear power to save us from what is now arguably the biggest problem in the world.
The Very Smart People don’t know how things will play out. We face radical uncertainty no matter what we do. “There is no ‘ongoing stasis’ option on the table,” Tyler Cowen writes. “So we should take the plunge” with AI. Or as Astro puts it:
If humanity can’t survive the discovery of new knowledge . . . I don’t believe that. I believe in humanity. I think it could be bumpy at times. But I believe in humanity and I believe that we can survive discovering new knowledge.
Sign me up.
Tech Policy Podcast #346: Who’s Afraid of Artificial Intelligence? (June 2023). Guest: Adam Thierer.
World to End; Experts Hardest Hit, Forbes.com (May 2020). My review of Mr. Ord’s neat little plan.
Bonus: Doubting The AI Mystics, Forbes.com (Dec. 2019). My review of Melanie Mitchell’s fabulous book Artificial Intelligence: A Guide for Thinking Humans.