Why Science Fiction Informs How People Think About Science Facts
Or, why everyone sees "The Terminator" when AI is discussed
The hard-working staff here at Drezner’s World has taken note of the uptick in concern about the technological race for developing artificial general intelligence (AGI). In this context, “taken note” means “I have bookmarked a shit-ton of stuff to read but am really busy with other commitments so I don’t want to weigh in just yet.”
But cryptography expert Matthew Green asked a somewhat more concrete question about the AI safety debate that seems more manageable:
The ‘AI safety’ community seems to be split between people who have (potentially) addressable concerns about biased content, and people who saw Terminator too many times. Is this really the limit of our vision on this topic?
That’s a good question… I wonder who could attempt an answer? [Hey, wait a minute — you’ve written about technology and international relations and you’ve written and taught about the use of pop culture metaphors to explain real-world problems! You’re a great person to answer Green’s question!—ed. I know!] Now that I think about it, I might be qualified to answer this question.
First of all, it seems worth pointing out that the “AI safety community” has expanded an awful lot as of late. There was the March 2023 open letter signed by some big names in Silicon Valley urging a six-month pause on AI development. That Elon Musk signed it might be off-putting to some, but it’s worth noting that this is an area where Musk’s risk-aversion has been pretty consistent: he first signed a letter of this kind back in 2015.
Furthermore, other big players in the AI community are sounding similar warnings. Ian Hogarth, the co-author of an annual “State of AI” report, wrote a long piece in the Financial Times warning that developers of God-like AI are focusing way too much on investments in AI capabilities without thinking about ensuring AI stays in alignment with human ethics. According to Hogarth, leading AI firms “are running towards a finish line without an understanding of what lies on the other side.” Geoffrey Hinton left Google so he could be more forthcoming in his warnings about the dangers of AGI.
What is interesting is how many of these stories rely in part on fictional analogies to get across the concerns voiced by Hinton, Hogarth, et al. The Wired story about Hinton opens with a clip of Snoop Dogg saying, “I heard the old dude that created AI saying, ‘This is not safe ’cause the AIs got their own mind and these motherfuckers gonna start doing their own shit.’ And I’m like, ‘Is we in a fucking movie right now or what?’”
In his FT essay, Hogarth relies on a variety of analogies and metaphors to get across his point. Some of them are comparisons to other real-world technological races — suggesting that society treats the development of AGI systems the way it treats “gain of function” research in biotechnology. But he also references fictional tropes, including Jurassic Park and the Shoggoth meme.
In other words, both experts and citizens use the availability heuristic to draw on fictional narratives to try to get a grasp on a novel problem. And the key thing to realize is that when we are talking about end-of-the-world types of events, there are not a lot of examples to draw from in actual human history.1 This is particularly true when the existential threat comes from new technology rather than the old standbys of plague and pestilence. These are sufficiently rare events that humans are just as likely to look for fictional examples as real-world ones. Research suggests that people do not process fiction and nonfiction in fundamentally different ways.
When the conversation turns to AGI, the real-world comparisons to draw on are pretty sparse; the “gain-of-function” analogy might be the best I have seen. So it is not surprising that even AI experts are reaching for vivid fictional examples of apocalyptic AI to issue their warnings. As Charli Carpenter and Kevin Young have observed, the use of fictional tropes frames the conversation and primes the audience to interpret events in a particular way.
So my answer to Green is that when thinking about novel ways of the world ending, there simply is not enough real-world history to draw on in the analysis, and there is an overwhelming temptation to find ways to make the problem more understandable to experts and citizens alike. In these circumstances, the surprise is not that folks are falling back on Terminator analogies; the surprise is that it does not happen more frequently.
1. Pre-human history has better and more vivid examples.