Yesterday I finished teaching my “End of the World and What Comes After” class, and it was a bittersweet feeling. On the one hand, as any professor knows, an end to teaching is the beginning of more researching, reading and writing. Woo-hoo!! More flexibility with my time — yeah, baby!!!
On the other hand, I really enjoyed teaching this class this year. I benefited from having a wonderful cross-section of Fletcher School students enroll. One of the unexpected pleasures was having students focused on “hard security” issues in the same room as students concentrating on environmental change and students specializing in artificial intelligence. It was a great mix of perspectives that made the student questions and feedback all the more interesting.
I like to close the course out with lectures about preparing for crises and preventing catastrophic outcomes — if for no other reason than ending this course with, “and that’s why climate change seems reverse-engineered to exploit all of humanity’s cognitive, societal, organizational, and political weaknesses!!” is too much of a bummer. And as per usual, I started by explaining how thinking about and investing in measures to prevent existential risks are really hard, because:
Human beings are not hard-wired to think about emergent threats as opposed to urgent crises;
Disaster prevention — as opposed to preparation — is the primary purpose of very few organizations;
The political science suggests that while there are rewards for crisis response, credit-claiming is almost impossible for effective crisis prevention — because voters rarely if ever engage in counterfactual reasoning.
This time around, however, I realized that I had to spend more time deconstructing some of the absurd theories that have attempted to address questions of existential risk: ideas ranging from rationalism to effective accelerationism to longtermism.
This was frustrating. Each of these approaches starts with some not-entirely-unreasonable assumptions (“Prediction markets have some utility!” “Technological innovation is worth incentivizing!” “We should think about future generations when taking actions in the present!”) and then pushes them to such logical extremes that you wind up writing insane sentences like, “Any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.” And it’s tricky to reject these philosophies while taking care not to reject the plausible initial assumptions that some of these worldviews posit.
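To make that “logical extremes” move concrete, here is a toy sketch of the expected-value arithmetic these worldviews lean on. Every number in it is my own invention for illustration, not anything drawn from the texts quoted here; the point is just that once a tiny probability gets multiplied by an astronomically large future population, the product dwarfs any present-day harm placed on the other side of the ledger.

```python
# Toy illustration of the expected-value arithmetic behind longtermist and
# accelerationist extrapolations. All numbers are invented for illustration;
# they are not drawn from any of the authors discussed in this post.

future_people = 1e16          # assumed number of potential future lives
p_benefit = 1e-9              # assumed chance that a speculative bet helps all of them
present_lives_at_stake = 1e6  # lives a concrete, present-day intervention would save

expected_future_lives = future_people * p_benefit  # 1e16 * 1e-9 = 1e7

print(f"Expected future lives from the speculative bet: {expected_future_lives:,.0f}")
print(f"Lives saved by the concrete intervention:       {present_lives_at_stake:,.0f}")
print("Naive expected value prefers the speculative bet:",
      expected_future_lives > present_lives_at_stake)
```

The arithmetic itself is trivial; the problem is treating it as the whole of moral reasoning, which is how you end up at “any deceleration is a form of murder.”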
Why have these strange philosophical approaches mushroomed in recent years? Let me postulate that, for all the reasons listed above, the interest-group environment for thinking about existential risks is pretty thin. As with questions of national security, it takes a lot for interest groups to form around existential risks that, for many, seem like pure public-goods issues about dangers that are decades away.
In a thin interest-group environment, those who are willing to spend money can have an outsized influence on public discourse and public policy. In other words, the lack of broader societal interests magnifies the importance of those actors willing to play in this sandbox.
So who are those remaining players? Really wealthy dudes who read a lot of bad science fiction, apparently! Because they can absorb negative shocks in the near future without much trouble, plutocrats can afford to think about far-future risks.1 At the same time, Henry Farrell has written at length about the bad sci-fi origins of both effective accelerationism and longtermism. In a weird way, this makes sense: low-probability, high-impact risks like super-intelligent AI have no real-world parallels, so it is unsurprising that those who do think about them draw on the “synthetic experiences” of popular culture, particularly science fiction.
The institutes that these billionaires have funded on this topic have started to run into some administrative headwinds, however. Earlier this month, for example, Nick Bostrom had to shutter his Future of Humanity Institute at Oxford University. Why? One possibility is that folks recognized the flaws in “longtermism.” According to the Guardian’s Nick Robins-Early:
Bostrom is a proponent of the related long-termism movement, which held that humanity should concern itself mostly with long-term existential threats to its existence such as AI and space travel. Critics of long-termism tend to argue that the movement applies an extreme calculus to the world that disregards tangible current problems, such as climate change and poverty, and veers into authoritarian ideas. In one paper, Bostrom proposed the concept of a universally worn “freedom tag” that would constantly surveil individuals using AI and relate any suspicious activity to a police force that could arrest them for threatening humanity.
Bostrom and long-termism gained numerous powerful supporters over the years, including Musk and other tech billionaires. Bostrom’s Institute received £13.3m in 2018 from the Open Philanthropy Project, a non-profit financially backed by Facebook co-founder Dustin Moskovitz.
Andrew Anthony had a follow-up in the Guardian suggesting that Bostrom had some other issues:
Both Bostrom and the institute, which brought together philosophers, computer scientists, mathematicians and economists, have been subject to a number of controversies in recent years. Fifteen months ago Bostrom was forced to issue an apology for comments he’d made in a group email back in 1996, when he was a 23-year-old postgraduate student at the London School of Economics. In the retrieved message Bostrom used the N-word and argued that white people were more intelligent than black people.
The apology did little to placate Bostrom’s critics, not least because he conspicuously failed to withdraw his central contention regarding race and intelligence, and seemed to make a partial defence of eugenics….
It was Émile Torres, a former adherent of longtermism who has become its most outspoken critic, who unearthed the 1996 email. Torres says that it’s their understanding that it “was the last straw for the Oxford philosophy department”.
Torres has come to believe that the work of the FHI and its offshoots amounts to what they call a “noxious ideology” and “eugenics on steroids”. They refuse to see Bostrom’s 1996 comments as poorly worded juvenilia, but indicative of a brutal utilitarian view of humanity. Torres notes that six years after the email thread, Bostrom wrote a paper on existential risk that helped launch the longtermist movement, in which he discusses “dysgenic pressures” – dysgenic is the opposite of eugenic.
So that all sounds horrible.
As someone who teaches about existential risk, I have two takeaways on ideas like effective accelerationism and longtermism. First, as fun as it might be to knock down these kinds of intellectual strawmen, there needs to be an alternative set of ideas that offers a better way of thinking about these issues. Obviously there are generic approaches like pragmatism that can be of some use, or meta-factors that could be stressed within organizations. But bad ideas will not be rejected without a better set of ideas on offer.
This leads to my second takeaway: philanthropic foundations cannot leave this playing field to the Elon Musks, Sam Altmans, and Marc Andreessens of the world. They need to incentivize the intellectual firepower necessary to think better about existential risk. Absent more funding, the logic of the Ideas Industry will kick in. Scholars and wonks interested in existential risk will gravitate to those plutocrats and resist speaking truth to money. And that leads to even more bad thinking that will have to be critiqued in the future.
I suspect a lot of these plutocrats legitimately believe that, through transhumanism or some other form of bioengineering, they will prolong their lives well past what conventional actuarial tables would predict.
Bostrom's comments about 'dysgenics' included the following passage: "Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species."
Seems like a lot of words to say that you think the reason you can't get laid is that women don't appreciate your intellectual genius...
I applaud your focus on existential risk. There is far too little of such discussion on Substack, and more generally across the culture. Substack so often seems like just a giant trivia machine.
You write, "They need to incentivize the intellectual firepower necessary to think better about existential risk."
We need to exercise some caution with regard to "intellectual firepower" experts. Here's why.
People who do intellectual work for a living aren't really in a position to publicly explore ideas too far outside of what the group consensus considers normal, reasonable and realistic. If they do so, they risk being branded as crackpots, which can be fatal to their reputations and careers, and thus their mortgage payments, etc. This becomes an issue when reasonable and realistic ideas have consistently failed to solve some problem.
Experts seem typically unable or unwilling to prioritize topics in order of importance. As evidence, how many academics have focused their careers on nuclear weapons, the one and only man-made threat that can destroy the modern world in minutes without warning? Almost all academics are focused on topics of far less importance.
For 75 years we have utterly failed to meet the challenge presented by nuclear weapons. And yet, even those who are nuclear weapons experts and activists continue to endlessly repeat all the same things that have never worked. The experts write their expert articles about details. The activists shout their slogans and wave their banners. And none of it ever works. And nobody ever says, "What we're doing shows no sign of ever working."
Here's why... To say "nothing we're doing works" undermines what really matters to them: their careers. And because of this intellectual bankruptcy, I've never seen any "expert" say the following.
If nuclear weapons were to magically vanish, the next day violent men would turn their attention to other means of projecting power via methods of mass destruction. And before you know it, we'd be right back in the same old mess.
So the problem isn't actually nuclear weapons, but those who would use them, violent men.
But if one makes one's living as an expert, one can't say that, because the only way to end violent men is to end the entire male gender. There is unlikely to be a shorter path to career destruction than saying things like that. Only those who have nothing to lose are in a position to explore such options.
https://www.tannytalk.com/s/peace
What experts are actually expert at is establishing themselves in society as experts. Getting the degrees, getting the jobs, building their careers, wearing the suit and tie, writing the books, speaking at conferences etc. That is, experts are expert at the expert business.
If the experts really were experts on existential risk, we wouldn't be facing so many existential risks.