Three Readings About Artificial Intelligence
If you're into that sort of thing.
The hard-working staff here at Drezner’s World likes to stay on top of the zeitgeist, to be plugged into the cutting edge. We like to have our finger on the pulse, our ear to the ground, and our eyes on the imminent horizon. We like to be plugged in to the Important Things That Are Happening Now, is what I am saying here. And so the staff has begun to notice that maybe, just maybe, this thing called artificial intelligence — “AI” for those in the know — might be kind of a big deal.1
That is why you subscribe to Drezner’s World — it’s this kind of cutting-edge information, amirite?!
Seriously, there are three things that I have read recently about AI that are worth perusing. Let’s order them from essays that have important near-term implications to those that should have more long-lasting effects.
First, there is the short-term question of whether the current, massive surge of investment in AI is creating a bubble. As Brad DeLong recently observed, “If not for sky-high optimism about AI, the economy would now be in recession with high probability.” Furthermore, an increasing amount of AI investment is fueled by debt, raising some valid questions about the effect on credit markets more generally — not to mention what would happen if the investments in AI do not yield the anticipated rate of return.
This leads to the first interesting read: the FT’s Gillian Tett suggesting how the AI investment bonanza could be analogous to what Richard Feynman labeled “cargo cult science”:
Calculations by the FT suggest that ten lossmaking AI start-ups — such as OpenAI, Anthropic and Elon Musk’s xAI — now command a collective valuation of close to $1tn, while venture capital has poured $161bn into AI overall this year.
More startling still, few of these entities expect to turn a profit anytime soon — and these valuations are being boosted by variants of cross-cutting vendor financing, like recent deals between OpenAI, Nvidia, Oracle, AMD and Broadcom.
The net result is a pattern of circular flows that echo some of the hairball of interconnections that emerged between banks and insurance companies via credit derivatives before 2008. And those, remember, resulted in unseen concentrations of risk — and subsequent contagion when the bubble burst….
Every Big Tech executive is investing in massive data centres, even though Bain reckons some $2tn of revenue will be needed to fund this by 2030. And charismatic figures like Sam Altman, CEO of OpenAI, keep promising fresh magic. Or as Stephan Eberle, a software engineer, laments: “Watching the industry’s behaviour around AI, I can’t shake this feeling that we’re all building bamboo aeroplanes [like cargo cults] and expecting them to fly….
Anyone engaged in this AI frenzy needs to watch out for that hairball of interconnections, hedge their bets — and read up on those Melanesian cargo cults.
It should be noted that Tett does not think AI is devoid of value. Indeed, in that sense it is like just about every other asset bubble in financial history. All of them began in reaction to a genuine shift in the value of an underlying asset; it is when the reaction becomes overreaction that the trouble starts.
The underlying value of AI is important to remember, however — especially if the bubble does pop. Strategists widely view it as the most significant general purpose technology in decades. This means whichever national economy is best positioned to develop and exploit it might be poised for hegemonic leadership.
This was at the heart of Jeffrey Ding’s recent Princeton University Press book, Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition. Ding posited that dominance of a leading sector like AI matters less than whether all sectors of an economy are poised to adapt to and exploit the new technology.
The latest issue of Asia Policy has a roundtable on Ding’s book, and the hard-working staff here at Drezner’s World contributed one response. An excerpt:
A crucial element of Ding’s causal story is the long time lag between the development of a GPT and its suffusion throughout the rest of the economy: “the GPT mechanism involves a protracted gestation period between a GPT’s emergence and resulting productivity boosts.” Such a lag is not terrifically surprising; economists such as Zvi Griliches have previously argued that it can take a generation for a new technology to be optimized in any particular sector. Robert Solow famously said, “You can see the computer age everywhere but in the productivity statistics,” just before the boom of the 1990s when the effects of computerization finally became visible in the productivity statistics.
The problem is not whether such a lag exists—Ding is persuasive in demonstrating its plausibility. The problem is that this poses a considerable quandary for policymakers and analysts alike. If Ding’s thesis is correct, then power transitions are set in motion decades before they actually occur. Such a long-term perspective is beyond the political incentives of even the most far-sighted policymakers. Much of the grand strategy literature is devoted to the maintenance of hegemony and the forestalling of power transitions. If Ding’s argument is correct, even the best grand strategy is for naught if another country moves down the technology diffusion curve more quickly.
You can read the other responses — from Xinyue Wei and Etel Solingen, David C. Kang, and Victor Seow — as well as Ding’s response by clicking here. See if you can find the Mad Men reference.
Finally, it is worth considering just how transformative AI could be — not just in terms of its economic effects, but how it might rejigger the ways in which society itself is organized. We have already witnessed U.S. leaders lean on crude AI tools to help inform their decision-making. If this continues, what does it mean for politics more generally?
Fortunately, Friend of the Newsletter Henry Farrell has published an Annual Review of Political Science essay on this very topic, entitled, “AI as Governance,” that should be accessible to everyone. In it, Farrell lays down a challenge to the political science discipline that should be widely shared:2
It is possible that AI may not have the large-scale economic, organizational, and political consequences that many expect. Its benefits might be limited or outweighed by other desiderata, it may turn out to have irredeemable flaws, or it may provoke insuperable political opposition. However, new social technologies only very rarely have an immediate and dramatic impact, especially when their use requires large-scale organizational change.
It is wise at least to consider the possibility that AI may also have large-scale consequences, even if it takes years or decades to see them….
Understanding how AI can act as a technology of governance, affecting how markets, bureaucracy, and democracies coordinate action, will generate multiple important research agendas. It should also make political scientists pay proper attention to the blind spots in their own collective understanding of politics. We do not, as a discipline, regularly think hard about how technology affects the deep structures of the political economy, government in democratic and autocratic regimes, and democratic representation and feedback. That has to change.
Amen.
Enjoy your readings!
1. Or maybe this is because, over at our Space the Nation podcast, Ana Marie Cox and I are working our way through the Tron franchise.
2. Especially among doctoral candidates interested in possible dissertation topics.
