Gone Conferencing
A few reading suggestions while I attend the 2024 American Political Science Association annual meeting
The hard-working staff here at Drezner’s World is in Philadelphia attending the annual meeting of the American Political Science Association (APSA). This might well be the first normal APSA since 2019. The pandemic mucked things up for a few years, and last year’s APSA was marred by striking hotel employees.
Anyway, while I am busy walking, talking, and drinking with other political scientists, here are three essays that are worth reading as universities and political science departments kick into gear for the fall semester.
First, there is Justin Grimmer’s Politico essay, “Don’t Trust the Election Forecasts.” Grimmer, a Stanford political scientist, argues that the forecasting models at places like FiveThirtyEight and Silver Bulletin are wildly overvalued in the marketplace of ideas:
I’m a political scientist who develops and applies machine learning methods, like forecasts, to political problems. The truth is we don’t have nearly enough data to know whether these models are any good at making presidential prognostications. And the data we do have suggests these models may have real-world negative consequences in terms of driving down turnout.
Statistical models that aggregate polling data and use it to estimate the probability of each candidate winning an election have become extremely popular in recent years. Proponents claim they provide an unbiased projection of what will happen in November and serve as antidotes to the ad hoc predictions of talking-head political pundits. And of course, we all want to know who is going to win.
But the reality is there’s far less precision and far more punditry than forecasters admit….
Even under best-case scenarios, determining whether one forecast is better calibrated than another can take 28 to 2,588 years. Focusing on accuracy — whether the candidate the model predicted to win actually wins — doesn’t lower the needed time either. Even focusing on state-level results doesn’t help much, because the results are highly correlated. Again, under best-case settings, determining whether one model is better than another at the state level can take at least 56 years — and in some cases would take more than 4,000 years’ worth of elections.
The reason it takes so long to evaluate forecasts of presidential elections is obvious: There is only one presidential election every four years. In fact, we are now having only our 60th presidential election in U.S. history.
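Grimmer’s evaluation-time argument can be illustrated with a toy simulation. This is my own sketch, not Grimmer’s actual analysis: the forecast probabilities (0.7 vs. 0.55), the number of trials, and the use of the Brier score (a standard squared-error metric for probability forecasts) are all illustrative assumptions. The point is simply that with one election every four years, even a clearly better-calibrated model takes a long time to reliably beat a worse one:

```python
import random

random.seed(42)

def simulate(n_elections, trials=3000, true_p=0.7, p_a=0.7, p_b=0.55):
    """Fraction of simulated histories in which forecaster A (who forecasts
    the true win probability, p_a = true_p) ends up with a lower cumulative
    Brier score than the miscalibrated forecaster B after n_elections."""
    wins = 0
    for _ in range(trials):
        score_a = score_b = 0.0
        for _ in range(n_elections):
            outcome = 1 if random.random() < true_p else 0
            score_a += (p_a - outcome) ** 2  # squared-error (Brier) penalty
            score_b += (p_b - outcome) ** 2
        if score_a < score_b:
            wins += 1
    return wins / trials

# Even after dozens of elections -- more than two centuries of presidential
# contests -- the better-calibrated model does not always come out ahead:
for n in (4, 16, 64):
    print(n, simulate(n))
```

With these (invented) parameters, the correctly calibrated forecaster beats the miscalibrated one only about two-thirds of the time after 16 elections, i.e., 64 years of presidential contests, which is consistent with the decades-to-millennia ranges Grimmer reports.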
It would be safe to say that Nate Silver disagrees with Grimmer’s take. I find myself mostly but not completely agreeing with Grimmer. He is correct to point out that the number of election observations is just too small to properly assess the accuracy of the various forecasting models. Furthermore, Grimmer’s implicit point — that many of these models cause their audience to treat genuine uncertainty as quantifiable risk — is well taken. So as a cautionary warning, Grimmer’s essay resonates.
That said, I am unpersuaded by Grimmer’s claim that forecasting models depress turnout. Sure, that’s what his experiment showed, but in the real world most voters are: a) rationally ignorant; and therefore b) completely unaware of the existence of prediction models. So my hunch would be that the real-world effects are minuscule. Plus, Grimmer does not address the question of whether the substitute for forecasting models — traditional mainstream media punditry — is any better.1
The second essay worth reading is Amy Zegart’s Foreign Affairs essay, “The Crumbling Foundations of American Strength.” Zegart argues that knowledge is the most important source of national power — and the United States is squandering its lead in this area:
Today, countries increasingly derive power from intangible resources—the knowledge and technologies such as AI that are super-charging economic growth, scientific discovery, and military potential. These assets are difficult for governments to control once they are “in the wild” because of their intangible nature and the ease with which they spread across sectors and countries. U.S. officials, for example, cannot insist that an adversary return an algorithm to the United States the way the George W. Bush administration demanded the return of a U.S. spy plane that crash-landed on Hainan Island after a Chinese pilot collided with it in 2001. Nor can they ask a Chinese bioengineer to give back the knowledge gained from postdoctoral research in the United States. Knowledge is the ultimate portable weapon.
The fact that these resources typically originate in the private sector and academia makes the job of government even more challenging….
Gauging a nation’s long-term power prospects also requires measuring the health of its research universities. Companies play an essential role in technological innovation, but the innovation supply chain really begins earlier, in campus labs and classrooms. Whereas companies must concentrate their resources on developing technologies with near-term commercial prospects, research universities do not face the same financial or temporal demands. Basic research, the lifeblood of universities, examines questions on the frontiers of knowledge that may take generations to answer and may never have any commercial application. But without it, many commercial breakthroughs would not have been possible, including radar, GPS, and the Internet….
The innovation advantage that U.S. universities have over their foreign counterparts is eroding, too. A decade ago, the United States produced by far the most highly cited scientific papers in the world. Today, China does. In 2022, for the first time, China’s contributions surpassed those of the United States in the closely watched Nature Index, which tracks 82 premier science journals….
Funding trends are also headed in the wrong direction. Only the U.S. government can make the large, long-term, risky investments necessary for the basic research that universities conduct. Yet overall federal research funding as a share of GDP has declined since its peak of 1.9 percent in 1964 to just 0.7 percent in 2020. (By comparison, China spent 1.3 percent of GDP on research in 2017.) The 2022 CHIPS and Science Act was supposed to reverse this downward slide by investing billions of dollars in science and engineering research, but these provisions were later scrapped in budget negotiations.
Basic research has been particularly hard hit…. If current trends continue, China’s basic research spending will overtake U.S. spending within ten years.
Zegart proffers a few suggestions for how the federal government can bolster American knowledge power. I would only add one overarching point to her practical suggestions: politicians need to stop demonizing universities. Universities do a better job of hosting and facilitating basic research when they are helmed by presidents who are willing to back up their faculty. Needless to say, recent trends have made these demanding jobs much less attractive than they used to be. As a category, universities have suffered a dramatic drop in public trust. This seems like a surefire way to alienate foreign talent and discourage domestic students from pursuing basic research.
Finally — and it is extremely painful for a Williams College alum to write the following words — Wesleyan University President Michael Roth wrote a fantastic New York Times op-ed, “I’m a College President, and I Hope My Campus Is Even More Political This Year.” Roth explains what he means by this, and in doing so reminds all of us that the modern American university is not and should not be only about turning young people into better white-collar workers:
Last year was a tough one on college campuses, so over the summer a lot of people asked me if I was hoping things would be less political this fall. Actually, I’m hoping they will be more political….
These days many Americans seem to think that education should be focused entirely on work force development. They define the “good of the individual” as making a living, not working with others to figure out how to live a good life. It’s understandable. In these days of economic disparities, social polarization and hyperpartisanship, it is certainly challenging to talk with one’s neighbors about what we want from our lives in common. But that is the core of political discussion.
From JD Vance’s call to support large families to Tim Walz’s “Mind your own damned business,” there are many visions of how best to live in community. Protests are part of the competition, but only its glossy edge. Demonstrations shouldn’t just entice you to come up with rhyming chants; they should push you to inquire about how different groups of people think about complex issues. And protests should lead to more discussion, not shut it down in favor of ever louder chanting….
These discussions, like all authentic learning, depend on freedom of inquiry and freedom of expression. They also involve deep listening — thinking for ourselves in the company of others. The classical liberal approach to freedom of expression underscores that discussions are valuable only when people are able to disagree, listen to opposing views, change their minds.
To strengthen our democracy and the educational institutions that depend on it, we must learn to practice freedom better. This fall we can all learn to be better students and better citizens by collaborating with others, being open to experimentation and calling for inclusion rather than segregation — and participating in the electoral process. As for those loud voices in the political sphere who are afraid of these experiments, who want to retreat to silos of like-mindedness, we can set an example of how to learn from people whose views are unlike our own.
What a great message to everyone at the start of what will no doubt be an eventful semester.
1. It’s not — it’s way worse.
The forecast critique is correct in my view, but I think the problems with election forecasts run deeper than this. Probabilities are calculated on sets, and sets have certain rules (ZF set theory) that make them coherent. It’s not clear to me that “presidential elections” can be meaningfully represented with sets coherent enough to do analysis on, let alone machine learning.