How ChatGPT Has Not Changed My Plans for This Fall
I feel Corey Robin's pain. I just don't share it.
Like many teachers and professors, the hard-working staff here at Drezner’s World got back to teaching this week. The biggest difference between the start of this year’s fall semester and the fall of 2022 has been the availability of large language models — the current face of artificial intelligence. The diffusion of ChatGPT kicked up a hornet’s nest of consternation among high school teachers and college professors, especially as stories proliferated about AI-generated output passing classes.
This has forced some professors to rethink their course assignments. Back in late July, Brooklyn College political science professor Corey Robin expressed considerable disquiet at the prospect of ChatGPT excelling at his written class assignments. While the AI initially produced some variation of competent bullshit, Robin noted how his daughter was able to get ChatGPT to improve with further refinement:
The essays got better, more specific, more pointed. Each of them now did what a good essay should do: they answered the question. It became clear that so long as a student has a minimal sense of what a paper is supposed to look like or do, or at least knows what a bad paper (by my lights) looks like, they could easily use ChatGPT to come up with excellent answers to even the most out-of-the-way questions.
Where I had initially thought that such a student would have to have mastered quite a few skills in order to do this—that is, would be able to write such a paper on their own—it’s clear to me now that that’s not necessarily the case. Students just have to be able to spot the difference between good work and not good work, which even the most struggling students can already do. It’s always been amazing to me that students who have a difficult time writing a thesis statement can spot it a mile away in another student’s essay. Likewise, a well structured paragraph or paper. That doesn’t mean they can do it themselves, though.
At the other end of the spectrum, University of Pennsylvania professor Jonathan Zimmerman argued last week in the Washington Post that it would be up to his students whether to rely on the likes of ChatGPT, though he recommends against it:
Here’s my AI policy: I don’t have one.
Here’s what I’m going to tell my students instead.
Of course, you’ll have to notify me if you draw upon AI to write a paper, just as you are required to cite any other source. But whether to use AI or not is up to you.
Though, I hope you won’t….
I want you to decide what is real. One of my mentors, Neil Postman, a professor and social critic, famously declared that education should equip us with an effective “crap detector.” And Postman wrote that years before we all got access to the internet, which has made BS detection even more difficult — and even more crucial….
Some courses really do ask you to think. And if you ask an AI bot to do it instead, you are cheating yourself. You are missing out on the chance to decide what kind of life is worth living and how you are going to live it.
Some of my colleagues are making students complete writing assignments in class to ensure that the work they submit is really theirs. I won’t do that, because I think it’s patronizing. You are grown-ups. You can vote in elections, and you can die in wars. This AI thing is your call, and it’s your life. I can’t live it for you.
But remember: The bots can’t, either.
Full disclosure: I lean far closer to the Zimmerman view than the Robin view, for a variety of reasons. The most obvious is that I am teaching graduate students, all of whom are legal adults. They are paying a fair amount of money to attend my courses. If they want to skate through them without learning anything, that is a fantastic waste of money, but it’s their choice.
Another reason is that I still believe there is an inverse correlation between students’ comprehension of the course material and their tendency to rely on AI as a writing tool. And those who comprehend the subject less well are also likelier to miss the glaring errors that large language models commonly produce. This is a rare case in which I agree with Freddie DeBoer:
All of this would be much lower stakes and less aggravating if people had the slightest impulse toward restraint and perspective. But our media continues its white-knuckled dedication to speaking about AI in only the most absurdly effusive terms, terms that threaten to exceed the power of language. Here’s Ian Bremmer and Mustafa Suleyman in Foreign Policy: “Its arrival marks a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies.” They compared some interesting machine learning systems to the Big Bang, the literal creation of the universe. Where does this shit end? And what exactly can these systems actually do, right now, in the world of atoms? Besides stranding a bunch of kids for hours with terrible bus routes, I mean. What is their revolutionary reality, rather than their theoretical revolutionary potential?
….what if this software just sucks? What if we’re all so desperate to move to the next era of human history that we talked ourselves into the idea that not-very-impressive predictive text and image compilers are The Future?
Maybe these AIs will continue to improve and I will have to adjust my assumptions about their utility. Right now, however, I still cannot shake the “garbage in, garbage out” theory of artificial intelligence outputs. While some believe that AIs can move down the learning curve, I wonder whether the very proliferation of flawed AI output will warp future AI output even further.
So for the 2023-24 academic year, I will still be proceeding with business as usual. We’ll see where things stand a year from now.