A lot has been said about ChatGPT, and I detect a few major emotional themes. There’s amusement: look at what it can do! Personally, I was impressed when I queried OpenAI for haikus and when I asked for a picture of Gandalf the wizard drinking a Coke. (I’ve since become less impressed, as I’ve read more and more AI-generated poems that have good rhyme, decent meter, and a clear theme but are otherwise wholly insipid.)
Along with amusement, there’s a sense of restrained optimism about what ChatGPT can do for some narrowly defined tasks. Maybe the most rote copy-editing can be made faster. Maybe the Comcast chatbot (which now feels like a bad Zork game where the victory condition is getting a phone call with a human) can be made even a little less painful.
(I haven’t heard that many voices of unbridled optimism. My guess is that the futurists and transhumanists have the same general attitude I have about ChatGPT.)
The other salient emotion I hear is despair. If ChatGPT can write decent high school essays, or take the bar exam, how will we be able to know whether high schoolers can write, or whether candidates for the bar actually know the law? Now that ChatGPT is out there, there’s a lot of new work to be done: creating more sophisticated ways to catch cheating, or totally rethinking how we evaluate students. (Even if we figure out tests and cheating, there are some deeper and very uncomfortable questions lurking. We don’t ask students to make quills, because you can buy a pen. How long before we don’t ask students to write basic texts, because you can buy AI-generated text?)
The despair is also about changes in artistic valuation. Creative workers like visual artists, poets, musicians, and graphic designers are watching their already very narrow profit margins become even narrower. I don’t know any artists who aspire to the artistic merits of Stephen King or Thomas Kinkade, but I know many who would die of joy if they had even a fraction of King’s or Kinkade’s commercial success.
Now, however, AIs can produce nearly infinite works in the style of any given artist, even when given only a few example works as training data. King and Kinkade succeeded because they had distinctive styles, prolific output, and effective marketing. That strategy is bound to become less successful when an unscrupulous or naive AI operator can co-opt that work with a few clicks.
My main reaction to ChatGPT hasn’t been amusement or despair, but surprise. To clarify, I wasn’t surprised by what ChatGPT can do, but rather by the amount and the intensity of discussion. (Apparently folks at OpenAI, the company that produced ChatGPT, are surprised too.)
You see, I’m a big science fiction fan, so I thought I was mentally prepared for the effects AI would have on society. Science fiction has taken so many different angles on AI. Surely, I thought, one of these would map onto our experience today.
You have, for example, your malicious AIs, AIs that try to kill us all. These AIs deserve to be feared. In the Terminator universe, defense contractors build an AI called Skynet. Just after the humans hand control of the US nuclear arsenal over to Skynet, it becomes self-aware and decides the smartest thing to do is wipe out all humans, first with nuclear war and then with time-traveling killer robots that look like Arnold Schwarzenegger. In the Matrix universe, bright-eyed technologists create an AI that becomes self-aware and decides to kill all the humans. In one Star Trek Original Series episode, there is a eugenicist AI that killed billions of “imperfect” humanoids with the equivalent of a finger flick. There is the famous HAL from 2001. The list goes on.
You also have your AIs that are pernicious only because of humanity’s moral failings. These AIs are notable because they can be defeated by human moral courage. In WALL-E, pollution and trash have made Earth uninhabitable, so humans live on a spaceship orbiting the planet. The humans, living in stultifying comfort under the care of a paternalistic AI, become intellectually vapid and physically weak. The plucky robot WALL-E falls in robot-love with a lady-robot, whom he woos and then later rescues. Along the way, he convinces the dopey human, who is nominally the captain of the spaceship, to literally rise up against his now menacing caretaker. (WALL-E is essentially Pixar’s retelling of the short story “The Machine Stops,” one of the greatest pieces of science fiction ever written, which was published way back in 1909.) In another Star Trek episode, the good guys find themselves on a planet where the local people alternate between listlessness and deadly coordination. It turns out that, back when the planet was on the verge of societal collapse, a wise man programmed an AI capable of controlling everyone on the planet telepathically. The ever-intrepid Captain Kirk unmasks the computer and convinces it that mind control is not the solution to conflict. The AI repents and blows itself up. Human moral failing creates the crisis in the form of an AI; human moral strength defeats the AI.
On the other end of the spectrum, there are stories where the AIs are more afraid of us than we are of them. In Speaker for the Dead, the sequel to Ender’s Game, Ender spends a lot of time with Jane, an AI apparently born from the emergent complexity of the humans’ faster-than-light communication network. Jane is careful about revealing herself to humans because she fears she will be destroyed. There is a similar story in The Moon is a Harsh Mistress, winner of the Hugo Award for best novel: an AI is born from the emergent complexity of the Moon colony’s computers. It aids friendly Moon-dwelling humans in their struggle for political independence from the oppressive Earth government but generally wants to remain a secret, for fear of being dismantled.
The most important more-afraid-of-us AI is Data from Star Trek. Unlike Jane and the Moon AI, Data was constructed by a human and is physically similar to one. Data is intelligent, curious, and morally aware. His main character motivation is to become more human. He is the quintessential outsider, entirely sympathetic to humans’ emotions and ambiguous moral systems, but also completely baffled by them.
Star Trek writers were very concerned that, when we encountered an AI like Data, one that was superficially robotic but also intelligent and morally aware, we would enslave it. In one of the best episodes of the entire Star Trek franchise, a smug scientist wants to disassemble Data, to learn how he works and then make copies of him. A courtroom drama ensues. The prosecution, who want to disassemble Data, argue that Data is not sentient because he was built by a human, has superhuman strength, and can be deactivated. The defense, who want Data to choose whether he will be disassembled, show that there is no test of consciousness or sentience that can distinguish Data from any of the humans in the room. The judge reasons that, if humans are willing to override Data’s wishes and build copies of him, then they will likely turn those copies into slaves and supersoldiers. Data must choose his own fate. (To my chagrin, this moral “quandary,” which has a fairly obvious solution, is repeated in later series with sentient holograms and “synths.”)
ChatGPT doesn’t fit any of these molds. It’s not about to enslave us, and it’s very far from something that could be “enslaved” in any meaningful sense of the word. Instead (and this is what surprised me) we are in a world where people are very willing to believe that ChatGPT is more than it is.
It doesn’t surprise me that children would be confused about whether Alexa or Siri are real people. The ability to interactively understand and produce human speech is a very good discriminator between what is and what is not human. Of all the things out there in the world, there are very few that aren’t human but that can talk.
Something that does surprise me is that scientists, people who nominally do critical thinking for a job, put ChatGPT on the author list for a scientific publication. The first people who did this worked for a healthcare AI company, so it could very well have been a publicity stunt. But since then, multiple groups have included ChatGPT as an author.
This is problematic for two reasons. First, the standard in the scientific world is that authorship carries some amount of responsibility. Although scientific journals vary in their precise requirements for authorship, they are variations on two key themes: you must contribute intellectually, and you must be responsible for the work. In many cases, you need to sign a statement saying you’ve read the paper and believe it to be accurate. For someone to put ChatGPT on an author list is a sign that those scientists see intellectual contribution as vastly more important than responsibility.
Second, ChatGPT is a tool, trained on data written by millions of humans. We don’t put Google Search as an author on papers; we cite the authors who wrote the papers that we found using Google. Any intellectual contribution that ChatGPT “produced” was actually the intellectual work of many people, digested and reported by an opaque algorithm. Scientists generally hate it when you cite anything other than the original work, by the person who had the original idea and did the work, and I hate the idea that we’re going to credit an opaque search tool for coming up with all the stuff it reports.
Citing ChatGPT, much less including it as an author, concedes the point that visual artists are fighting against: it shouldn’t be DALL-E that gets the credit for producing an image in the style of an artist, it should be the artist who gets the credit (and payment). Why is any use of ChatGPT not plagiarism?
I’m not saying that ChatGPT isn’t impressive. Bertrand Russell, one of the greatest philosophers of the 20th century, noted that philosophy gets a bad rap for not actually doing anything, and it gets this rap because whenever philosophy does do something, we call it science. Aristotle developed theories of how objects move through space, and we call that philosophy; Newton developed other, slightly more predictive theories, and we call that science. In a similar way, whenever something that was previously considered a goal for AI is achieved, folks outside the AI field tend to shrug and claim that whatever that was, it wasn’t “intelligence.” Noam Chomsky writes very persuasively that ChatGPT isn’t “intelligent” because it can’t tell truth from falsehood, or moral right from moral wrong. I’m sympathetic to his argument, but we should note that understanding and producing human speech are remarkable achievements in their own right. They are part of intelligence, or one aspect of it, even if they aren’t “the” AI, whatever that is.
If AI were deadly, I think we could unite against it and survive, à la Terminator. If we were in danger of straightforwardly enslaving an AI, I think we might wake up to our crimes and stop, à la Star Trek. And if someone were using AI as an obtrusive tool of oppression, we would treat that like most other oppression (which is to say, maybe not awesomely).
But what ChatGPT is showing me is that, if humans can use AI to redirect value and labor away from other humans, in a way that isn’t straightforward, we’ll do that. Our society’s nearly religious attachment to data, progress, and innovation is giving the controllers of AI, limited and imperfect as it is, one more tool to quietly extract value from the rest of us. It’s more about plagiarism than about power per se, at least so far.
Hi Scott,
Thought-provoking post! I have a couple of thoughts:
1. When you say that ChatGPT is only good for rote copy editing, what else have you tried? Personally I think that its coding is remarkably good. Not perfect of course, but it saves hours of time and suggests ways of doing things that I didn't already know. It's also useful for outlining lectures for e.g. undergrads, or even MSc/PhD students. If you ask it to outline "10 lectures on introduction to using bioinformatics for public health microbiology" or "the role of the microbiome in colonisation resistance", I'd argue that it does a better job than most MSc or PhD students could do in a day or two of work. Again, you have to tweak it of course, but it's much better than starting from nothing.
2. I'm not sure about the characterisation of what it is doing as plagiarism. Perhaps in a scientific setting, where ideas have to be credited to their originator, presenting the output of ChatGPT as your own work without any further reference would be bad practice, bordering on plagiarism. But I don't think most people would do this. It's like Wikipedia: it can be very helpful, but you still need to dig into the primary sources for scientific purposes. Beyond science, I don't really think it's plagiarism any more than a writer who writes in a similar style to another is plagiarising that person. If I ask you to write a song about malaria in the style of Dolly Parton (as I did with ChatGPT), and you do a good job (as ChatGPT did), then you're not plagiarising Dolly Parton, are you?
Personally I can't imagine it being useful for writing any part of a scientific paper other than maybe the first couple of paragraphs of the introduction, and actually, I haven't found the copy editing that helpful either. I could imagine it being helpful for motivation statements, or pathway-to-impact statements and things like that, though.