ChatGPT and Science, Technology and Society Studies

Why ChatGPT has generated so much hype

While the discourse around ChatGPT has probably started reaching saturation point, it’s worth pointing out how few of these discussions are actually about the chatbot itself. From being too quick to accuse students of cheating, to underestimating what humans can do, to the perils of following the hype – there’s much that caught the attention of Flora Lysen, Massimiliano Simons and Maximilian Roßmann from the Faculty of Arts and Social Sciences (FASoS).

“Tell me what you think about AI and I’ll tell you who you are,” says Simons. “People are bringing their own concerns to the issues.” He attended a debate among FASoS students and staff and was intrigued by the implicit assumptions about students it revealed, with many universities feeling the need to act decisively to prevent fraud.

“Our impression was more that students were curious but had no intention of cheating themselves out of the education they chose.” Fellow researcher Lysen concurs. “I was surprised to find that many of my students pay for programs like Grammarly, so they are really looking for help with their writing.” She notes that tech companies are fuelling an arms race between text generation and fraud detection software, pitting students against institutions with both sides paying to get a leg up on the other.

The art and craft of omission

“It’s a good tool, so long as people don’t bring any assumptions about facts or knowledge to the table,” clarifies Simons. “It generates text – nothing else. And even regarding language, it’s worth keeping in mind that most things we say on a day-to-day basis aren’t about transferring information.” He adds that, in any case, higher education is about critical evaluation and making choices. “Often the hard work in producing a meaningful text lies in leaving out irrelevant things. In many ways, the only way to prompt the bot accurately enough to write the text you want is to have written that type of text.”

Roßmann warns about swallowing corporate predictions on the uses of AI wholesale. “Think of 3D printers and their promise to print everything at the touch of a button. Obviously, the shortage of medical spare parts during the pandemic provided a stage for this vision. When people tried to recreate early successes, however, it was very sobering. In the end, the only things successfully 3D-printed at large scale were face shields and these little straps that hold the masks together behind your head.”

“I don’t think ChatGPT is that mind-blowing,” says Lysen. “It won’t completely change how we relate to texts and writing. The impact, to me, seems more about the attention we dedicate to this phenomenon.” She thinks the work of writers crafting texts for specific situations won’t become obsolete. “Those who work with it will have to learn how to prompt and re-prompt it well enough to really contextualise the resulting texts. We shouldn’t underestimate human skills, experience and knowledge in that context.”

Hype and awe

Lysen thinks ChatGPT’s design – the text appearing gradually in a neat, slick interface – generates awe. She refers to the long history of chatbots, going back to the ELIZA effect – effectively our tendency to anthropomorphise machines. “That reception, that awe, has become a cultural narrative – but the awe will also wear off. When I’m using a customer service chatbot I’m not in awe anymore; I just need a problem solved.”

That sense of awe, according to Simons, arises because the default assumption persists that texts are written by humans. “It might be that this expectation changes over time, especially with texts for commercial use.” He adds that there are also ‘post-artificial’ texts, a category where it is essentially irrelevant who or what produced them: signs, instructions, ingredient lists, etc.

Simons also wonders if the hype won’t die down rather quickly after the initial phase of exploration. “That was my experience: I used it for a couple of days; it was fun – and then I got bored of it.” Roßmann foregrounds the entertainment value as crucial to the lively response – but is also convinced of its utility. “After it came out, I translated my papers into punk-rock songs and was impressed with the genre certainty. I also created short advertisement texts for my parents’ distillery.”

How and which stories proliferate

Roßmann points out that what really generated the attention was that everyone got the chance to experience it. “ChatGPT is like a playground for exploring future scenarios and the relationship between generative AI technology and values. With the orchestrated global release, everyone wanted to share their first-hand experiences using the same buzzwords or hashtags – all at the same time. This incentivised a hyperbolic spiral, since only the most thrilling and garnished stories stand out.”

The limits on characters and attention nudged people into drawing on familiar tropes, from robots threatening our jobs and freedom, to students cheating. “However,” adds Roßmann, “that doesn’t make these stories true or relevant. Indulging in them out of excitement might leave us with the wrong issues on top of our agendas.” That dynamic applies to individuals as much as to media outlets. Simons points out that often the same companies that develop AI spend money on think tanks that then amplify PR statements by talking about, for example, existential risk.

“You need colossal venture capitalist backing to develop LLMs like these,” says Lysen, “and to generate the kind of hype we’ve seen here.” AI, as well as being scientifically very exciting, is also a very lucrative field of speculation. Earlier this year Microsoft announced a multi-billion-dollar investment in OpenAI. One assumes their stock price hasn’t been negatively impacted by the hype either.

Which narratives should we foreground?

The discussion around the potential commercial use is part of a larger narrative around efficiency, cost saving and profit. “It’s almost as if it makes us more enthusiastic about AI replacing people,” says Lysen. “The problem with the augmentation narrative, e.g. in medicine, is that the actual intention is to make processes faster and cheaper.” Roßmann agrees that we shouldn’t reduce the work of e.g. a doctor to only those components that AI could do better. “What about establishing the kind of relationship needed to understand the patient’s life circumstances or for the patient to accept medical advice?” Lysen adds that there will also be a redistribution of labour to some extent in favour of programmers, developers, marketers, and everyone involved in maintaining the colossal infrastructure behind the AI.

Roßmann hopes the debate will be about more than merely recasting existing narratives and arguments: “We could also take a step further and finally act on issues like labour conditions, inequality or sustainability [referring to the vast amounts of energy required to train and run e.g. LLMs].” Simons replies that, unfortunately, that’s the hope with every new technology – and that he’s not confident that ChatGPT will serve to amplify the most relevant narratives here.

By Florian Raith