According to Professor of Computational Lexicology Piek Vossen, the euphoric reactions to the AI writing chatbot ChatGPT are entirely misplaced. "It could very well spell the end of the Internet."
The AI language program ChatGPT is being used to churn out texts of all kinds: from a blog about Tokyo to a court appeal. They’re hardly distinguishable from ‘real’ texts. What does science think about this?
"I don’t get all the fuss," Vossen said, starting the conversation before I could even ask him that question. "ChatGPT is not a revolutionary new system or a different approach. It just builds on previous versions and is somewhat better."
Plain speaking. So you’re not an advocate?
"No. It’s also tiresome that, as scientists, we’ve spent a decade explaining that these systems are actually nowhere near as intelligent as they seem. Now there’s this new version that seems impressive at first glance but, fundamentally, nothing has changed. It’s a kind of parrot that gives an ingenious response. Beyond that, it’s nothing."
A parrot, because the words have no meaning?
"Exactly. The biggest problem is that these systems have no understanding of the world. The system has no idea what it’s writing about. Such inconsistencies were clear in earlier versions when, for example, a woman was referred to as ‘him’ because of a gender bias in the data. You see less of that now. When these systems generate something, it sounds creative and fantastic, but it represents nothing. It’s not about anything. They have nothing to say by themselves and they can only react."
Isn’t this invisibility, and aren’t these seemingly ‘good’ texts, exactly where the danger of the program lies?
"Absolutely. Even I fall for it when there’s a text that’s in my own area of expertise. It’s becoming increasingly difficult to find out the truth. It could very well spell the end of the Internet. As soon as people feel they can no longer trust anything, they may simply turn away from it."
Do you think there are ways we can recognise such texts?
"What we need to do as researchers is develop AI that debunks AI. This is being done using so-called ‘probing tools’, but there are no easy-to-use products yet. You’ll need to train people to focus on certain characteristics."
Have you figured out how to detect whether essays were written by ChatGPT?
"No, not yet. Maybe I’ll take the opening sentence and give it to ChatGPT to see what the system generates next and how far it resembles what the student wrote. On the other hand, I can’t tell whether students have used a spell checker either. They could just use the system to generate the text and then edit it afterwards. I’m afraid there’ll be tough times ahead for many lecturers. We may have to switch to oral exams."
So for now, you don’t see yourself using ChatGPT’s software?
"Absolutely not. One of the main reasons is that it’s not open AI at all, even though they say it is. You can use this version online, but I can’t download it and use it in other software."
"And may I make one final point of criticism? It takes a lot of computing power to create a program like this. You don’t hear anyone talk about what that means for the environment. ChatGPT is also based on data from a particular time frame. Five years from now, language will have changed, we’ll have changed, and this system will be obsolete. I think it’s stupid to build such an unsustainable model. It’s great fun and all that, but who knows how many computers it has steaming away, generating heat. And even though this system has seen unimaginably more text than a six-year-old child has ever read, that six-year-old is still smarter than ChatGPT."