
Everyday Philosophy: ChatGPT and the rise of the machines

The latest AI system has been hailed as a gamechanger in journalism, but will the bots have their day?


Journalists love writing about how journalists will soon be replaced by artificial intelligence. For some that will be a low bar. The Turing test of the near future may be a machine’s ability to convince a Daily Mail reader that the latest polemic against migrants was written by a flesh-and-blood employee and not by a machine designed to generate rabid headlines to stir up xenophobes. But at the higher end of news and commentary, that has until very recently seemed implausible. Yet now, with the emergence of ChatGPT and similar systems, perhaps the bots will have their day.

“Stochastic parrots” is the apt name that Prof Emily Bender, Timnit Gebru and their co-authors chose for systems that mimic human writing and conversation based not on any appreciation of meaning, but on the probability of one word following another. They are trained on huge databases of text. The results they produce can already be convincing, yet they are sometimes wildly inaccurate. We need to reflect on what is going on.

The German media group Axel Springer is already looking to eliminate some roles because its chief executive believes that ChatGPT is the first shot in a revolution that will transform journalism. If artificial intelligence can do the jobs of news-gathering, summarising, and commenting better and faster than journalists, then IRL reporters and editors could soon be as redundant as typewriters.

John Locke noted in his Essay Concerning Human Understanding (1689) that if we encountered a parrot that was capable of rational dialogue with us, we wouldn’t immediately leap to the conclusion that the parrot was a human being (“man” in Locke’s terminology). He thought we would rather assume we were dealing with a very intelligent, rational parrot. But now we should realise Locke went too far. Today we have the technology to produce machines that plausibly fake rational communication. But that doesn’t make them rational, nor mean that we should leap to the conclusion we are dealing with intelligent beings that just don’t happen to be human. These are sophisticated programs designed to mimic more-or-less-rational, more-or-less-intelligent beings. Don’t be taken in.

In some areas of life, we are happy to accept that if something is indiscernible from the real thing, it might as well be the real thing. If a factory-produced violin sounds as good as a Stradivarius when listened to behind a screen, why go for the incredibly expensive rare option if what you’re interested in is the beauty of tone of the instrument? You might want to invest in a valuable instrument or delight in the weathered wood and the craftsmanship, but as a device for producing extraordinary sound, it fares no better than the production-line model.

But in other areas of life, the chain of cause and effect that produced the object we are dealing with matters a lot to us. In the visual arts, an original Picasso painted by the artist’s hand is valued more highly (and not just at auction) than a visually indistinguishable copy. Some people would be happy with the copy, but for others, the artistic value is tied up with the actual history of the object’s creation, and they would travel to a distant museum to see an original.

Another example: even if lab-produced meat is indistinguishable in flavour and nutritional value from the meat of an animal reared in a field, it still matters which you are served. That’s because there are serious ethical issues about how animals are harmed in conventional meat production.


With machine-produced journalism, it will be important to know what we are dealing with. Writing spewed from probabilistic pastiche-generators may eventually be indistinguishable from that produced by human beings. It’s likely to cost much less, too. But that doesn’t mean it should be thought of in the same way as human writing. IRL journalists can be held morally accountable for what they write. In contrast, artificial intelligences can only mimic accountability: they don’t understand what they are doing, and no one can blame them when they produce gobbledegook or fake news. Although some human journalists are corruptible and biased, the best are courageous defenders of truth and accuracy who risk their lives to increase our knowledge and understanding of what is happening. It is no wonder that authoritarian regimes murder, torture and imprison them on such a scale.

We don’t want a world in which demagogues can control the news with a few deft tweaks to code. But we might get one, nevertheless.

