
Multicultural Man: On autocomplete

Microsoft’s new software wants to stop us writing anything that’s unacceptable or offensive. But unacceptable and offensive to whom?

Ever since spellcheckers first appeared with primitive proto-word-processors, some of us have been finding their suggestions irresistibly amusing. I remember a colleague I had in the late 1980s, who couldn’t resist singing it out to the whole office when his machine offered him up “crotch” in lieu of a misspelt “crutch”. It was as if the technology were squaring a circle for him – allowing him to be at once an incorrigible surrealist, and a sedulous salaryman.

It’s been the same with each successive iteration of software designed to effect better syntax: there’s something about a machine ruling on semantic matters that can’t help but make us feel it’s sentient. When I got my first iPhone I began, idly, to thumb text into the Notes application – then I realised the handheld computer would precipitately suggest words, so I decided to go with the flow and see what it would “compose”, if prompted purely randomly. Beginning with a negative, “I don’t”, I soon enough had a sentence that read: “I don’t have any chance to be a good person and I’m just getting on my own way and I’m doing the right things for you and yours…”

Then I stopped – because it was creeping me out, and I began to suspect this predictive text algorithm might generate a viable piece of writing if I kept on, thus reducing me from the status of “writer” to that of “data entry clerk”. Others have had similar notions throughout the rise of the machines – and even before: Roald Dahl wrote a story about a novel-writing machine called The Great Automatic Grammatizator in the early 1950s – and yes, he imagined it working by applying its knowledge of grammatical rules to produce new copy – in other words: predictive texting on a vast scale.

Dahl’s was a minatory vision: capable of producing a novel in 15 minutes, the precocious contraption renders human creativity otiose. But the technicians who’ve followed in his avant-garde slipstream have embraced the idea of machined literature with the same enthusiasm as they have the notion of self-driving cars. That both innovations remain on the drawing board rather than in the streets or on the bookshelves is a function of the same – probably irresoluble – problem: the arbitrary – yet immensely significant – character of human ethical thinking.

With no capacity to distinguish between right and wrong as we imagine we do, a fully autonomous vehicle would be bound to be judged as responsible for accidents that were unavoidable – and by the same token, a Grammatizator unable to negotiate the subtle interplay between taboo and desire would hardly be likely to depict humans very realistically: its characters would probably be as robotic as itself. But even if computers can’t hack creativity yet, their creators have come up with ways of – somewhat paradoxically – making us less so.

It wasn’t until I went into the settings for my Outlook Express email program that I realised Microsoft weren’t just trying to curb my tendency to pleonasm – something I wanted no help with anyway, regarding it as integral to my style – but that they also wanted to stop me writing anything that’s unacceptable or offensive. Unacceptable and offensive to whom? I hear you cry – and that occurred to me as well. The checks available include ones relating to “inclusiveness” and “sensitive geopolitical references” – so I set out to see what would arouse it, and what alternatives its algorithm would provide.

Under “inclusiveness”, Microsoft listed these prejudices: cultural, ethnic, gender, racial, sexual orientation and socioeconomic – the implication being that its program would query any instances, and suggest appropriate “gender-neutral pronouns”, while also ensuring we realise that “the Gaza Strip” is a rather contentious designation. The only problem is that it does nothing of the sort – as I sat there and typed, it allowed me to come out with any number of calumnies against all manner of groups, persuasions and orientations.

Indeed, the only use of language that it queried was “blacks”, suggesting that “black people” might be a less offensive term. Hear, hear! But this falls far short of the comprehensive prevention of thought-crime the other filters might be expected to enact. What’s happened? Have I just not got the appropriate software update – or is it, rather, that this vast corporation is curbing whatever proclivity we might have for hate speech by pretending that Big Brother is reading every word, when in fact his attention is entirely elsewhere?

Any answers, please send them to me on a postcard – thereby ensuring that every single one of your sentiments is authentic, rather than a mere suggestion, arrived at by a process of elimination.
