

Why fake news is about to get a lot worse

A deepfake of Nicolas Cage as Star Trek's Jean-Luc Picard. Picture: Archant

If you think you’re too smart to fall for fake news, you are wrong. Advances in technology mean it is about to get a lot more sophisticated. PARMY OLSON reports.

Most people have heard the same kind of story from an older relative: ‘We were just turning on the computer,’ they might say, ‘when a box flashed up on the screen saying we had a virus! A bad one! And so we had to buy a special package of software for £19.99 to clean it out! Right?’

You shake your head silently to yourself, give a knowing smile and explain that what they saw was a pop-up ad. Those poor, digitally-illiterate older people. They haven’t a clue. Good thing we know what’s real and what’s fake. Right?

It turns out that every generation gets their turn at being left behind, blinking in confusion, by the pace of change. That includes those of us who rolled our eyes at relatives who thought pop-up ads were a sign of being hacked.

The ground is already shifting under our feet, and those of us who grew up with computers and smartphones are sometimes struggling to keep up. Maybe we nearly clicked on a rogue link that ensnared us in a phishing scheme, or almost shared a fake news headline that seemed credible enough because it went viral on Facebook or Twitter. (Admit it, you’ve at least wondered about some of those news stories.)

Fake news has taught us that the wisdom of crowds is something of a myth. We’re learning that information isn’t necessarily true just because tens of thousands of other people made it go viral.

But just as soon as we’ve learned to second-guess the mob-like quality of social media, there’s now another scheme to mess with our heads: deepfakes.

The term was coined on Reddit, a social news site that aggregates viral stories and images, many of which percolate up to become mainstream news stories. The site is teeming with inspiring, deeply informative content – but it also helped popularise creepshots, where men would take surreptitious photos of women without their consent, and the 2014 iCloud leak of celebrity nude photos.

Since early January, Reddit has also become a spawning ground for deepfakes. Ever seen a face-swapping app for photos? This is the next level: videos in which the original face of an actor is swapped out for that of someone else, using a machine-learning algorithm.

The videos have the veneer of CGI but are realistic enough to let you suspend your disbelief for a while, as you might when watching an actor like Carrie Fisher digitally recreated in the latest Star Wars movie.

In time, they’ll get good enough to look convincingly real.

You don’t need to be an expert in artificial intelligence to make deepfake videos. You just need a computer with a high-performance brain – a graphics processing unit (GPU) made by Nvidia – to make it work, as well as time and a modest understanding of programming.

Many have flocked to read the detailed deepfake instructions on Reddit. As of earlier this month, a desktop tool for creating deepfakes called FakeApp had been downloaded more than 100,000 times. The Reddit discussion page devoted to deepfakes had meanwhile amassed at least 85,000 subscribers.

One innocuous use of deepfakes is to transplant the face of actor Nicolas Cage on to another actor in a movie scene. But visitors to Reddit have taken the next, crude step of transplanting the faces of celebrities – typically famous actresses – on to porn. Some are even selling access to their deepfake videos, distributing them on sites like Pornhub.

Hollywood is scrambling to figure out how to respond to something that is not quite defamation and not quite copyright infringement. Porn aside, the rest of us have to ask ourselves how long it will be before we don’t just gawp at deepfake porn, but are duped by a deepfake video that makes the rounds on social media and becomes news.

Remember the infamous Access Hollywood video leaked in 2016 that caught Donald Trump talking about how he ‘grabbed [women] by the pussy’ – you know, the bombshell that didn’t stop Trump from becoming president? If he had said back then that the audio was fake, no one would have believed him. If he claimed it today, a few people might wonder. In a couple of years, it will be a legitimate defence.

Rapid developments in machine learning and AI are not only making it possible to put someone’s face on another’s body; they are also laying the groundwork for artificial voices that sound just like Donald Trump’s or Theresa May’s.

Research by DeepMind, the London-based AI division of Google, recently led to a new kind of software that takes a massive step forward in making artificial voices.

The software is called WaveNet, and some experts say that within the next two years engineers will use it to create computerised voices (like the ones used by Siri and Alexa) that sound almost indistinguishable from humans.

They’ll also be able to mimic other people’s voices with uncanny accuracy, by blending intonation, pitch and timbre like mixing colours on a palette. Since the software is open source, anyone will theoretically be able to build their own digital voices using the same system – not unlike people’s ability to create deepfakes today.

This throws up all kinds of possibilities for creating news stories aimed at duping the public. Imagine fake videos of politicians in strip clubs that look incredibly real, or fake audio of a respected business figure accepting bribes over the phone.

At the very least, it will be easier to dismiss real evidence as digital trickery that someone rustled up on their laptop. Fake news, in other words, is about to get nuts.

Next time you scoff at a pensioner who is deliberating over an email from a Nigerian prince, bear one thing in mind. We’re about to be just as flummoxed as they are over the next few years.

• Parmy Olson is a writer for Forbes magazine and author of We Are Anonymous
