
Paradise for conspiracy theorists: the world of deepfakes

Improving technology will make it easier for bad actors and the criminal fraternity to fool us all


Earlier this month a video surfaced on a Ukrainian website, appearing to show President Zelensky telling his soldiers to lay down their arms and surrender. Clad in his signature green top and speaking in front of a backdrop adorned with the country’s coat of arms, he told his people that it was time “to say goodbye to you. I advise you to lay down your arms and return to your families.”

That video, we now know, was a fake – a deepfake, a synthetic, computer-generated video in which words are literally placed into someone else’s mouth. Watch it with that knowledge and the tell-tale signs are there: his head is slightly too large and looks less natural than the rest of his body, and his movements are slightly odd. It’s realistic – a long way from 1980s TV presenter Max Headroom – but not quite right.

But it’s still relatively early in the story of deepfakes; chances are you are fairly media- and tech-savvy; and, crucially, we approach the video knowing it to be false. What happens in the coming years as the technology improves, as it will, and the videos are created for an audience not only more credulous but actively seeking to have their prejudices confirmed? We know from Brexit and Trump the potential of fake news, and we probably know from our own families how readily people take as read whatever their Facebook feed tells them. What happens to democracy when anybody can make anybody say literally anything?

“The fake video of Ukrainian President Zelensky was socially and technically debunked quite quickly – and that is a good thing,” says Dr Lydia Kostopoulos, senior vice president of emerging tech insights at security firm KnowBe4.

“Socially, because the Ukrainian government already warned their citizens to expect a deepfake, and precisely one of this kind where the president surrenders, and so there was a widespread expectation for something fake to come up. And secondly, because the Ukrainian government was prepared for deepfakes to be part of this conflict’s information environment, they were ready and quick to debunk the fake video on their official social media channels within minutes of the fake video being posted.

“Technically the video was not of Hollywood synthetic video quality because it wasn’t made by Hollywood or people with the skills and equipment to do so. Those without skills seeking to make poor-quality deepfakes will find free tutorials and software online to do so. However, while the gap between Hollywood synthetic video quality and poor-quality deepfakes remains, that space between them is closing as more accessible options to create deepfakes become available.

“What this means for our future is that, just like every other technology is becoming cheaper and more accessible and requiring fewer skills to use than before, so too are deepfake technologies.”

A deepfake image of Ukraine’s President Zelensky (Pic: YouTube)

And that, of course, is before even getting to the myriad other ways deepfakes can be used: fraud, blackmail and pornography.

Indeed, the term deepfake itself originated on Reddit in 2017, when a user called “deepfakes” began sharing self-created videos and was quickly joined by others. Predictably, a sizeable proportion were pornographic videos in which female performers’ faces were replaced with those of celebrities (slightly more wholesomely, many others featured Nicolas Cage placed incongruously into classic films).

A year later, US filmmaker Jordan Peele produced a deepfake video of Barack Obama saying, among other things, that “President Trump is a total and complete dipshit”, with the intention of demonstrating the power and dangerous consequences of the form. While much use of the technology is harmless – recent additions to the Star Wars series have used it to insert dead actors, or younger versions of living ones, into the films – greater use in the political arena is inevitable. Some of it has benign, if dubious, intentions – Indian political parties have used it to show candidates speaking languages they don’t necessarily speak. Some clearly doesn’t: in 2020, Belgium’s Extinction Rebellion published a video of then prime minister Sophie Wilmès apparently pushing Covid falsehoods.

How worrying is this? How soon until anybody can do this? At the moment it’s not one for the enthusiastic amateur. Andersen Cheng, CEO of Post-Quantum, a British company building an encryption algorithm resistant to quantum computers, tells me: “It depends on how convincing you want it to be. If you really want Hollywood quality, with lip movements all synced, it’s not an easy job today. But if you just want some amateurish stuff, there are already websites that could do it, but it’s more for fun.”

And Nikolay Gaubitch, director of research at voice fraud specialists Pindrop, says: “There are tools to help you create fake voices, fake videos; however, it’s not yet that straightforward that anybody can just sit down and do it. It does require some level of sophistication.”

But inevitably that will change, as all technology evolves and becomes more user-friendly. And that’s not just a concern for democracy and the media – I spoke to a number of security companies that are already seeking to stay ahead of the fraudsters. With voice recognition playing an increasing role in how banks verify our identity before giving access to our financial details, and facial recognition expected to play a much bigger part in our lives over the coming years, deepfakes are of obvious interest to those looking to get at our data and money.

“As a result of the difficulties around detecting deepfakes, they are becoming of increasing concern to organisations,” says Srinivas Mukkamala, senior vice president of security products at IT firm Ivanti.

“Deepfakes have given cybercriminals a new medium with which they can look to spread misinformation, extort businesses and commit fraud. They are essentially a new form of phishing. In 2019 we saw deepfake technology used to fool the CEO of a UK-based energy firm into urgently transferring significant funds to a supplier over a phone call. While this wasn’t a video, the company still transferred the funds, further highlighting that deepfake videos could pose a serious security threat to organisations.”

Ashvin Kamaraju, chief technology officer at Thales, says: “Deepfake technology is now so sophisticated that we are starting to see cybercriminals move away from tried and tested methods like phishing, to carry out far more advanced attacks on enterprises. In 2022, we will see deepfake AI utilised to impersonate the CEO of a high-profile global enterprise.

“Such attacks have already started to gain in popularity, with threat actors using AI to clone the voices of business leaders in order to steal huge amounts of money. If these attacks become more widespread, the consequences could be devastating.”

Paul Scharre views a manipulated video by BuzzFeed with filmmaker Jordan Peele (Photo: ROBERT LEVER/AFP via Getty Images)

These attacks will probably not be people trying to get at your NatWest account – at least, not yet. As Sarah Munro, director of biometrics at Onfido, says, the most likely targets for deepfake fraud are “high net worth individuals or public figures”.

She explains: “There are two main drivers for this: firstly, they need to be worth the upfront time investment from the criminal in developing the personalised video; secondly, the criminal will need around six to nine minutes of video footage to create a good likeness of the individual in a deepfake. Often this can be scraped from social media, but it may also be taken from previous interview footage if the individual has a high profile in the media.

“Although deepfake technology is improving at an alarming rate, so too is the technology that helps to spot it. AI-powered biometric technology can determine very accurately whether the video presented is real or a forgery. Techniques like motion tracking, lip-sync analysis and texture analysis can verify whether the user is physically present. Especially where footage has to be produced in real time, deepfake quality deteriorates because of the heavy processing power required, so the technology doesn’t currently lend itself to quick reactions.

“While impressive, today’s deepfake technology is unlikely in most cases to achieve parity with authentic video footage – but it is unquestionably one to watch. Recognition of its growing sophistication, and imagination about its potential, is permeating the fraud community and will, in turn, add to the criminal appetite.”
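For a sense of what one of those signals – texture analysis – can mean in practice, here is a minimal Python sketch. It is illustrative only: it assumes the OpenCV library and its bundled Haar-cascade face detector, and the function name and scoring approach are my own, not Onfido’s actual pipeline. It scores the texture sharpness of the face in each frame of a video; synthetic faces often show unusually smooth texture, or texture that flickers from frame to frame.

```python
# A toy illustration of one liveness/deepfake signal: per-frame texture
# sharpness on the face region. Illustrative only - not any vendor's
# real detection pipeline.
import cv2

def face_texture_scores(video_path: str) -> list[float]:
    """Return a Laplacian-variance sharpness score for the largest
    detected face in each frame. Synthetic faces often show unusually
    smooth texture, or texture that flickers between frames."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # no face found in this frame
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        face = gray[y:y + h, x:x + w]
        # Variance of the Laplacian is a cheap proxy for texture detail.
        scores.append(cv2.Laplacian(face, cv2.CV_64F).var())
    cap.release()
    return scores
```

A production detector would fuse many such signals – lip-sync consistency, motion tracking, challenge-response liveness checks – rather than trust any single score.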

So not just a threat to democracy but a new way of committing crime – a dark side to a form that may have first come to your attention via a series of highly realistic fake videos of Tom Cruise playing golf, performing a magic trick and telling a convoluted anecdote about Mikhail Gorbachev.

And it’s a genie that will not be going back into the bottle. While hopefully no mainstream political party will turn to such outright con tricks to mislead voters – although it’s worth remembering that during the 2019 general election the Conservatives rebranded their official Twitter account as “factcheckUK” during the televised leaders’ debate and used it to publish anti-Labour posts – their outriders almost certainly will.

What might we have seen from some of the less principled elements of the Leave campaign had this technology been in play? Vows from Jean-Claude Juncker to abolish the monarchy in the event of a Remain vote? What words might we see placed in the mouth of Sir Keir Starmer across Twitter, YouTube and Facebook during the next general election campaign?

While there will be increasing pressure on those platforms to either label or remove damaging deepfakes, the immediate requirement is education – teaching people to approach such videos online with a questioning instinct.

Kain Jones, CEO of image monitoring platform Pixsy, says: “There are very subtle cues that can help you identify deepfakes, which vary depending on how ‘well made’ the deepfake is.

“Classic giveaways in an amateur deepfake might be slightly unnatural looking skin tones, or, in deepfake videos where the subject is speaking, the lip-sync might be a little off. Look to the edges of deepfake faces too, or small details like hair or jewellery – with a deepfake, these areas might include a subtle blur, flicker or jagged edge that wouldn’t otherwise be there on a real photo or video.”
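That last giveaway – the blurred seam around a swapped face – can even be probed programmatically. Below is a rough Python sketch along the same lines, again assuming OpenCV; the face box is taken to come from any detector, and the ring width and the reading of the ratio are illustrative assumptions rather than an established forensic test. It compares edge energy in a thin band around the face with the energy inside it; the soft blend boundary of a face-swap composite tends to depress that ratio.

```python
# A rough sketch of the 'blurred seam' check: compare edge energy in a
# thin ring around the face with the face interior. The ring width and
# the interpretation of the ratio are illustrative assumptions.
import cv2
import numpy as np

def seam_blur_ratio(frame_bgr: np.ndarray,
                    face_box: tuple[int, int, int, int],
                    ring: int = 10) -> float:
    """Ratio of Laplacian edge energy just outside the face box to that
    inside it. A face-swap's soft blend boundary tends to depress edge
    energy around the face, pushing the ratio down."""
    x, y, w, h = face_box
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    inner = lap[y:y + h, x:x + w].mean()
    # Take a patch that includes the surrounding ring, then blank the face
    # region so only the ring contributes to the outer average.
    y0, x0 = max(y - ring, 0), max(x - ring, 0)
    patch = lap[y0:y + h + ring, x0:x + w + ring].copy()
    patch[y - y0:y - y0 + h, x - x0:x - x0 + w] = np.nan
    outer = float(np.nanmean(patch))
    return outer / float(inner) if inner > 0 else 0.0
```

On genuine footage the ratio varies with background clutter, so any such check is a hint to investigate further, not a verdict.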

But perhaps the best part of a decade of deeply cynical politics has already made many of us immediately question whatever we see – even when those things turn out not to be so fake. When, last month and with 130,000 Russian troops already amassed on Ukraine’s border, Diane Abbott appeared in a video saying that Nato was to blame for the tensions, many on social media questioned its authenticity. Looking oddly stilted against a wooden backdrop, Abbott informed viewers that “claims that Russia is the aggressor should be treated sceptically”. So was the video by many.

The only problem? This wasn’t a deepfake, or a fake at all. Finding that balance between a healthy scepticism and becoming an outright conspiracy theorist is about to become an even bigger challenge.
