  • Jessica Barker

Discussing Deepfakes on BBC Radio



Deepfakes themselves have a short history, but like everything in cyber security (and technology in general) their roots can be traced back many years. We can, for example, go back to the 1860s and the iconic image of Abraham Lincoln, which is actually a composite of Lincoln's head on John Calhoun's body.


Fast forward to 2021 and earlier this week I was on BBC Radio Gloucestershire talking to John Smith about deepfakes. We covered what deepfakes are, some examples of how the technology has been used maliciously and what we need to consider for the future of deepfakes.


📻 Listen here from about 2:13 (needs a BBC login) 📻



What we cover in the interview


What are deepfakes?


Deepfakes are video, images and audio that have been manipulated using artificial intelligence (AI) to replace the original person with someone else's likeness.


Where have deepfakes come from?


The name deepfake was coined in 2017 by a Reddit user of the same name, who created and shared pornography clips on the site, swapping the faces of celebrities onto the bodies of pornography performers.


How do deepfakes work?


Deepfakes use artificial intelligence (AI) algorithms that are good at identifying patterns in large datasets. Creating a deepfake involves running thousands of images of the two people the creator wants to swap through an AI algorithm called an encoder, which compresses each face into a shared representation, learning their similarities. A second algorithm - a decoder - is then trained for each person to reconstruct their face from that representation. To perform the swap, the creator encodes images of one person but reconstructs them with the other person's decoder.
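The shared-encoder, per-person-decoder idea can be sketched in a few lines of Python. This is a toy illustration of the data flow only: the "networks" here are random linear maps standing in for the deep convolutional networks that real deepfake tools train on thousands of images, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the trained networks: one shared encoder compresses
# any face image to a small latent vector, and each person has their own
# decoder that reconstructs a face from that vector. In a real deepfake
# pipeline these would be trained neural networks; here they are just
# random matrices, to show how the swap works.
IMG_DIM, LATENT_DIM = 64 * 64, 128

W_enc = rng.normal(size=(LATENT_DIM, IMG_DIM)) * 0.01    # shared encoder
W_dec_a = rng.normal(size=(IMG_DIM, LATENT_DIM)) * 0.01  # decoder for person A
W_dec_b = rng.normal(size=(IMG_DIM, LATENT_DIM)) * 0.01  # decoder for person B

def encode(face):
    # Face image -> compressed latent code (expression, pose, lighting).
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    # Latent code -> reconstructed face image.
    return W_dec @ latent

face_a = rng.normal(size=IMG_DIM)  # a flattened image of person A

# Normal reconstruction: A's face through A's decoder.
recon_a = decode(encode(face_a), W_dec_a)

# The swap: A's face through B's decoder. In a trained system this
# yields person B wearing person A's expression and pose.
swapped = decode(encode(face_a), W_dec_b)

print(recon_a.shape, swapped.shape)  # both (4096,)
```

The key design point is that the encoder is shared between both people, so it is forced to learn a face representation that is not tied to either identity; identity is supplied entirely by which decoder you pick.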


What are the security implications?


The vast majority of deepfakes are still pornographic - some research suggests as much as 96% of them. It is easy to see why some people argue that deepfakes are being weaponised against women.


Last year, experts from University College London explored the crime threat posed by artificial intelligence and ranked deepfakes as the most serious threat.


Global use of video communications has increased hugely in response to the Covid-19 pandemic; the more we use a technology or means of communication, the more criminals seek to exploit it. Common email, SMS and voice spearphishing scams that we see today are likely to increasingly capitalise on deepfake technology - and there are already reports of this happening. In 2019, the chief executive of the UK part of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a criminal who allegedly used deepfake technology to mimic the voice of the German CEO.


Deepfakes pose an obvious threat in terms of reputation damage, and this threat doesn't necessarily just come from places we might expect. In a recent case, a mother from Pennsylvania was arrested after allegedly creating deepfakes of her daughter's cheerleading rivals. She allegedly used source videos from the girls' social media accounts and doctored them to show the girls naked, smoking and drinking; she then sent the videos to the cheerleaders' coach, apparently to get the girls removed from the squad.


What are the social implications?


It's not hard to see how deepfakes have the potential to shift stock prices, influence elections and cause social unrest.


Beyond that, they also open up the scope for plausible deniability for anyone who is recorded saying or doing something they later wish to deny. This has the potential to exacerbate an already precarious social relationship with trust: if seeing and hearing is no longer believing, how do we distinguish truth from lies?


How can we protect ourselves?


Like everything in this space, it's really a cat-and-mouse game.


Deepfakes used to be much easier to spot, and low-quality ones still are: patchy skin tone, flickering around the edges of the face and strange lighting effects give them away. But as the technology advances, it gets harder to identify them - as anyone who has seen the Christopher Ume / Miles Fisher deepfakes of Tom Cruise will agree.


Many individuals and organisations are working hard to develop solutions to detect deepfakes, which will likely involve using AI.


For now, we must ultimately rely on raising awareness of deepfakes and encouraging critical thinking. Content like the Tom Cruise deepfakes and the Channel 4 alternative Queen's Speech has gone a long way to help bring awareness of the technology into the mainstream. Clients have been asking us to cover deepfakes in our awareness-raising training more than ever, and when we speak about deepfakes many people now have an idea of what they are (which was much less common even just a couple of years ago).


One final thought on deepfakes is that we mustn't let this emerging threat distract us from the more common current threats. Criminals generally use the easiest, cheapest and most accessible ways to carry out their scams, which means phishing by email, SMS, social media and telephone calls is still far more common than anything using deepfake technology. The 'shallowfake' French minister scam is a telling reminder of this: the criminals involved didn't use deepfake technology to impersonate Jean-Yves Le Drian, but rather a silicone mask.



I mentioned our awareness-raising earlier. We're working with clients on everything from their cyber security awareness and culture programmes to delivering bite-size content and live sessions. Please get in touch if you'd like to know more.
