Deepfakes may be more dangerous than you thought!

Eli
4 min read · Feb 15, 2022


designed by Rawpixel.com — Freepik.com

In December 2020, minutes after the Queen delivered her yearly Christmas speech, Channel 4 aired some surprising footage that led to quite a lot of controversy. Using deepfake technology, they were able to have the actress Debra Stephenson appear as the Queen on national television and deliver a speech that was far from the one you'd expect the Queen to give.

The speech touched on quite a few controversial topics, including the decision by the Duke and Duchess of Sussex to leave the UK. It also alluded to the Duke of York's decision to step back from royal duties in 2019 after an interview he gave to the BBC about his relationship with sex offender Jeffrey Epstein, and went on to talk about the limits of speaking plainly and from the heart on the BBC. The theme of the speech was trusting what is genuine and questioning what is not, and it even showed the deepfaked Queen doing a TikTok dance.

Here’s the video by Channel 4:

Before I continue: I recently started my blog, https://kilabyte.org/, and would really appreciate it if you could check it out, show some support, and maybe even share this post from my site online ☺️

What is a deepfake?

The word “deepfake” was first coined in 2017 by a Reddit user of the same name. This user created a space on the online news aggregation and discussion site where they shared explicit videos that used open-source face-swapping technology to “frame” public figures.

Although attempts to superimpose celebrities' or other people's faces onto pornography were not revolutionary, the style, speed, and seeming ease with which they were carried out were. According to AI expert Alex Champandard, who spoke to Vice, constructing a deepfake using a consumer-grade graphics card may take just a few hours.

The dangers of deepfakes

The Apollo 11 mission could have gone wrong in countless ways, and if it had, the disaster would have been broadcast to the millions of viewers watching it unfold live on their home TVs. Had it gone wrong, the broadcast would have cut to President Nixon giving a solemn speech.

Of course, as I'm sure you know, the mission was a success, and that speech was never given. But in 2020, 51 years later, a team from the Centre for Advanced Virtuality at the Massachusetts Institute of Technology (MIT) took the script President Nixon would have read out and created an almost exact delivery of it using deepfake technology (shown in the video above).

While this is misleading and untrue, it comes nowhere near the truly dangerous potential deepfakes have. Being able to make any person appear to say whatever you want can be incredibly harmful when done well, as was proven in 2019: criminals used deepfake AI-based software to impersonate the voice of a company chief executive and request a fraudulent transfer of €220,000 (approximately $243,000) to a Hungarian supplier. The transfer went through and was then forwarded to another account in Mexico. When the company received a second call asking for another transfer, claiming that the first payment had already been reimbursed, the request luckily raised suspicion and was refused.

That took place in 2019, and since then the technology has only evolved and improved, making deepfakes far more convincing and common.

“We are already at the point where you can’t tell the difference between deepfakes and the real thing,” says Professor Hao Li of the University of Southern California.

Misuses of Deepfakes

(Unfortunately, the original Instagram post won't embed here, so the video above is an unclear copy I found on YouTube; you can view the original post here: https://www.instagram.com/p/CVAksOjMuQu/)

Above is a deepfake, made by two artists, of Mark Zuckerberg saying things he never actually said. While the voice is not perfect, the mannerisms and the video itself look extremely realistic.

Here is another deepfake, this time of former President Obama; it was created by Jordan Peele, the writer and director of ‘Get Out’.

How to Spot a Deepfake

Deepfakes are already incredibly convincing and realistic, and they're only improving. Spotting a deepfake can sometimes be near impossible, but here are some tips from the MIT Media Lab to help you. Pay attention to:

  1. The face (most good deepfakes are facial transformations)
  2. Cheeks and forehead (is the skin a different type? Too smooth? Too wrinkly?)
  3. Eyes and eyebrows (are the shadows natural?)
  4. Glasses (too much or too little glare?)
  5. How real does the hair look?
  6. Facial moles (do they look real?)
  7. Is the blinking natural?
  8. Do the size and colour of the lips match the rest of the face?

Many deepfakes fail to make the environment look natural and can't fully recreate certain facial features, so learning to recognise these points should give you a decent idea of what to look for in deepfaked content.
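To give a flavour of how one of these cues could be checked programmatically, here is a minimal sketch, assuming Python with OpenCV (opencv-python) installed, that counts how often a pair of eyes is visible inside the detected face across a clip's frames, as a very crude stand-in for the blink check in point 7. The video filename is hypothetical, and real deepfake detectors are far more sophisticated; this only illustrates the idea.

```python
# Crude illustration only: count frames where two eyes are detected inside a
# face, as a rough proxy for blink frequency. Not a real deepfake detector.
import cv2

# Haar cascade models that ship with opencv-python
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical file name
frames_with_face, frames_with_eyes = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        continue
    frames_with_face += 1
    x, y, w, h = faces[0]
    # Look for eyes only within the face region
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes) >= 2:
        frames_with_eyes += 1

cap.release()
if frames_with_face:
    # Frames where the eyes "disappear" roughly correspond to blinks; an
    # unnaturally low (or zero) blink count can be one warning sign.
    print(f"Eyes visible in {frames_with_eyes}/{frames_with_face} face frames")
```

In practice you would track eye landmarks over time rather than rely on simple detection, but even this toy example shows how a single visual cue from the list can be turned into something measurable.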

If all the above mentions of AI confused you, why not check out my article on AI, and find out if AI really is what you thought it was! 👀

Have you ever come across a deepfake? Were you able to spot it? Let me know ✌️

Originally published at https://kilabyte.org on February 15, 2022.

