A comprehensive study published in the journal Nature in August highlighted an increasingly apparent predicament: people struggle to recognize deepfake media.

Between 27% and 50% of survey respondents were unable to correctly identify whether a deepfake video was authentic.

In September, the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) published a joint cybersecurity information sheet titled “Contextualizing Deepfake Threats to Organizations.” The information sheet offered steps to identify, defend against, and respond to deepfake threats.

Meanwhile, the Federal Election Commission (FEC) has announced that it is taking steps toward regulating deepfakes in campaign ads in order to safeguard the upcoming 2024 election.

While deepfake media has been a concern for years, the rise of generative artificial intelligence (AI) technology over the last year has escalated that concern. It is now increasingly easy to create convincing deepfake media with free or low-cost tools.

In March of this year, AI-generated fake images of former President Donald Trump being arrested circulated across social media. Also this year, an Arizona mother received a phone call in which an AI-generated clone of her daughter’s voice begged her to pay a ransom for the daughter’s release from a supposed hostage situation.

Deepfake media is a topic that FPOV covered in depth in its most recent Update. You can go back and watch the Update recording here.

In the Update, we demonstrated how easy it is to create a convincing deepfake, both audio and video.

To give you an example, here is a picture of me, Hart Brown, the CEO of FPOV and the session facilitator.

Using a deepfake generation tool called DeepFaceLab, our team was able to transform my face into those of various celebrities, including Keanu Reeves, Robert Downey Jr., Tom Holland, Nicolas Cage, Sylvester Stallone, and Tom Cruise. The transformation was done live during the session; the deepfakes were not pre-recorded.

Deepfake Keanu Reeves
Deepfake Robert Downey Jr.
Deepfake Tom Holland
Deepfake Nicolas Cage
Deepfake Sylvester Stallone
Deepfake Tom Cruise

With just thirty seconds of audio, we were also able to create an AI-generated voice clone of one of the session attendees, live during the session. It took about five minutes of work.

One can easily see how these tools will turbocharge social engineering. We have come a long way from the Nigerian prince scam. In fact, we have already seen deepfake audio and video used in social engineering campaigns. The Arizona mother who received a gut-wrenching phone call featuring an AI-generated clone of her daughter’s voice is just one example.

Late last year, an executive at Binance, a cryptocurrency exchange, claimed that attackers had created a deepfake of him and used it on videoconference calls to try to trick would-be investors. The executive only found out after people emailed him thanking him for meeting with them, which indicates that in at least one case someone was duped by the deception.

Deepfakes could be used in social engineering campaigns in myriad ways. Videoconference calls are one example. They could also be used in business email compromise campaigns: the voice of a CEO could be cloned to trick a payroll specialist into changing a bank account number to one controlled by the attacker. Audio and video deepfakes could be used to scuttle a public merger or tank a stock. In perhaps the most extreme and terrifying scenario, they could be used to start a war.

Already, companies are working to develop technologies that will help identify and classify deepfakes automatically. However, as is the case with most organizational risks, the bad guys will always be a step ahead of the good guys. This is why it is paramount for organizations to train their employees to spot deepfakes. Education is critical.

Just this month, hospitality giant MGM was hit with a cyberattack. How did it happen? Reportedly, the attackers posed as employees and tricked the organization’s IT staff into giving them access to the network. The attack, still unfolding as this is being written, is reportedly costing MGM $8.4 million a day.

Cybersecurity awareness training should be part of every organization’s security program. This training should include how to spot synthetic media such as deepfake audio and video.

Below are some tips for spotting deepfake media:

Context

  • Cognitive dissonance: Does the media bring up discomfort in you? Does something feel off?
  • Production quality: Does it have a professional look, or is it low quality and glitchy? Just as bad grammar and typos have long been a telltale sign of phishing emails, poor quality can be a sign of deepfake media.
  • Setting: Consider the context of the media and the emotion it stirs up in you.
  • Viewing conditions: Deepfake media may be harder to spot on a mobile device because the screen is smaller.
  • Reverse image search: Run a photo through a reverse image search tool such as Google Images or TinEye to see if it appears elsewhere.

Credibility

  • Corroboration: Has the media been corroborated by reputable sources?
  • Reputation: Is the organization or individual hosting or sharing the media reputable and trustworthy? Is the author or source clear, or does it seem to be obscured?
  • Bias: Is there a clear bias inherent in the media? Those sharing AI-generated fake photos of Donald Trump being arrested certainly had a bias in mind when they created and shared them.

Technical

  • Metadata analysis: What can you learn from the metadata of the image or video? (A minimal sketch follows this list.)
  • Edges: Deepfake images often have jagged or blurred edges, which can help reveal manipulation.
  • Luminance: Deepfakes often contain lighting inconsistencies that aid detection.
  • Clone detection: Various techniques are being developed to differentiate between a real voice and a cloned voice.
  • Error Level Analysis (ELA): A forensic technique that compares compression error levels across an image to reveal regions that may have been modified; machine learning models are often trained on its output. (A sketch of this also follows the list.)
  • Blood flow: Tools, such as Intel’s FakeCatcher, use ‘blood flow’ in the pixels of a video to “assess what makes us human.”
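
For readers who want to experiment with the metadata analysis tip above, here is a minimal sketch in Python using the Pillow imaging library. The file name “suspect.jpg” is a placeholder for whatever image you want to examine, and results vary widely: cameras typically embed rich EXIF data, while many AI image generators and social platforms strip or omit it entirely.

    # Minimal EXIF inspection sketch using Pillow ("pip install Pillow").
    from PIL import Image
    from PIL.ExifTags import TAGS

    def dump_exif(path: str) -> None:
        """Print the EXIF tags embedded in an image, if any."""
        exif = Image.open(path).getexif()
        if not exif:
            # An empty result is itself a (weak) signal: many AI image
            # generators produce files with no EXIF data at all.
            print("No EXIF metadata found.")
            return
        for tag_id, value in exif.items():
            tag = TAGS.get(tag_id, tag_id)  # translate numeric IDs to names
            print(f"{tag}: {value}")

    dump_exif("suspect.jpg")  # placeholder file name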
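
Error Level Analysis can likewise be approximated in a few lines. The sketch below, again using Pillow, re-saves an image as a JPEG at a fixed quality and amplifies the pixel-wise difference from the original; regions edited after the last save tend to recompress differently and show up brighter. Treat the output as a heuristic to eyeball rather than a verdict, and note that the file names and the quality setting of 90 are illustrative assumptions.

    # Minimal Error Level Analysis (ELA) sketch using Pillow.
    from PIL import Image, ImageChops, ImageEnhance

    def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
        original = Image.open(path).convert("RGB")

        # Re-save at a fixed JPEG quality, then reload the recompressed copy.
        original.save("resaved_tmp.jpg", "JPEG", quality=quality)
        resaved = Image.open("resaved_tmp.jpg")

        # Pixel-wise absolute difference between original and recompressed.
        diff = ImageChops.difference(original, resaved)

        # The differences are usually faint; scale them up so edited
        # regions stand out visually.
        extrema = diff.getextrema()  # per-channel (min, max) tuples
        max_diff = max(high for _, high in extrema) or 1
        return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

    error_level_analysis("suspect.jpg").save("suspect_ela.png")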

Deepfake media is only going to become more prevalent. Its use in social engineering is going to grow. One report found that the proportion of fraud cases in the US using deepfake technology jumped from 0.2% in Q1 2022 to 2.6% in Q1 2023. This is a trend that likely won’t be reversed.

Education is the best way to help your organization thwart this alarming reality. A good step would be to go back and watch our Update on deepfakes, share it with your team members, and reach out to us so that we can conduct advanced cybersecurity awareness training within your organization.

About the Author

Hart Brown is the CEO of Future Point of View and the Security and Risk Practice Lead. He is a widely known expert and trusted advisor in the governance of risk and resilience with over 20 years of experience across a broad spectrum of organizations in both the public and private sectors. He is a Certified Ethical Hacker and a Qualified Risk Director. Learn more about Hart Brown.