Why deepfake video is a real threat to businesses

Author: Phil Muncaster

Over the years, people have largely learned to take what they see on the internet with a generous grain of salt. But seeing, the old saying goes, is believing, and deepfake video—videos generated by artificial intelligence (AI) that, at a passing glance, look and sound genuine—is making it even more difficult to separate fact from fiction. According to Forrester, "deepfakes alone will cost businesses over a quarter of a billion dollars,"1 and the strategies to defend against deepfake-fueled fraud and cybercrime are still evolving.

How do deepfakes work?

The name "deepfake" is a portmanteau: Deepfakes are made using deep learning, a subset of AI, to create synthetic content that superimposes one person's likeness onto an existing image or video. Basic deepfakes use neural networks to swap faces: One AI algorithm, an encoder, learns the similarities between the two faces; another, a decoder, swaps the likenesses and reconstructs the images. A more advanced technique uses generative adversarial networks, which deploy two machine-learning algorithms: a generator that creates synthetic videos and a discriminator that attempts to detect the forgeries. With enough data, training and refining, generators can create images and videos that easily fool the discriminators.

How dangerous are deepfakes?

The applications for deepfake videos haven't progressed very far beyond mischief-making and pornography. But the technology could be deployed for more nefarious ends. Deepfake audio has already been weaponized for fraud: In 2019, the CEO of a UK-based energy firm was tricked into wiring $243,000 to a scammer's bank account after AI-generated audio was used to mimic the voice of the CEO's boss and demand the fraudulent transfer.

Deepfake videos could be similarly weaponized for:

  • Spreading misinformation. A C-suite executive could be impersonated to spread fake news about their company and damage the company's performance and share price.
  • Scamming employees, partners or others. Deepfake videos of senior executives could be the video equivalent of phishing and whaling attacks, which are designed to trick victims into divulging sensitive corporate and personal details or to make direct money transfers to scammers.
  • Extortion. Deepfake technology could create highly realistic pornographic content with the face of an executive grafted onto the body of an actor, which could then be used for blackmail. Trend Micro has noted interest in such sextortion capabilities on underground forums.

Fighting the fakes

Amateur deepfake videos are usually pretty easy to spot: Sometimes the eyes don't blink normally, the lips don't sync with the audio, skin tones don't quite match, and fine details, like hair, aren't properly rendered.
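
The blinking tell can even be checked programmatically. The sketch below computes the eye aspect ratio (EAR), a standard landmark-based blink measure from the research literature; it assumes dlib's pretrained 68-point landmark model file is available locally (a separate download), and any threshold for "too few blinks" would be an assumption to tune:

```python
# Sketch of one classic tell: unnatural blinking. The eye aspect ratio (EAR)
# drops sharply during a real blink; a face whose EAR never dips is suspect.
# Assumes dlib's pretrained 68-point landmark file (a separate download).
from typing import Optional

import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(eye: np.ndarray) -> float:
    # eye: six (x, y) landmarks around one eye, in dlib's ordering.
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def mean_ear(gray_frame: np.ndarray) -> Optional[float]:
    """Mean EAR for the first detected face in a grayscale frame, or None."""
    faces = detector(gray_frame)
    if not faces:
        return None
    landmarks = predictor(gray_frame, faces[0])
    pts = np.array([(p.x, p.y) for p in landmarks.parts()], dtype=float)
    left, right = pts[36:42], pts[42:48]  # eye indices in the 68-point model
    return (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
```

Tracking mean_ear across video frames and counting how often it dips below a threshold (around 0.2 is commonly cited, though that is a tunable assumption) yields a crude blink counter.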

But the algorithms are improving, and convincing deepfakes are getting harder to spot. With deepfake video services available on the dark web for as little as $50, businesses can't afford to take the threat lightly.

Microsoft recently introduced two deepfake-detecting technologies. But until foolproof, automated detection arrives, the best thing you can do is stay vigilant:

  • Train your staff to spot deepfakes.
  • Adapt business processes to account for deepfake threats (for example, requiring two people to sign off on any money transfer request).
  • Police your brand rigorously, and request that any fake or libelous content be taken down.
  • Account for deepfakes in incident response plans, and ensure that human resources, public relations, legal and other stakeholders know how to react if a deepfake video goes viral.

See how Verizon's enhanced cyber security solutions can defend your brand and corporate reputation.

1 Predictions 2020, Forrester, 2019