Artificial Intelligence (AI) tools are getting more capable and creative every day. Generative AI is writing human-like prose and creating lifelike photography. 3D animation software is making those images come alive. Voice generation tools are replicating people’s voices from just a small sample of recorded speech.

And this activity isn’t confined to Hollywood studios. Much of the excitement stems from the fact that anyone is free to play with these tools. With easily accessible software, anyone can create realistic images, voices, and videos replicating real people and places. But in the wrong hands, these are tools of deception.

We rely on voices and faces to know who’s who. When AI can create replicas of real people that are intended to fool others, that’s a cyber security risk. What follows will help you avoid becoming a target of deepfake software used with nefarious intent.

What is a Deepfake?

A “deepfake” is an image, video, voice clip, or text passage created by AI. The “deep” comes from “deep learning,” a method of training computers on massive amounts of data to perform human-like tasks. The “fake” indicates that the media is computer-generated and difficult to distinguish from human-generated media.

Some deepfakes are used in movies, galleries, or museum exhibits to embellish a scene or bring historical figures to life. People who have lost the ability to speak due to illness have regained that ability through computer-generated voices.

These positive uses, however, have been largely overshadowed and outnumbered by deepfake media created to trick people and falsify the truth. In such cases, deepfakes amount to digital forgeries.

As reported by Reuters, Deep Media estimated that 500,000 voice and video deepfakes would be posted to social media websites in 2023.

The rise of deepfake imagery in identity fraud points to a rapidly growing problem. Data from Sumsub indicates that the proportion of fraud stemming from deepfakes doubled from 2022 to Q1 2023. In the US, the proportion rose from 0.2% to 2.6%. It increased even more in Canada—from 0.1% to 4.6%.

How Does a Deepfake Work?

To create deepfakes, creators need lots of source material featuring their target. With celebrities and politicians, there’s a surplus of available imagery and footage, making them frequent deepfake subjects.

They also need sophisticated computer programs and plenty of processing power. Face-swapping algorithms rely on two kinds of neural networks, an encoder and a decoder, that work together.

The first step is to gather thousands of clips of a source subject and train the encoder on that face. The encoder reduces the face to a small set of parameters, building a representation of its underlying “latent” features.

Training teaches the computer the basics of the face so it can redraw it. The decoder then takes the latent image and generates a new image that resembles the source. The same process is repeated with a target image. A latent face is extracted, which the computer learns to redraw.

Once both latent faces exist inside a deepfake generator, however, the user can swap the decoders before redrawing the image. The computer draws the target’s face using the latent features extracted from the source. The result is a new clip that features the target but carries the facial expressions of the source.
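The encode-then-swap-decoders idea can be sketched in a few lines. This is a deliberately tiny toy, not a real deepfake pipeline: the “faces” are random vectors, the encoder and decoders are untrained linear maps, and all names (`W_enc`, `W_dec_a`, `W_dec_b`) are illustrative. Real systems use deep convolutional networks trained on thousands of images, but the swap itself works exactly as shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale images (64 values each).
face_a = rng.random(64)   # source subject
face_b = rng.random(64)   # target subject

# A shared encoder compresses any face to a small latent vector;
# each subject gets their own decoder that redraws a face from a latent.
LATENT = 8
W_enc = rng.standard_normal((LATENT, 64)) * 0.1    # shared encoder weights
W_dec_a = rng.standard_normal((64, LATENT)) * 0.1  # decoder for subject A
W_dec_b = rng.standard_normal((64, LATENT)) * 0.1  # decoder for subject B

def encode(face):
    """Face -> latent features (the compressed 'basics' of the face)."""
    return W_enc @ face

def decode(latent, W_dec):
    """Latent features -> redrawn face, in the style the decoder learned."""
    return W_dec @ latent

# Normal reconstruction: encode A's face, decode with A's own decoder.
recon_a = decode(encode(face_a), W_dec_a)

# The swap: encode A's expression, but decode with B's decoder,
# producing B's face wearing A's expression.
swapped = decode(encode(face_a), W_dec_b)

print(recon_a.shape, swapped.shape)  # both (64,)
```

Because the encoder is shared while each decoder is subject-specific, feeding the source’s latent vector into the target’s decoder is the entire trick behind the face swap.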

Advanced graphics software, masking, and other techniques can help separate faces from backgrounds, ensure natural body movements, and finesse small details.

For audio, deepfake creators run voice recordings through AI programs to clone the target’s voice, then use text-to-speech software and lip-matching techniques to generate new speech.

It’s no easy feat to fool large, powerful institutions like governments, social media platforms, and media outlets. High-quality deepfakes can be challenging to make.

At the same time, advances in AI have made deepfake technology far more accessible to novices, making phony clips easier and faster to produce. Prepackaged face-swapping programs, voice cloning apps, and image generation services like DALL-E and Midjourney are bubbling up every day.

What are Deepfakes Used for?

Face-swapping filters on Instagram and Snapchat make deepfakes a form of entertainment. However, bad actors seek much more than fun. Here are the primary malicious uses of manipulated voice and video.

Scams and Fraud

Deepfakes are the perfect scamming tools. Fraudsters can create audio clones that sound exactly like friends and family and use them over the phone to scam people into sending money. Scam artists have duplicated the voices of company CEOs to persuade employees to move funds into scam accounts.

Identity Theft

Many businesses ask customers to upload pictures of government ID cards to prove who they are. To gain access to customer accounts, fraudsters use sophisticated software to turn readily available social media photos into 3D masks and create convincing duplicates of those cards.

As banks move to voice biometrics to verify identity, deepfakes can bypass those security measures.

Pornography and “Sextortion”

Some websites let customers purchase non-consensual deepfake pornography featuring people they choose. In other cases, criminals fuse publicly available pictures of unsuspecting people, celebrities, or even minors with pornographically explicit imagery and videos and use them for extortion or revenge.

Election Manipulation and Conspiracies

Deepfakes of leaders, election candidates, and other authority figures show them making false claims. The goal is often to discredit the speaker, implicate political hopefuls in controversies to hamper their chances, or enlist trusted voices in support of a conspiracy.

The mere existence of deepfakes gives rise to conspiracy culture. Conspiracists or political opponents can easily brush off real events and actual evidence as fake. The result is growing skepticism of journalists and news media and a general erosion of truth in society.

Celebrity and Political Hoaxes

Political opponents create deepfakes to stoke conflict, gain support, or cause confusion. In March 2022, a deepfake appeared on news sites that depicted Volodymyr Zelensky, President of Ukraine, surrendering to Russia. Authorities quickly identified it as manipulated video and removed it.

Dishonest companies have started using celebrity deepfakes to endorse and promote their products without the celebrities’ knowledge.

How Deepfakes Pose a Threat to Security

Everything we do online depends on technologies that verify who we are. The best methods home in on unique identifiers: our facial characteristics and our voice. Now that AI technology can falsify those attributes, every digital ID checkpoint is a potential security risk.

Fraudsters already use vishing and spoofing to commit identity theft. Now, easily accessible voice generation and image manipulation tools make the job even easier.

Scammers can use fake voices to impersonate colleagues, trusted suppliers, and executives in phone calls. Fake IDs can help them gain access to employee portals and banking services or join video conferences.

Organizations are especially vulnerable to CEO fraud built on deepfake capability. Top executives have news and social media presence, giving scammers sufficient source material to create believable fakes. Unsuspecting employees who believe a deepfake is their boss could release funds or divulge private information.

A well-known deepfake fraud from 2020 is still a cautionary tale. A bank manager had a phone conversation with someone claiming to be the company director and authorized a transfer of $35 million. The manager, however, was speaking to a deepfake clone that simulated the director’s voice.

How to Spot Deepfakes

With the increasing prevalence of deepfakes, the best way to protect yourself is to stay aware and pay attention. Look, listen, and watch closely. Experts have outlined several additional techniques to help you detect deepfakes and strengthen your defenses.

  • Conduct a visual analysis of the content you are viewing. Deepfakes created with image generators often produce “wonky” fingers, smudges, and other oddities not found in authentic photos.
  • Look at the eyes. Deepfake videos often have irregular blinking patterns or lack light reflections in both eyes.
  • Zoom in. Look for digital anomalies, odd skin tones, and smudges between faces and backgrounds.
  • Assess movement quality. Look for robotic movements, a lack of tongue movement, and mismatched lips.
  • Verify that the video or voice clip comes from a known and trusted source.
  • Start phone conversations with colleagues using secret passwords or special questions. If the speaker can’t oblige, it might be a voice clone.
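Some of the video checks above, such as watching for irregular blinking, can even be automated. The sketch below is a hypothetical illustration, not a production detector: it assumes you already have a per-frame “eye openness” score (real pipelines compute an eye aspect ratio from facial landmarks using tools like OpenCV or dlib), and the thresholds and plausible blink-rate range are rough illustrative values.

```python
def count_blinks(ear_values, threshold=0.2):
    """Count blinks in a sequence of per-frame eye-openness scores.

    A blink is a dip below the threshold followed by a recovery above it.
    """
    blinks, in_blink = 0, False
    for ear in ear_values:
        if ear < threshold and not in_blink:
            in_blink = True           # eye has closed
        elif ear >= threshold and in_blink:
            blinks += 1               # eye reopened: one full blink
            in_blink = False
    return blinks

def blink_rate_suspicious(ear_values, fps=30, lo=0.1, hi=0.8):
    """Flag a clip whose blinks-per-second falls outside a plausible
    human range (people typically blink every few seconds)."""
    seconds = len(ear_values) / fps
    rate = count_blinks(ear_values) / seconds
    return rate < lo or rate > hi

# A 4-second clip (120 frames) in which the subject never blinks
# is flagged as suspicious.
no_blinks = [0.3] * 120
print(blink_rate_suspicious(no_blinks))  # True
```

A heuristic like this only narrows the field; unusual blink rates also occur in genuine footage, so automated flags should prompt the manual checks listed above rather than replace them.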

Disarm Deepfakes with Cyber Security Awareness Training

Deepfake detection technology is being developed with some success, but not as fast as fraudsters are forging ahead with devious uses.

Organizations and individuals must arm themselves with knowledge and skills to avoid falling victim to harmful deepfake attacks.

The best way to protect yourself, your employees, and your business is with cyber security awareness training. Short, relevant training sessions on what to look out for, coupled with practice in detection techniques, could mean the difference between CEO identity theft and a regular workday.

Discover how easy and effective it can be to build a cyber-aware culture across your organization. Download The Definitive Guide to Security Awareness Training today.