Intel Says Its Deepfake Detector Has 96% Accuracy

The company says its FakeCatcher can operate in real-time to detect deepfake videos.

https://i.imgur.com/p4Fl3pd.png

A video of Vladimir Putin that has been deepfaked with Donald Trump’s likeness.

Deepfake technology—where someone’s likeness is digitally placed over someone else’s—has some very spooky implications. Intel says that its new deepfake detection tech, called FakeCatcher, is able to clock a deepfake video 96% of the time.

Intel announced that FakeCatcher can detect deepfake videos in real time, claiming it is the first detector of its kind in the world. FakeCatcher works by scanning a video's pixels for the subtle traces of blood flow across a person's face; a deep learning model then uses those signals to determine whether the subject's likeness is authentic. FakeCatcher was developed by Intel researcher Ilke Demir and Umur Ciftci of the State University of New York at Binghamton using Intel technology.

“Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,” says Intel Labs senior staff researcher Ilke Demir in an Intel press release.

FakeCatcher runs on a server and interfaces with videos through a web-based platform. According to Intel, its approach is the opposite of traditional deep-learning-based detectors: where those tools try to find what's fake about a video, FakeCatcher looks for what's real.

In an interview with VentureBeat, Demir explained that FakeCatcher's approach is based on photoplethysmography (PPG), a method of measuring changes in blood flow through human tissue. When a real person is on screen, their skin changes color ever so slightly as blood is pumped through it. Deepfakes can't replicate this change in complexion (at least not yet).
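To make the PPG idea concrete, here is a minimal sketch of how a pulse-like signal can be recovered from video pixels. This is not Intel's implementation; the function names, the fixed face region, and the synthetic frames (plain NumPy arrays with a faint 1.2 Hz color oscillation standing in for a heartbeat) are illustrative assumptions.

```python
import numpy as np

def ppg_signal(frames, roi):
    """Crude PPG-like signal: mean green-channel intensity of a face
    region (ROI) in each frame. Blood flow modulates skin color
    slightly, and the green channel carries most of that variation."""
    y0, y1, x0, x1 = roi  # illustrative fixed ROI; a real system would track the face
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def dominant_frequency(signal, fps):
    """Strongest frequency (Hz) in the detrended signal via FFT --
    a stand-in for the pulse estimate a real detector would feed
    to its classifier."""
    detrended = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fps)
    return freqs[spectrum.argmax()]

# Synthetic demo: 300 frames at 30 fps, with a faint 1.2 Hz (72 bpm)
# green-channel oscillation added to the "skin" region to mimic a pulse.
fps, n = 30, 300
t = np.arange(n) / fps
frames = np.full((n, 64, 64, 3), 128.0)
frames[:, 16:48, 16:48, 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]

sig = ppg_signal(frames, (16, 48, 16, 48))
print(round(dominant_frequency(sig, fps), 1))  # → 1.2
```

A real detector faces a much harder version of this problem: it must locate and stabilize the face, suppress lighting and compression noise, and, per Intel's description, feed PPG maps to a deep learning model rather than a simple frequency peak.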

The use of deepfake technology has grown in recent years. This summer, the FBI's Internet Crime Complaint Center reported an increase in complaints about people using deepfakes, with particular attention to voice spoofing, to apply for remote jobs. In August, Binance CCO Patrick Hillman stated in a blog post that hackers had copied his digital likeness to impersonate him in meetings.