Tom Cruise’s Deepfake TikToks – What Do They Mean?
The Tom Cruise TikTok videos have garnered 11 million views, going viral because of how sophisticated the deepfakes are. In one video, Cruise shows off a golf swing; in another, he performs a magic trick with the skill of a professional illusionist. Normally, such videos would be merely amusing, but some experts are sounding the alarm: Cruise is not actually demonstrating these skills. Instead, the clips are crafted with highly sophisticated technology.
What does ‘deepfake’ mean?
Deepfake technology lets a user fabricate an event using artificial intelligence. According to The Guardian, deepfakes can include video, image, and voice “fakes,” allowing the creator to swap faces, body parts, backgrounds, and more.
While nearly anyone can attempt a “deepfake,” convincing results require more than a standard computer. Many creators rely on high-end graphics cards, and the most skilled are experts in the field. Some deepfakes are so polished that they are difficult to distinguish from reality.
One way to spot a deepfake video is to watch how often the subject blinks. Many generation algorithms do not model eye movement well, so the subject may not blink naturally. Mismatched lip-syncing or odd lighting can also give a fake away.
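The blink heuristic above can be sketched in code. This is a minimal, illustrative sketch, not a production detector: it assumes a face-landmark detector (such as dlib or MediaPipe, not shown here) already supplies six (x, y) points per eye per frame, and it uses the eye aspect ratio (EAR) formula from Soukupová and Čech’s well-known blink-detection work. A subject whose EAR never dips over many seconds of footage blinks suspiciously rarely.

```python
import math

def eye_aspect_ratio(pts):
    """Eye aspect ratio (EAR) from six eye landmarks.

    pts: six (x, y) tuples ordered p1..p6 around the eye:
    p1/p4 are the horizontal corners, p2/p3 the upper lid,
    p5/p6 the lower lid. EAR drops sharply toward 0 when the
    eye closes, so it serves as a simple per-frame blink signal.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    vertical = dist(pts[1], pts[5]) + dist(pts[2], pts[4])
    horizontal = dist(pts[0], pts[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series.

    A blink is a run of at least `min_frames` consecutive frames
    with EAR below `threshold` (both values are illustrative and
    would need tuning for real footage and frame rates).
    """
    blinks = 0
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

In practice you would compare the blink rate against the human norm (roughly 15–20 blinks per minute at rest); a long clip with zero detected blinks is a red flag worth a closer look, not proof of fakery.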
In addition to deepfakes, “shallowfakes,” which rely on simple editing rather than AI, can also fabricate a situation. One example involved an encounter between CNN reporter Jim Acosta and a White House intern. The clip was edited to make it look as though Acosta aggressively resisted the intern as she tried to take the microphone from him; analysis later showed the footage had been sped up to create the illusion of aggression.
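The speed-up trick behind that shallowfake needs no AI at all: compressing the presentation timestamps of the frames makes ordinary motion read as abrupt. A minimal sketch of the idea, assuming we simply have a list of per-frame timestamps in seconds (real tools apply the same rescaling inside the video container):

```python
def retime(frame_times, speed):
    """Rescale per-frame presentation times to simulate a speed change.

    frame_times: monotonically increasing timestamps in seconds.
    speed > 1.0 compresses the timeline, so a gesture that took
    1.5 seconds now plays back in 1.0 second and can appear
    forceful or aggressive. No frames are added or removed.
    """
    return [t / speed for t in frame_times]

# A 2-second clip sampled at 4 fps, sped up 2x, now spans 1 second.
original = [i * 0.25 for i in range(9)]   # 0.0 .. 2.0 s
faster = retime(original, 2.0)            # 0.0 .. 1.0 s
```

Because only timing changes, every individual frame in a shallowfake is genuine, which is exactly why this kind of edit is hard to spot from a still image alone.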
Why do ‘deepfakes’ worry experts?
In pure entertainment, an invented scenario is benign. But some experts worry that deepfakes could be put to nefarious use: beyond undermining public trust, deepfakes and shallowfakes could fabricate false scenarios with devastating consequences.
Rachel Tobac, CEO of SocialProof Security, recently tweeted about the Cruise videos and sounded the alarm. “2 years ago on stage I was asked ‘when will Deepfake video/audio impact trust & be believable in social engineering?’ My response then was that we were 2 years away from undetectable Deepfakes,” Tobac tweeted. “I wish my prediction then was wrong. We need synthetic media detection + labels ASAP.”
“Deepfakes will impact public trust, provide cover & plausible deniability for criminals/abusers caught on video or audio, and will be (and are) used to manipulate, humiliate, & hurt people,” she continued. “If you’re building manipulated/synthetic media detection technology, get it moving.”
She added, “If you are building a team to detect synthetic and manipulated media, it’s essential your team is diverse. This issue will impact everyone, but will disproportionally affect women, people of color, and marginalized groups.”