
Real-Time Video Deepfake Scams Are Here. This Tool Attempts to Zap Them


This announcement is not the first time a tech company has shared plans to help spot real-time deepfakes. In 2022, Intel debuted its FakeCatcher tool for deepfake detection. FakeCatcher is designed to analyze changes in a face’s blood flow to determine whether a video participant is real. Intel’s tool is also not publicly available.
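
For readers curious how a blood-flow check can even work, the underlying idea is remote photoplethysmography: a beating heart causes faint, periodic color changes in facial skin that ordinary cameras pick up, and a synthetically rendered face may lack that rhythm. The sketch below, written in Python with OpenCV and NumPy, is a minimal illustration of that idea only; it is not Intel's implementation, and the file name, thresholds, and pulse band are placeholder assumptions.

# Minimal sketch (not Intel's implementation) of the rough idea behind
# blood-flow-based liveness checks: track the mean green-channel intensity
# over a detected face region per frame, then look for a periodic component
# in the plausible human pulse band.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("video.mp4")   # placeholder input path
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
signal = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]
    roi = frame[y:y + h, x:x + w]
    signal.append(roi[:, :, 1].mean())   # green channel carries most of the PPG signal
cap.release()

if len(signal) < int(2 * fps):
    print("Not enough face frames to estimate a pulse-like signal")
else:
    sig = np.asarray(signal)
    sig = sig - sig.mean()
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    # Roughly 42-240 beats per minute, i.e. 0.7-4 Hz
    band = (freqs > 0.7) & (freqs < 4.0)
    baseline = power[freqs > 0.1].mean()
    if band.any() and power[band].max() > 2 * baseline:
        print("Periodic, pulse-like color signal found (consistent with a live face)")
    else:
        print("No clear pulse-like signal (a weak hint, not a verdict)")

A signal this crude is easily thrown off by compression and lighting changes, which is why a production detector would lean on much richer features than a single green-channel average.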

Academic researchers are also looking into different approaches to address this specific kind of deepfake threat. “These systems are becoming so sophisticated to create deepfakes. We need even less data now,” says Govind Mittal, a computer science PhD candidate at New York University. “If I have 10 pictures of me on Instagram, somebody can take that. They can target normal people.”

Real-time deepfakes are no longer limited to billionaires, public figures, or those with extensive online presences. Mittal’s research at NYU, with professors Chinmay Hegde and Nasir Memon, proposes a potential challenge-based approach to blocking AI bots from video calls, where participants would have to pass a kind of video CAPTCHA test before joining.
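
To make the challenge-based idea concrete, here is a minimal sketch in Python of the general shape such a gate could take: the call platform issues a random physical prompt, records the participant's response within a time window, and only admits them if a verifier accepts the clip. This is an illustration, not the NYU researchers' system; the challenge list, the verify_response stub, and the capture_clip hook are hypothetical placeholders.

# Minimal sketch of a challenge-response "video CAPTCHA" gate. An illustration
# of the general pattern, not the NYU system; the challenge list,
# verify_response stub, and capture_clip hook are hypothetical.
import random
import time
from dataclasses import dataclass

CHALLENGES = [
    "Turn your head slowly to the left, then to the right",
    "Cover one eye with your hand for two seconds",
    "Hold an object, like a phone or a card, in front of part of your face",
]

@dataclass
class ChallengeResult:
    prompt: str
    issued_at: float
    passed: bool

def verify_response(prompt: str, clip_frames: list) -> bool:
    # Stub verifier. A real verifier would be a human moderator or a model
    # that checks whether the recorded clip actually performs the prompt and
    # looks for the artifacts live face-swap models produce under occlusion
    # and fast head motion.
    return len(clip_frames) > 0   # placeholder decision

def run_gate(capture_clip, timeout_s: float = 10.0) -> ChallengeResult:
    # Issue a random prompt before admitting a participant to the call.
    # capture_clip is assumed to record webcam frames for up to timeout_s seconds.
    prompt = random.choice(CHALLENGES)
    issued = time.time()
    frames = capture_clip(timeout_s)
    on_time = (time.time() - issued) <= timeout_s + 2.0   # small grace period
    return ChallengeResult(prompt, issued, on_time and verify_response(prompt, frames))

if __name__ == "__main__":
    fake_capture = lambda timeout_s: ["frame"] * 30   # stand-in for a real recording hook
    result = run_gate(fake_capture)
    print(result.prompt, "->", "admitted" if result.passed else "blocked")

The design bets that responding convincingly to an unpredictable physical prompt in real time is still hard for current face-swap pipelines, even when the static likeness is flawless.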

As Reality Defender works to improve the detection accuracy of its models, Coleman says that access to more data is a critical challenge to overcome, a common refrain from the current batch of AI-focused startups. He’s hopeful that more partnerships will fill in these gaps and, without offering specifics, hints that multiple new deals are likely coming next year. After ElevenLabs was tied to a deepfake voice call of US President Joe Biden, the AI-audio startup struck a deal with Reality Defender to mitigate potential misuse.

What can you do right now to protect yourself from video call scams? Just as with WIRED’s core advice about avoiding fraud from AI voice calls, not getting cocky about your ability to spot video deepfakes is critical to avoiding a scam. The technology in this space continues to evolve rapidly, and any telltale signs you rely on now to spot AI deepfakes may not be as dependable with the next upgrades to the underlying models.

“We don’t ask my 80-year-old mother to flag ransomware in an email,” says Coleman. “Because she’s not a computer science expert.” In the future, if AI detection continues to improve and proves to be reliably accurate, it’s possible real-time video authentication will be as taken for granted as the malware scanner quietly humming along in the background of your email inbox.


