Microsoft has launched a tool that detects deepfake photos and videos in real time. The recently unveiled tool rates the authenticity of content with a confidence score that users can view.
Microsoft has unveiled new software that helps people identify deepfake photos and videos. Dubbed Video Authenticator, the application shows a confidence score to help users determine whether or not a piece of media has been artificially manipulated.
The new tool’s launch comes ahead of the presidential election in the United States. The tech behemoth said the technology provides the confidence score, a percentage chance that the media is artificially manipulated, in real time on every frame as the video plays.
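Video Authenticator itself is not exposed as a public API, but a rough sketch of what per-frame confidence scoring looks like in general is shown below; the video file name, the score_frame placeholder, and the OpenCV dependency are illustrative assumptions, not part of Microsoft’s tool.

```python
# Minimal sketch of per-frame confidence scoring (hypothetical; Video
# Authenticator is not a public API). Assumes OpenCV is installed and
# that "clip.mp4" is a local video file.
import cv2

def score_frame(frame) -> float:
    """Placeholder for a deepfake-detection model; a real detector would
    return the estimated probability that this frame is manipulated."""
    return 0.0  # stub value for illustration

cap = cv2.VideoCapture("clip.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    confidence = score_frame(frame) * 100  # express as a percentage
    print(f"frame {frame_idx}: manipulation confidence {confidence:.1f}%")
    frame_idx += 1
cap.release()
```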
The application can detect the blending boundary that deepfakes introduce, along with subtle grayscale or fading elements that the human eye is likely to miss. Developed by Microsoft Research, Microsoft’s Responsible AI team, and the Microsoft AI, Ethics, and Effects in Engineering and Research (AETHER) Committee, Video Authenticator was built using the public FaceForensics++ dataset.
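Microsoft has not published the detector’s internals. As a toy illustration of the kind of low-level cues involved, the snippet below computes two crude per-frame proxies: mean color saturation (faded or grayscale patches score low) and Laplacian edge variance (blending tends to soften edges). This is not Video Authenticator’s method, only a hand-rolled sketch.

```python
# Toy illustration only: crude per-frame cues loosely related to the
# artifacts mentioned above (fading/grayscale, softened blending edges).
# This is NOT Microsoft's detection method.
import cv2

def frame_cues(frame_bgr):
    """Return (mean_saturation, edge_variance) for a BGR frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mean_saturation = float(hsv[:, :, 1].mean())  # low => washed-out / grayscale regions
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edge_variance = float(cv2.Laplacian(gray, cv2.CV_64F).var())  # low => soft, possibly blended edges
    return mean_saturation, edge_variance
```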
Microsoft tested the app on the DeepFake Detection Challenge Dataset, which, together with FaceForensics++, is considered one of the leading datasets for training and testing deepfake detection models.
The new tool arrives at a time when deepfake photos and videos are being used to spread disinformation, fake news, and hoaxes. AI-generated videos targeting Bill Gates, Mark Zuckerberg, and several other public figures have been doing the rounds online.
Aside from Microsoft, Facebook is also working on better deepfake detection methods. In line with that, the company hosted a Deepfake Detection Challenge earlier this year.
Deepfake detection models are still far from accurate. Even the top-performing model in the challenge achieved an average precision of just 65.18% against real-world examples.
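For context, average precision summarizes the area under a detector’s precision-recall curve. The toy example below shows how such a figure is computed from predicted scores and ground-truth labels; the data and the scikit-learn dependency are illustrative, not the actual challenge results.

```python
# Illustration of the "average precision" metric on made-up data.
from sklearn.metrics import average_precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                           # 1 = deepfake, 0 = real
y_score = [0.90, 0.40, 0.65, 0.80, 0.55, 0.20, 0.30, 0.10]  # detector confidence per clip
print(f"average precision: {average_precision_score(y_true, y_score):.2%}")
```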