Facebook researchers announce they can identify ‘deepfake’ videos from a single still image
Facebook researchers working with Michigan State University say they can now reverse-engineer deepfakes and identify manipulated content using a single still image from a video. They claim to be able to determine where deepfakes encountered in real-world settings may have originated and which software was used to produce them.
Deepfakes are digitally altered videos produced by AI deep learning algorithms, which typically enable creators to superimpose one person's face onto another person's body. They have been cited as a potential security threat because they can enable fraud and impersonation. Deepfakes have been used to mimic celebrities on Instagram and TikTok and to create manipulated pornographic videos of popular actresses.
Facebook researchers claim that their AI software can be trained to determine whether a piece of media is a deepfake from a single frame of the video. Furthermore, the software can identify the specific generative model used to produce the deepfake.
On Wednesday, the researchers presented a “research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it.”