By Dan Abdinoor, CEO and cofounder at Fritz AI, Executive Editor of the Deep Learning Weekly newsletter
Deepfakes have become mainstream — in conversation, if not yet in technology. Is the sum of all AI research to date really just about making fake videos? Or might there be more depth and utility to deepfake technology that we are just starting to scratch the surface of? Here we take a closer look at recent news about deepfakes, and what it all might mean for the future.
Deep Learning Weekly aims to be the premier newsletter for all things deep learning. We keep tabs on major developments in industry—new technologies, companies, product offerings, acquisitions, and more—so you don’t have to.
Was 2020 the tipping point for mainstream use of deepfakes? MIT Technology Review makes the case in this contentious article analyzing the past year’s deepfake activity. Negative use cases include tasteless fake pornography and political propaganda. Some more (perhaps) harmless uses are memes and entertainment. Either way, deepfakes became commonplace in the machine learning community in 2020.
Without question, the quality of deepfakes is improving quickly, leading to more questions about authenticity. In these Tom Cruise deepfake videos, the quality is good and the misdirection is convincing. However, this level of quality is not easy to achieve. The creator, Chris Ume, hit the mark only through years of deepfake work combined with professional video production experience. Ume says viewers shouldn’t worry about being tricked yet—society survived similar challenges to what is “real” when Photoshop made convincing photo manipulation possible, after all.
One of the worst-kept secrets in the ML world is that Snapchat (and TikTok) are deeply invested in using the latest ML techniques to create consumer demand. From face and gender swaps to hair and makeup try-on, these social apps are usually among the early adopters. With the acquisition of Ariel AI, Snapchat may be attempting to bring real-time deepfakes into their app. In stark contrast to the prior article, this development signals a potential one-click deepfake is on the way. With an audience in the hundreds of millions, Snapchat may be about to become the world’s largest deepfake publisher… at least the Snaps are ephemeral.
In a future where deepfake technology is commonplace, how do we defend our systems and ourselves against false information? Just as adversarial examples were employed to “attack” image classification models, researchers have begun finding ways to detect deepfakes. In this case, researchers at the University of Buffalo have found a surprisingly simple way to detect deepfake images: looking at light reflections in the eyes. In this ouroboros-like case of AI detecting AI, an ML model was trained on the eyes of both real and synthesized images, and it can successfully discern which light reflections are indicative of deepfakes.
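The intuition behind the Buffalo result is that in a real photo both corneas reflect the same light sources, so the specular highlights in the two eyes should largely agree, while GAN-synthesized faces often get this consistency wrong. As a rough illustration of that idea only (the threshold, crop handling, and similarity measure below are assumptions for the sketch, not the researchers’ actual pipeline), one could compare the highlight masks of the two eyes:

```python
import numpy as np

def reflection_mask(eye_crop, threshold=0.9):
    """Binary mask of bright specular highlights in a grayscale eye crop.

    The 0.9 threshold is an illustrative assumption, not a value from
    the University of Buffalo work.
    """
    return eye_crop >= threshold

def reflection_similarity(left_eye, right_eye):
    """Intersection-over-union of the two eyes' highlight masks.

    High similarity is what we'd expect from a real photo; mismatched
    reflections hint at a synthesized face.
    """
    left = reflection_mask(left_eye)
    right = reflection_mask(right_eye)
    union = np.logical_or(left, right).sum()
    if union == 0:
        return 1.0  # no highlights in either eye: nothing to disagree about
    return np.logical_and(left, right).sum() / union

# Toy example: identical highlights (consistent, photo-like)...
eye = np.zeros((8, 8))
eye[3:5, 3:5] = 1.0
print(reflection_similarity(eye, eye))    # 1.0

# ...versus highlights in different positions (inconsistent, GAN-like).
other = np.zeros((8, 8))
other[0:2, 0:2] = 1.0
print(reflection_similarity(eye, other))  # 0.0
```

A real detector would of course operate on eye regions located by a face-landmark model and learn the decision boundary from data, rather than using a hand-picked threshold.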
Bio: Dan Abdinoor is the Executive Editor of the newsletter Deep Learning Weekly. The articles featured in this piece were all included in prior newsletter issues. Subscribe for free to stay in the loop with each week’s essential industry news, insights, research, and more.