Over the past few years, the application of artificial intelligence to create fake images, audio, and video has sparked a great deal of concern among policymakers and researchers. A series of compelling demonstrations — the use of AI to create believable synthetic voices, to imitate the facial movements of a president, to swap faces in fake pornography — illustrates how rapidly the technology is advancing.
On a strictly technical level, this development isn’t surprising. Machine learning, the subfield of artificial intelligence that underlies much of the technology’s recent progress, studies algorithms that improve as they process data. A machine learning system acquires what is known in the field as a representation, an internal model of the task to be solved, which can then be used to generate new examples of whatever it has learned. Train a machine learning system on many photos of chairs, for instance, and it can output new photos of chairs that don’t actually exist.
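For readers curious about the mechanics, the toy sketch below illustrates the idea in code. It is a hypothetical, minimal example, not anything drawn from an actual deepfake tool: it assumes the PyTorch library and uses random tensors as a stand-in for a real collection of photos. A "generator" network learns a representation of the training data while a "discriminator" learns to distinguish generated images from real ones; once trained, the generator can produce new images from random noise.

```python
# Toy sketch (hypothetical): the adversarial setup behind many image generators.
# Random tensors stand in for a real photo dataset, e.g. photos of chairs.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64 * 64, 100, 32  # flattened 64x64 grayscale images

# The generator maps random noise to an image; its weights end up encoding
# a learned representation of what the training photos look like.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# The discriminator outputs a single real-vs-fake logit per image.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_photos = torch.randn(512, IMG_DIM)  # placeholder for genuine training photos

for step in range(200):
    real = real_photos[torch.randint(0, 512, (BATCH,))]
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator step: learn to label real photos 1 and generated ones 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to produce images the discriminator labels as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, entirely new "photos" can be sampled from random noise.
with torch.no_grad():
    new_images = generator(torch.randn(4, NOISE_DIM))
```

Real deepfake systems are far larger and more elaborate, but the underlying logic is the same: learn a representation of the data, then sample new instances from it.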
These capabilities are rapidly improving. While some of the early images generated by machine learning systems were blurry and small, recent papers demonstrate significant progress in the ability to create high-resolution fake imagery, known colloquially as deepfakes.
So, how much should we worry? On one level, it seems obvious that deepfakes might be used by bad actors to spread doubt and manipulate the public. But I believe they may not ultimately cause as much trouble as might appear at first blush. In fact, my bigger worry is that the technology will distract us from addressing more fundamental threats to the flow of information through society.
The striking recent advancements make it easy to forget the long historical record of doctored and deceptive media. Purveyors of disinformation were manipulating video and audio long before the arrival of deepfakes; AI is just one new implement in a well-stocked toolbox. As the White House demonstrated when it recently shared a sped-up video suggesting that CNN reporter Jim Acosta assaulted an intern, creating a deceptive video can be as easy as pressing a fast-forward button.
That has big implications for the likely impact of deepfakes. Propagandists are pragmatists. Much of what is known about the tactics of Russian social media manipulation during the 2016 U.S. presidential election suggests the perpetrators leveraged simple, rough-and-ready techniques with little technological sophistication. Peddlers of disinformation want to spread the most disinformation possible at the lowest possible cost. Today, that’s often achieved with a simple Photoshop job or even a crude assertion that a photo is something that it is not. The latest machine learning techniques — which require significant data, computing power, and specialized expertise — are expensive by comparison, and provide little extra advantage to the would-be malefactor.
It’s also easy to forget that, even as the techniques of fakery improve, the techniques of detecting fakery are improving in parallel. Researchers in the field of media forensics are actively contending with the risks posed by deepfakes, and recent results show great promise for identifying synthetic video in the wild. This will assuredly always be something of a cat-and-mouse game, with deceivers constantly evolving new ways of evading detection, and detectors working to rapidly catch up. But the core point is that, if and when deepfakes are used to manipulate public discourse, it will occur in an environment where tools and approaches exist to detect and expose them.
Moreover, even in a changing technological environment, the work of the fact-checking community remains the same. Machine learning is a powerful but ultimately narrow tool: It creates better puppets, but not necessarily better puppeteers. The technology is still far from being able to generate compelling, believable narratives and credible contextual information on its own. Those narratives and contexts must still be developed by fallible human propagandists. Chasing down eyewitnesses, weighing corroborating evidence, and verifying the purported facts — approaches that go beyond the narrow question of whether or not a piece of media is doctored — will remain viable methods for rooting out deception.
It’s also worth noting that the widespread coverage of deepfakes in the media itself helps to inoculate the public against the technology’s influence. Knowing that machine learning can be put to malicious use in this way puts the public on notice. That, in and of itself, is a powerful mechanism for blunting the technology’s deceptive potential.
Thus, there are good reasons to believe that deepfakes will not be a threat in the near future, and that they, in fact, may never pose a significant threat. Despite much-discussed fears about its use as a political weapon, the technology did not make an appearance in the pivotal U.S. midterm elections this year. I’m betting that it won’t make a significant appearance in the 2020 elections either.
However, there’s a bigger issue worth raising. The costs of creating and using deepfakes will likely fall over time, and may eventually become practical for the budget-conscious propagandist. And the technology’s improvements may eventually outstrip our ability to discern the fake from the real. Should we worry more about deepfakes then?
I still don’t think so. Instead of focusing on the latest technologies for fabricating fake media, we should be focusing more tightly on the psychological and sociological factors that underpin the spread of false narratives. In other words, we should be focusing on why what is depicted in a deepfake is believed and spread, rather than on the deepfake itself.
Dwelling on the techniques of disinformation puts society perpetually one step behind. There will always be new ways of doctoring the truth, and chasing the latest and greatest method for doing so is a losing game. Even if we could unfailingly detect and eliminate deepfakes, disinformation would still persist in a culture where routine deception by government officials has become a norm.
For that reason, efforts to counter deepfakes should be matched with increased efforts to get a handle on a set of underlying questions: Why are certain false narratives accepted? How do previous experiences inform whether or not a piece of media is challenged? And what contexts make individuals and groups more or less susceptible to disinformation? Advancing our understanding of these questions will inform interventions that establish deeper, systemic, and more lasting safeguards for the truth.
Tim Hwang is director of the Harvard-MIT Ethics and Governance of AI Initiative, and formerly led global public policy for Google on machine learning. He is on Twitter @timhwang.
Archived comments are below.
“the widespread coverage of deepfakes in the media itself helps to inoculate the public…”
There’s another fault in the media – making a story out of a non-story.
One example this week.
The no-confidence motion by the British PM’s party was always bound to fail. Every reporter knew this for certain. However, the media made it out to be a tense battle.
They should have told the truth, but that only takes one sentence, not hours of interviews outside Parliament.
I was disappointed that the article didn’t discuss the importance of teaching school-children critical thinking skills to counteract fake news.
I absolutely agree with the premise in the article. However, it is disappointing that the author offered no discussion about the real significance of ‘why’, instead of ‘how’. That is at the root of these false narratives, and is what needs to be at the center of discussion at all levels.