Election deepfakes
We must be more than passive consumers of information in an AI world
As America turns its weary eyes to the heart of another election cycle, the usual sources of conflict and confusion have been joined by a new menace: AI-generated deepfakes. Artificial-intelligence technology is spreading and innovating faster than we can keep up, allowing pranksters, pornographers, and bad actors of every stripe to create incredibly lifelike images, audio, or video of people saying and doing things that they never said or did. In the run-up to the New Hampshire primary, for instance, a robocall imitation of Joe Biden went out to New Hampshire voters warning them to stay home from the polls. In other parts of the world in recent months, deepfakes released on the eve of elections have portrayed opposition politicians in compromising situations or making offensive remarks.
Although the deepfake problem is just beginning to garner public attention, experts have been warning of the impending danger for years. In 2018, a chilling article at Lawfare prophesied a future in which “fake videos could feature public officials taking bribes, uttering racial epithets, or engaging in adultery,” in which “soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort,” or in which “a deepfake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.” In our polarized, suspicious, and trigger-happy culture, it is not hard to imagine such fakes going viral and generating a powerful political backlash before they could be debunked. As the old proverb warns, “a lie can travel halfway around the world while truth is still pulling its boots on.”
Of course, the proliferation of viral deepfakes could have another perverse result, as one recent episode shows. In December, Donald Trump denounced a Lincoln Project ad chronicling several of his past gaffes as an AI fake, when in fact the underlying footage was well-documented as genuine. In other words, we are facing a future, or perhaps a present, where politicians can be accused of doing something terrible that turns out to be fake, or accused of doing something real that they can readily dismiss as a hoax. The result is a post-truth society, in which every claim and belief becomes a mere matter of bias and perspective; the postmodernists should have been careful what they wished for.
It may be helpful—although not exactly comforting—to realize that in many ways, what we are now facing with deepfakes is simply the intensification of longstanding trends. Ever since the advent of radio, it has been possible to persuasively present someone saying something they didn’t really say, and distribute the distortion instantaneously to millions. After all, context is everything, and a cleverly edited “sound bite” can easily make the most innocuous statement look perverse, especially if recontextualized alongside others in a compromising collage. Modern news media, especially when it comes to political coverage, are much more interested in stoking cheap outrage than providing truthful context, and social media has intensified this trend. AI deepfakes simply take the process one step further.
Just because it’s been bad for a while, though, doesn’t mean we should shrug our shoulders at it getting worse. How might we respond to this frightening new post-truth world? Certainly, technological and legal solutions will have some role to play. Social media platforms are trying to deploy algorithms that can detect deepfakes, and many states are rushing to pass laws criminalizing the distribution of undisclosed fakes. But in the race between AI-wielding agents of chaos and AI-wielding agents of law and order, the former may have a distinct advantage.
Thus, we cannot rely on either the coders or the lawyers to rescue us from this mess. Rather, each of us has a responsibility to accept the jarring summons to maturity that life in this cacophony demands. We cannot be mere passive consumers of information; rather, we must “be transformed by the renewal of [our] mind[s], that by testing [we] may discern … what is good and acceptable and perfect” (Rom. 12:2). Above all, we should beware of confirmation bias, that natural human tendency to accept as plausible any information that fits what we already think. Consider: if, during the Mueller investigation, a deepfake had leaked showing President Trump plotting electoral malfeasance with Russian agents, Republicans would have rushed to denounce it as a fake, while Democrats would have breathlessly shared it as “proof.” Both would have been wrong, because both would have jumped to a conclusion that made them feel good, rather than waiting to find out what was true.
Of course, a lifestyle of endlessly suspending our judgment is not healthy. We cannot abandon trust altogether, but we should demand proof, evidence, confirmation, and truth. Trust must be earned.