
In too deep?

The manipulation of images and voices is moving off the sidelines



In the last week of 2016, I lined up at the box office to see Rogue One, the Star Wars prequel. I had not heard much about it beforehand, so I was startled by the appearance of Grand Moff Tarkin (villain of Star Wars: A New Hope), seemingly played by the original actor from 1977. Peter Cushing had been dead for 22 years, but his head was masterfully superimposed on the body of a live actor. The appearance of a young Carrie Fisher as Princess Leia in the last moment of the film enhanced the creep factor, as her death on Dec. 27 was still headline news.

Digital facial manipulation isn’t new; it made a notable appearance in The Crow, a 1994 martial-arts movie featuring Brandon Lee. When Lee was accidentally killed during filming, the remaining footage had to be shot with a body double and some seat-of-the-pants computer-generated imagery (CGI) techniques. Since the 1990s, CGI has made massive strides in verisimilitude, and Rogue One shows we haven’t seen the end of it. Reviving Carrie Fisher was impressive—and disturbing—enough, but computer imaging is just getting started. In fact, we’ve entered the age of “deep fakes.”

When it comes to “deep” anything in technology, I’m in deep as soon as I step off the Wikipedia ledge. But “deep fake,” a product of deep learning (that is, the ability of artificial intelligence to improve itself through trial and error on vast amounts of data), is an obvious danger sign on the frontier of computer science, all the more because it’s no longer confined to the frontier. Last spring, comedian Jordan Peele released a video of Barack Obama warning about manipulated images of real people saying things they never actually said. Only it wasn’t really Obama in that video; it was Peele imitating his voice, with the visual image of the president synchronized to the lip movements of the comedian. The voice wasn’t quite right, and the visual sometimes wobbled toward uncanny-valley territory, where an image that’s close to real can seem more unrealistic than a caricature. But an unwary or inattentive viewer could easily be fooled.

Digital imaging technology, from CGI to motion capture to human image synthesis to “deep video portraiture,” is now available to any tech-savvy tinkerer able to appropriate the lingo and software. Only last winter, so many fake-but-convincing celebrity porn videos were showing up on Reddit that the website had to ban them. But any savvy browser can find them, along with software downloads and countless web forums trading tips and how-tos.

This means that anyone with the know-how, reasonably good hardware, and a few hundred images of you could produce a video of you saying whatever its creator wants you to say. The quality would vary, and the voice, at least so far, would depend on a talented impersonator. But voice tech is catching up—in a few years, improved sound could match improved video in a product that looks entirely convincing. At that point, “fake news” moves off the sidelines of political theater and becomes a major player.

In a speech to the Atlantic Council last July, Sen. Marco Rubio warned about the potential: “People are doing it for fun with off-the-shelf technology. Imagine it in the hands of a nation state.” A frightening thought indeed—but does it really put truth at risk?

In the last volume of C.S. Lewis’ Space Trilogy (That Hideous Strength), the hapless protagonist is persuaded to write misleading editorials to advance the power-mad dreams of his corrupt organization. His protest that educated people will see through the ruse is met with scorn: “It’s the educated reader who can be gulled. … When did you meet a workman who believes the papers? He takes it for granted that they’re all propaganda.”

The genuine truth-seeker is rare: People tend to believe what they want to believe, and conviction precedes evidence. Deceptive technology may increase gullibility, but it’s more likely to increase skepticism. That makes God’s Word even more precious. Our safeguard is not outlawing deep fakes, but holding fast to deep truth.


Janie B. Cheaney

Janie is a senior writer who contributes commentary to WORLD and oversees WORLD’s annual Children’s Books of the Year awards. She also writes novels for young adults and authored the Wordsmith creative writing curriculum. Janie resides in rural Missouri.
