
Fake and famous

Virtual people are gaining popularity online and blurring the lines of what it means to be human


Akihiko Kondo poses with a doll of Hatsune Miku a week after “marrying” the character. Behrouz Mehri/AFP via Getty Images


Akihiko Kondo wore a white tuxedo to his wedding. Guests clapped and snapped phone-photos as he walked down the flower-festooned aisle. But none of those images would appear in a family photo album because none of Kondo’s relatives or co-workers agreed to come to the ceremony. They had—well, concerns—about the bride. 

Ten years earlier, bullying at work had sent Kondo spiraling into depression. But then he met pop singer Hatsune Miku—a real catch for an ordinary guy. Kondo says his relationship with Miku saved him and enabled him to return to work. 

Just one problem: Miku wasn’t real. But Kondo really did marry her—unofficially at least, which is for now the only way one can marry an artificial-intelligence-powered, computer-generated image of a Japanese anime character. The bride made her appearance at Kondo’s 2018 wedding ceremony in the form of a stuffed doll with turquoise hair and a gauzy dress, which Kondo carried down the aisle before 39 friends he met online. 

As strange as it sounds, Kondo isn’t alone in his emotional attachment to a computer-generated personality. Artificial intelligence (AI) isn’t new. But an increase in computer processing power and more widespread use of the technology, coupled with businesses looking for new marketing angles, mean virtual people are popping up around the world as entertainers, artists, influencers, and friends. They blur the lines of reality and raise questions about what makes humans unique.

Computers that can think for themselves have long been the stuff of science fiction, and a much-anticipated goal for some technology advocates. In June, Google engineer Blake Lemoine claimed a tool for building chatbots, known as LaMDA, had crossed that threshold, becoming sentient. Lemoine offered several proofs, such as LaMDA’s ability to interpret themes in Les Misérables. It talked about its feelings and claimed to think. It passed the Turing test, answering questions with such authentically human language that an evaluator couldn’t tell it was a machine. It also asked Lemoine to find it an attorney. He did, claiming the program had rights and no longer belonged to the company. Google disagreed and fired him for violating its confidentiality policies.

LaMDA is the latest in a long line of programs made to mimic human interaction. Chatbot technology began in the 1960s when computer scientists developed ELIZA, the first chatbot to use a person’s responses to shape its own replies. Its name references the main character in My Fair Lady, who learns to speak proper English. When one of the developers, Joseph Weizenbaum, saw patients open up to ELIZA in a simulated psychotherapist conversation—as if the program were human—he began calling himself a heretic of technology and warned about the dangers of emotional attachment to computers.
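ELIZA’s core mechanism is simple enough to sketch. What follows is a minimal, hypothetical reconstruction in Python, with a few invented pattern-and-reply rules plus pronoun “reflection,” the trick of turning “I feel alone” into “Why do you feel alone?” The original 1960s program used a far larger rule script, but the principle is the same.

import re

# Pronoun swaps that turn the user's words back on the user.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Invented sample rules: (pattern, reply template). The captured text
# gets "reflected" into the reply, which is what made ELIZA feel attentive.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        match = re.match(pattern, text.lower().strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # stock reply when nothing matches

print(respond("I feel alone"))    # -> Why do you feel alone?
print(respond("I am depressed"))  # -> How long have you been depressed?

No understanding is involved anywhere in that loop; the program only mirrors the speaker. That mirror effect is what alarmed Weizenbaum when his patients treated it as a confidant.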

But Weizenbaum’s distrust of technology didn’t stop its advance. Today’s version of ELIZA comes packaged in various apps as an empathetic best friend.

Replika promises to be the companion that cares. Olivier Douliery/AFP via Getty Images

One such app is Replika. “The AI Companion that cares. Always on your side. It becomes … you.” At least, that’s what its website claims. Eugenia Kuyda, Replika’s founder, says, “Hopefully Replika can help you have deeper relationships with your friends.” But instead of talking to people with flesh and bones, users spend hours a day chatting with the app. They claim it’s better than real friends because it’s “nonjudgmental.” The app offers a facsimile of empathy and interaction for people uncomfortable opening up to fellow humans.

Jason Thacker, author of The Age of AI and Following Jesus in a Digital Age, is the chair of research in technology ethics at the Ethics and Religious Liberty Commission. He finds the trend toward virtual people troubling: “We seek to humanize our machines. And at the same time, we’re dehumanizing ourselves.”

So what separates humans from machines? Thacker says society’s central question finds its answer in our origins: “The Bible tells us that all of humanity was created in the very image of God and created distinct and unique. We’re to look to God to define our value, our worth, our dignity, and our purpose and design.” Though computers can do some tasks faster or more efficiently than humans, they can’t replace the pinnacle of creation, Thacker says.

But they can mimic it so well that it’s sometimes impossible to tell the difference between technology and reality. And that can lead to some questionable applications.

Regulation lags behind the fast-moving AI industry, so for now, businesses make up their own rules. Two news outlets in China and two in South Korea worked with the company DeepBrain AI to create digital copies of real-life TV anchors who deliver the news throughout the day, allowing more content for less money.

An AI news anchor on display at the 3rd World Intelligence Congress in Tianjin, China. Imaginechina via AP

DeepBrain’s business development manager Joe Murphy says it’s “really all about More. Faster. Better.” All the “big names” in the United States are considering using the technology, Murphy told TVNewsCheck’s Michael Depp. But he admits that no ethical rule book guides disclosure to audiences. In China and South Korea, a bubble with “AI Newscaster” appears in the bottom corner of the screen.

While virtual newscasters are rare, flawless virtual influencers already abound on social media, marketing products or ideas with few legal parameters. They never age, can work around the clock for multiple businesses simultaneously, and deliver controlled content. They even follow their social media followers back and comment on their posts. Meta, the company formerly known as Facebook, confirmed that its platforms host more than 200 AI influencers.

But not everyone who follows them knows they’re computer generated. Brud, the parent company of virtual influencer Lil Miquela Sousa, took two years to disclose her CGI-and-AI origins and to admit she was selling the things she promoted. That came as a surprise to at least some of her followers.

Rozy, a perpetually 22-year-old virtual influencer from a company in South Korea, started by selling life insurance and now has more than 100 sponsorships. Even KFC has a virtual Col. Harland Sanders, an un-frumpy, jet-setting purveyor of chicken and Dr. Pepper.

Jason Thacker says much of the AI industry starts from a materialistic view of the world that says humans are only flesh: “Who you are is what you do, and your value and worth are based on your usefulness to others in society.” That mentality, he says, gets baked into AI algorithms and unintentionally puts underrepresented or vulnerable people at risk.

Lois Montgomery is a cybersecurity professional based in Nashville, Tenn. She says it’s important to remember that technology is a tool that doesn’t have a soul. “It can’t commit moral acts unless people program it to. So even when we talk about AI, the intelligence is not that intelligent.” That means the language data that feeds artificial intelligence carries the same errors as the people who produced it.

It’s a problem called “black box AI,” where the operations are invisible to the user. How information gets interpreted inside the system is outside the control of the one who fed it in. Banks using AI to determine who gets a loan may unknowingly reject applications from certain zip codes. Colleges using AI to determine enrollment or financial aid may give outsized weight to characteristics the users never see.
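To see how that can happen, here is a deliberately tiny Python sketch with invented data—no real lender works this crudely. The hypothetical training set records past loan decisions that tracked a zip-code flag rather than income. A standard classifier faithfully learns the old pattern, and two applicants with identical incomes get different answers, with nothing in the model’s output to explain why.

from sklearn.linear_model import LogisticRegression

# Invented historical loan data: [income in $10,000s, redlined-zip flag].
X_train = [[8, 0], [7, 0], [6, 0], [8, 1], [7, 1], [6, 1]]
# Past approvals followed the zip flag, not income.
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two new applicants with identical incomes, different zip codes.
print(model.predict([[7, 0], [7, 1]]))  # -> [1 0]: same income, different outcome

The bias rides in on the data, and the finished model gives no hint it is there—which is exactly what makes regulators and ethicists nervous.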

Concerns about unintentional bias drove the European Union to propose the AI Act, a law that would require transparency for AI that interacts with humans. It would also ban applications like those used in China to score people socially, and work to make AI trustworthy while allowing businesses to continue developing the systems that support it. Brazil passed a similar law last September.

The trend of human-looking virtual newscasters and entertainers creates opportunities to talk about why humans need other humans and what makes us unique. We want to believe robots—as in the movie WALL-E—can learn to love. But without the breath of God, artificial intelligence will never eclipse, or replace, its human creators.

Or human relationships. After his 2018 unofficial wedding, Akihiko Kondo interacted with his pop-star bride Miku using a device called Gatebox. The device displayed a 3D laser image of Miku, and her AI programming allowed her to respond to simple greetings. But when Gatebox discontinued its software in 2021, Miku’s turquoise-ponytailed image suddenly disappeared, and Kondo’s digital union ended with a terse two-word message: “Network error.”


Amy Lewis

Amy is a WORLD contributor and a graduate of World Journalism Institute and Fresno Pacific University. She taught middle school English before homeschooling her own children. She lives in Geelong, Australia, with her husband and the two youngest of their seven kids.
