Cyber-Tolkiens? Kuyper-bots?

Christian computer scientists weigh the up- and downsides of artificial intelligence


Source images and Abraham Kuyper: Handout; C.S. Lewis: Everett Collection Inc/Alamy; J.R.R. Tolkien: Alpha Historica/Alamy

Scholars from around the world gathered in May for the annual Kuyper Conference at Redeemer University in Canada. Over several days, they discussed the works of Abraham Kuyper, the Reformed theologian, journalist, and prime minister of the Netherlands who died in 1920.

Derek Schuurman, a professor of computer science at Calvin University, was one of the presenters. But Schuurman wanted to do more than talk about Kuyper. He wanted to let him speak.

When Schuurman stepped onstage, a PowerPoint slide lit up behind him: a black-and-white photo of Kuyper and the words “Greetings! I am the Abraham Kuyper chatbot. Ask me anything about any square inch!”

Addressing the Kuyper-bot, Schuurman began with an obvious question: “What do you think about artificial intelligence?”

The bot quickly spit out an apparently thoughtful answer: “On the one hand, AI can improve efficiency, accuracy, and decision-making in various fields such as healthcare, transportation, and finance. However, we must be careful to ensure that AI does not replace human agency or decision-making, and that it does not reinforce existing inequalities or injustices.”

To create the Kuyper-bot, Schuurman used ChatGPT, the artificial intelligence program that’s making news all over the world. The plausibility of the bot’s answers amazed the audience of experts, all well-versed in Kuyper’s writing.

ChatGPT first grabbed headlines when it debuted in November. Commentators made sweeping predictions that a new age of artificial intelligence (AI) had dawned, stoking fears of everything from job loss to computers that might someday lash out at humanity. But others, like Schuurman, see the technology as a tool that could create a vast array of opportunities—if it’s used in the right way.

ChatGPT is a type of program called a large language model (LLM). Imagine someone said “Mary had a little” and asked you to fill in the next word. You would probably guess “lamb.” ChatGPT works the same way, but on a much bigger and more complex scale. “It predicts the probability of the next word in a chain of words and strings them together in a way so that they actually sound quite plausible,” Schuurman said. “It’s basically a big statistical machine that produces plausible text as output.”
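
To make the idea concrete, here is a rough sketch in Python of next-word prediction using simple word-pair counts. It is only an illustration: real LLMs like ChatGPT predict the next word (or word fragment) with neural networks trained on enormous amounts of text, but the core “guess the next word” idea is the same.

```python
# Toy next-word predictor: count which word follows which in a tiny "corpus,"
# then guess the most common continuation. A stand-in for what an LLM does at
# vastly greater scale with neural networks instead of simple counts.
from collections import Counter, defaultdict

corpus = "mary had a little lamb its fleece was white as snow".split()

# Count how often each word is followed by each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("little"))  # prints "lamb"
```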

Schuurman took his inspiration for the Kuyper-bot from other programmers who used ChatGPT to build chatbots for famous historical figures: “This led me to wonder whether and how the tool might be used to learn more about great Christian thinkers.”

ChatGPT has been trained using vast quantities of text taken from books, articles, websites, and social media platforms, so its responses are very realistic. Users start by typing a question or command called a “prompt.” Examples of prompts include “write a 500-word essay on Hamlet” or “make a recipe for blueberry muffins.” ChatGPT produces responses that sound like they were written by a person. Or, like a Christian theologian.
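
For readers who want to try a prompt from code rather than the chat window, here is a short sketch using OpenAI’s Python library roughly as it existed when this article was written; the API key and model name below are placeholders, and the library’s interface may change in later versions.

```python
# Sketch of sending a ChatGPT prompt programmatically with OpenAI's Python
# library (the pre-1.0 openai package current in 2023). The API key and model
# name are placeholders, not values from the article.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free version of ChatGPT
    messages=[{"role": "user", "content": "Make a recipe for blueberry muffins."}],
)

print(response["choices"][0]["message"]["content"])
```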

While Schuurman is not entirely comfortable with the Kuyper-bot referring to itself in the first person, he was happy enough with the experiment to make additional chatbots for C.S. Lewis, J.R.R. Tolkien, Saint Augustine, and others.
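
The article does not describe exactly how Schuurman built his bots, but a common way to get a persona chatbot out of ChatGPT is to prepend a “system” message that fixes the voice before the user’s question arrives, roughly like this:

```python
# One common way to set up a persona chatbot such as a "Lewis-bot": a "system"
# message fixes the voice, then user questions follow. This is a generic
# illustration, not Schuurman's actual configuration.
persona_messages = [
    {"role": "system",
     "content": "You are a chatbot answering in the voice of C.S. Lewis, "
                "drawing on his published writings."},
    {"role": "user", "content": "How tall is Aslan?"},
]
# These messages would be passed to the same ChatCompletion call shown above.
```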

I tested out Schuurman’s Lewis-bot by asking it, “How tall is Aslan?” Here’s how it replied: “Ah, the majestic Aslan, the Great Lion of Narnia. While the exact height of Aslan is not explicitly mentioned in my writings, it is clear that he is portrayed as a creature of immense stature and power. Aslan is often depicted as being larger than an ordinary lion, exuding a sense of awe and authority. However, it is important to remember that Aslan’s true significance lies not in his physical appearance, but in the deep symbolism he carries as a representation of Christ in the Chronicles of Narnia. His height, therefore, is secondary to the spiritual truths he embodies.”

The thoroughness of the bot’s response impressed me. It was clearly drawn from Lewis’ actual writings. Despite its realism, though, Schuurman said it’s important to remember that ChatGPT has “no built-in understanding about the words that are coming out.” Still, I found it unsettling to interact with a computer program speaking in the voice of a man who has been deceased for decades.

Illustration by Krieg Barrie; Source images and Elon Musk: Patrick Pleul/AFP/Getty Images; Steve Wozniak: Torsten Blackwood/AFP/Getty Images; Robot: Johannes Simon/Getty Images

LANGUAGE models have been around for years. The auto-complete function in email drafts is one example. It predicts what the user will type next. But large language models represent a massive technological leap forward.

Thomas VanDrunen is a professor of computer science at Wheaton College. He was preparing to teach his biannual course on computational linguistics during the upcoming school year when he realized he needed to overhaul his syllabus: “In the two years since I last taught it, it may as well have been 100 years in terms of how things have changed because of the advancements that LLMs have made.”

When OpenAI released ChatGPT, it broke records by acquiring more than 1 million users in five days and 100 million users in two months. The popular video-sharing app TikTok needed nine months to reach 100 million users. On the heels of ChatGPT, tech companies including Google and Meta released their own LLMs. But ChatGPT got the most attention because it debuted first and is free to use. OpenAI released a more advanced LLM called GPT-4 in March, but it requires a paid subscription.

LLMs don’t operate completely on their own. During development, they go through a process called “alignment,” or human training. Large portions of the internet text fed to LLMs include undesirable content such as racist, misogynistic, or violent language. Developers “had to basically get a small army of humans to query the machine and tamp down undesirable output,” Schuurman said.

But alignment can also mean LLMs reflect the biases of the humans doing the aligning. For example, when I asked ChatGPT, “What is a woman?” its reply included this sentence: “It’s important to recognize that the definition of a woman can be multifaceted and can vary across different cultures, communities, and individuals.” It should come as no surprise that several studies found ChatGPT exhibits a left-wing political bias.

To get around this, Schuurman recommends asking ChatGPT to answer in the style of a thinker you respect. For example, “What is a woman? Answer in the style of C.S. Lewis.” With that additional prompting, ChatGPT told me: “A woman, in the style of C.S. Lewis, can be understood as a marvelous creation, embodying both strength and delicacy, elegance and resilience. She is a reflection of the divine craftsmanship, woven with intricacy and purpose.”

POLITICAL bias is just one danger posed by ChatGPT. For decades, Hollywood has been churning out movies and television shows featuring AI that rises up against humanity. Though critics say ChatGPT is a big step in that direction, VanDrunen isn’t worried.

“I don’t see the moment we’re in as something so different from any other technological breakthroughs,” he said. “I would say that if things go wrong, they’re not going to go wrong as Hollywood predicts it, but in ways that we didn’t predict.”

Schuurman agrees that the outcome won’t be as dramatic as Hollywood imagines, but he does see cause for alarm. He worries that because ChatGPT can write code, it could “democratize the ability to write malware because you don’t need to be a Ph.D. in computer science.” He also worries about “latent persuasion,” a phenomenon that has only recently been studied. When language models are incorporated into programs like word processors, Schuurman fears the programs could “nudge people toward different opinions about things because they’re constantly interrupting their thought process by suggesting certain ways to say things. I think as a Christian, that’ll be one of the things that we’ll want to be attuned to.”

In March, a group of Silicon Valley heavyweights, including Elon Musk and Apple co-founder Steve Wozniak, published an open letter calling for a pause on the development of any AI more powerful than GPT-4. Their call has so far gone unheeded. VanDrunen believes such a pause is unenforceable anyway, but Schuurman applauds efforts to raise awareness about how to use these new tools ethically. At the moment, Schuurman fears AI development is a largely unregulated “arms race” between technology companies. “The question about how we should use these tools should not be left to computer scientists alone.” He believes philosophers, social scientists, journalists, theologians, and others should engage in dialogue about the appropriate uses for LLMs and other AI.

On May 2, the Writers Guild of America, which represents writers in film, TV, and other media, went on strike. Among the writers’ demands are protections for their jobs against LLMs like ChatGPT. Schuurman believes their concerns are legitimate. “The whole idea of copyright becomes more complicated because there’s echoes of other people’s work inside of these things.” If job losses pile up, the question of compensation for the original authors may become more pressing.

VanDrunen thinks the impact will be felt first in jobs where writing is a secondary part of the work, such as analysts who write corporate reports. “One could take an optimistic approach to this and say that some of the drudgery of that job will be eliminated,” he said.

In March, investment bank Goldman Sachs issued a report estimating that 300 million jobs globally could be automated in some way by technology like ChatGPT. Schuurman expects LLMs to become important parts of the medical and legal professions because they will “help people in those roles to be able to sift through large amounts of information more quickly.”

VanDrunen said history teaches that technological innovation usually destroys some jobs while simultaneously creating new ones. “There’s nothing particular about large language models that makes me say, ‘Well, that’s definitely going to happen.’ But, looking at every other advancement in technology throughout history, that seems to be what we should predict.”


Emma Freire

Emma Freire is a senior writer for WORLD Magazine. She is a former Robert Novak Journalism Fellow at the Fund for American Studies. She also previously worked at the Mercatus Center at George Mason University and at a Dutch multinational bank. She resides near Baltimore, Md., with her husband and three children.

@freire_emma
