
Thinking through AI

Testing the spirits of a transhumanist age



Fears of artificial intelligence have been around at least since 2001: A Space Odyssey, in which HAL the computer takes over a spacecraft bound for Jupiter and kills almost all the crew. Last November, when OpenAI made its ChatGPT large language model publicly available, people woke up to a development that’s no longer speculation.

Like thousands of Americans, I took the model out for a spin, and I wrote about the results for WORLD (see “The Limits of ChatGPT,” Jan. 28). Its ability to churn out reasonable and literate answers to my questions was impressive, if a bit disconcerting. I concluded that ChatGPT’s poetry was limp and its humor lame. A certain spark was missing, namely the originality and creativity granted to humans as divine image-bearers. Yet I wondered if that spark could be imitated successfully enough to lure future producers and publishers into mass-producing content at little cost.

Here’s an interesting coda to that story. After my column appeared in WORLD, a reader wrote in with a correction. The first assignment I had given the machine was “Explain the controversy between Athanasius and Pelagius.” ChatGPT dived into early church history with aplomb and spun out a grammatically and historically accurate reply—except for one rather major detail. The debate over free will and original sin was between Augustine and Pelagius. Athanasius is best known for an earlier controversy with Arius over Christ’s divinity. I knew that, and would have corrected it if I’d taken more care to review my draft. I was just being sloppy.

But so was the machine. After slamming palm to forehead over the reader’s correction, I looked at the original interaction and wow—ChatGPT had assumed my mistake and run with it. It’s supposed to get “smarter” over time as factual errors are corrected, but I’m puzzled how it could explain the controversy accurately without gently reminding me that Augustine, not Athanasius, was one side of it.

We’ve been warned that large language models (LLMs) can sometimes “hallucinate,” or generate false content. The picture of a machine spinning out fantasies like a fringe-wearing hippie on LSD makes me smile—what a story that would make! The prosaic explanation given is that mistakes are caused by gaps in the training data. The model hasn’t yet incorporated enough information to provide an accurate answer, so it makes stuff up. This seems endearingly human, like any middle-schooler writing a report on a book he hasn’t read.

A more serious issue is “jailbreaking,” or the malicious bypassing of guardrails that are supposed to keep LLMs from promoting conspiracy theories or producing offensive or dangerous content. By appending carefully crafted strings of characters to natural-language prompts, jailbreakers can trick the machine into generating the very responses those guardrails are meant to block.

All this was predictable, as was the gleeful abandon with which students would be using ChatGPT and other chatbots like Bing and Bard to produce their essays. Some teachers have decided to roll with it. Daniel Herman describes in The Atlantic how he revamped his English class this year, scrapping essay assignments in favor of classroom literary discussions. His students can later feed their insights into their laptops and let the bots put them into readable form, which he believes will be more original than the dreary, mechanical papers about themes and symbolism in Moby-Dick.

Interesting approach, but as a firm believer in clear writing as a cornerstone of clear thinking, I’m not convinced. Every technology takes as well as gives, and it’s already too late to debate whether AI is a net benefit to humanity. That train has left the station. Humanity, as always, must struggle with the implications and decide how to respond, or not. Christians, as always, must rethink our calling as culture-makers and culture-redeemers in light of an innovation that could undermine both. “Test the spirits,” warns the apostle in 1 John. For us, it’s the spirit of a transhumanist age as flawed as humans are.


Janie B. Cheaney

Janie is a senior writer who contributes commentary to WORLD and oversees WORLD’s annual Children’s Books of the Year awards. She also writes novels for young adults and authored the Wordsmith creative writing curriculum. Janie resides in rural Missouri.
