
The limits of ChatGPT

For learning to think, the pen is mightier than the program


THIRTY-ODD YEARS AGO, I engaged in therapy with a machine: a plain-text program designed to help the befuddled and depressed work through their issues. Glowing letters on a blue screen asked, “How can I help you today?” I typed in an imaginary problem. Every statement from me generated a question or an encouraging prompt, such as “Let me re-state the issue … Is this correct?” or “Tell me how you feel about that.” At first I could easily imagine a therapist on the other side of the screen patiently drawing out my responses, until we came to the point where I could expect some answers and wasn’t getting any.

To anyone uneasy about the rise of the machines, I have sort of good news and maybe bad news. A language model called ChatGPT is now available to random users on the OpenAI website. Writer and cultural observer Virginia Postrel interacted with the program and reported her conclusion on Substack: “Routine Writing Is About To Be Free.” That is, ChatGPT writes better than your average college freshman. It can easily handle “routine writing” on the order of company memos and project reports: Just feed it the facts and the program will roll out a presentation in workmanlike prose with which the most zealous grammarian will find no fault.

I was intrigued. I checked it out. Sure enough, ChatGPT is at the service of the public while developers work out the bugs. As OpenAI explains, “We’ve trained this model using Reinforcement Learning from Human Feedback (RLHF)”—that is, “human AI trainers provided conversations in which they played both sides,” and over thousands of hours the model absorbed paradigms of verbal interaction well enough to imitate them convincingly. That ability, combined with the encyclopedic information at its disposal, makes ChatGPT a word-juggling, essay-spewing, homework-dodging bad boy.

I began with a straightforward question: “Explain the debate between Athanasius and Pelagius.” Its answer opened: “The debate between Athanasius and Pelagius was a major controversy in the early Christian church that centered on the nature of human free will, original sin, and the role of divine grace in salvation.” Not bad. How about “What must I do to be saved?” “The specific beliefs and practices that are considered for salvation can vary widely. Here is a general overview of some common beliefs about salvation in Christianity”—followed by a list that might be found on any evangelical church website. “How can I persuade my girlfriend to marry me?” generated boilerplate endearments that would have melted any young lady halfway inclined.

But artistry and humor are beyond it. “Make a pun” yields a lame riddle and a definition of “pun.” The model’s notion of “witty dialogue” leaves Oscar Wilde’s reputation unchallenged. Virginia Postrel’s efforts to refine its concept of blank verse unmasked ChatGPT as a dutiful but dull student, capable of regurgitating facts but not of using them creatively. When it comes to that, ChatGPT doesn’t know anything and can’t leap from fact to application; it can only get there by coded pathways. Materialists argue that the human brain works the same way: AI is only imitating a process that evolution took millions of years to develop. But materialists are missing something: a spark between synapses, the light-bulb brightness of an idea that is the origin and essence of human language, created by the master Creator.

The good news is that, however sophisticated its appearance, AI will never have an original idea. The bad news? It might be able to fake it. And as a sometime writing teacher, I fear generations of students leaning on ChatGPT and its progeny to produce their assignments, thus skipping over a bedrock of their education. For writing is thinking: Speaking for myself, I don’t know what I think until it’s black letters on white. I learned how to do it by trial and error with a pencil, not a program. Will outsourcing thought to a machine result in thinking like a machine?

Janie B. Cheaney

Janie is a senior writer who contributes commentary to WORLD and oversees WORLD’s annual Children’s Books of the Year awards. She also writes novels for young adults and authored the Wordsmith creative writing curriculum. Janie resides in rural Missouri.

