
Not so smart after all?

Like humans, AI bots don’t always tell the truth




The internet was originally billed as “the information superhighway.” It’s turned out to be more like the LA freeway featured at the beginning of the movie La La Land: gridlock as far as the eye can see—though at least with plenty of good music videos to pass the time. As more and more information has piled up, we’ve increasingly despaired of ever sorting through it or separating fact from fiction. As so often, we’ve hoped to solve our technological woes with more technology—in this case, artificial intelligence bots that are so smart that they can answer any question, solve any problem, research any paper for you. But what if they aren’t so smart after all?

Thus far, while AI has proven disconcertingly good at creative tasks—including some we had thought were most uniquely human—it’s tended to disappoint at what we might have thought the more basic task of telling the truth. Researchers and casual users alike have found top AI platforms like SearchGPT or Perplexity regularly spitting out completely manufactured factoids with serene confidence. Attorneys rashly relying on such models for research have been caught citing cases that never existed. It seems as if AI may be more human even than we wanted it to be—just as prone as we are to embellish stories, imagine memories, and make up evidence to corroborate its claims.

While jarring at first glance, the recurrence of AI “hallucinations” (as programmers call them) makes sense as an almost unavoidable feature of the technologies and a disturbing picture of our own thought processes. Faced with reams of information, AI does what we do—only much faster: It tries to form a coherent picture, matching new information to existing information and, where it encounters a knowledge gap, guessing or predicting something that would plausibly fill the gap.

Philosophers have long debated the merits of a so-called “correspondence” model of truth against a “coherence” model. The former reflects the commonsense conviction that the purpose of knowledge is to lay hold of the real world, that something is “true” inasmuch as it corresponds to reality. According to coherentists, however, since we have no unmediated, unbiased access to reality, the best we can strive for is a coherent mental map, one in which our various beliefs fit together and do not contradict one another. While this assessment is overly pessimistic, it accurately reflects the way we often operate—and for AI, it is spot-on. We do have some kind of access to reality, but our bots don’t. If our X feed tells us that there’s a storm outside, we can always, in a pinch, fact-check by walking out our front door. ChatGPT can’t. It cannot tell us what is true, only what feels true.


In itself, this need not be overly troubling. This is, after all, the limitation we encounter every day when talking to ordinary people. We know that they’re not infallible, that they sometimes misremember things, that they tend toward confirmation bias, and that they probably try to present themselves as more knowledgeable than they really are. If AI bots mirror these human foibles, they will not lead us further astray than we already are. Indeed, they might even prove helpful, by reminding us just how pervasive is our own tendency to try to make facts fit our narratives.

The trouble, though, is that chatbots can sound so much more authoritative. Indeed, we’ve already encountered this problem with the internet in general. However much we may know that falsehoods circulate online with effortless ease, we find ourselves readily taken in by “facts” we find on the web, confidently citing them to prove our point. Precisely because of the impersonality of the medium, they seem so much more objective and reliable than mere personal testimony.

As with many such technologies, the problem is not so much with them as with us—with the unrealistic assumptions we bring to them and the false faith we put in them. While adults who have been trained in critical thinking skills and habits of healthy skepticism may be able to take ChatGPT’s answers with a grain of salt, children first using such bots may be at greater risk of losing their anchor in reality. Indeed, the greatest danger of these tools may be that, having put too much faith in them to help us make sense of the world, we will fall deeper into despairing skepticism when they let us down.


Brad Littlejohn

Brad (Ph.D., University of Edinburgh) is a fellow in the Evangelicals and Civic Life program at the Ethics and Public Policy Center. He founded and served for 10 years as president of The Davenant Institute and currently serves as a professor of Christian history at Davenant Hall and an adjunct professor of government at Regent University. He has published and lectured extensively in the fields of Reformation history, Christian ethics, and political theology. You can find more of his writing at Substack. He lives in Northern Virginia with his wife, Rachel, and four children.


