Not so smart after all?
Like humans, AI bots don’t always tell the truth
The internet was originally billed as “the information superhighway.” It’s turned out to be more like the LA freeway featured at the beginning of the movie La La Land: gridlock as far as the eye can see—though at least with plenty of good music videos to pass the time. As more and more information has piled up, we’ve increasingly despaired of ever sorting through it or separating fact from fiction. As so often before, we’ve hoped to solve our technological woes with more technology—in this case, artificial intelligence bots so smart that they can answer any question, solve any problem, and research any paper for you. But what if they aren’t so smart after all?
Thus far, while AI has proven disconcertingly good at creative tasks—including some we had thought were most uniquely human—it’s tended to disappoint at what we might have thought the more basic task of telling the truth. Researchers and casual users alike have found top AI platforms like SearchGPT or Perplexity regularly spitting out completely manufactured factoids with serene confidence. Attorneys rashly relying on such models for research have been caught citing cases that never existed. It seems AI may be even more human than we wanted it to be—just as prone as we are to embellish stories, imagine memories, and make up evidence to corroborate its claims.
While jarring at first glance, the recurrence of AI “hallucinations” (as programmers call them) makes sense as an almost unavoidable feature of the technology, and it offers a disturbing picture of our own thought processes. Faced with reams of information, AI does what we do—only much faster: It tries to form a coherent picture, matching new information to existing information and, where it encounters a knowledge gap, guessing or predicting something that would plausibly fill the gap.
Philosophers have long debated between a so-called “correspondence” model of truth and a “coherence” model. The former reflects the commonsense conviction that the purpose of knowledge is to lay hold of the real world, that something is “true” inasmuch as it corresponds to reality. According to coherentists, however, since we have no unmediated, unbiased access to reality, the best we can strive for is a coherent mental map, one in which our various beliefs fit together and do not contradict. While this assessment is overly pessimistic, it accurately reflects the way we often operate—and for AI, it is spot-on. We do have some kind of access to reality, but our bots don’t. If our X feed tells us that there’s a storm outside, we can always, in a pinch, fact-check by walking out our front door. ChatGPT can’t. It cannot tell us what is true, only what feels true.
In itself, this need not be overly troubling. This is, after all, the limitation we encounter every day when talking to ordinary people. We know that they’re not infallible, that they sometimes misremember things, that they tend toward confirmation bias, and that they probably try to present themselves as more knowledgeable than they really are. If AI bots mirror these human foibles, they will not lead us further astray than we already are. Indeed, they might even prove helpful, by reminding us just how pervasive is our own tendency to try to make facts fit our narratives.
The trouble, though, is that chatbots can sound so much more authoritative. Indeed, we’ve already encountered this problem with the internet in general. However much we may know that falsehoods circulate online with effortless ease, we find ourselves readily taken in by “facts” we find on the web, confidently citing them to prove our point. Precisely because of the impersonality of the medium, they seem so much more objective and reliable than mere personal testimony.
As with many such technologies, the problem is not so much with them as with us—with the unrealistic assumptions we bring to them and the false faith we put in them. While adults who have been trained in critical thinking skills and habits of healthy skepticism may be able to take ChatGPT’s answers with a grain of salt, children first using such bots may be at greater risk of losing their anchor in reality. Indeed, the greatest danger of these tools may be that, having put too much faith in them to help us make sense of the world, we will fall deeper into despairing skepticism when they let us down.