Fooling ourselves
Truth and Love will survive AI, despite humanity’s faith in itself
Steven Spielberg’s 2001 film A.I.: Artificial Intelligence posed the question, What if we could create humanoid robots that were capable of love? The experimental model, named David, looks like an 11-year-old boy and is provisionally adopted into a human family. Though the mother resists at first, she decides to activate the android’s imprinting protocol, meaning David will love her forever. It doesn’t end well.
David’s creator, Dr. Hobby, justifies his work by citing Genesis: “God created man to love Him.” So why shouldn’t men, aspiring to godhood, do the same? The movie was speculative, not predictive; AI today generally refers not to androids but to large language models (LLMs). We—well, most of us—don’t want machines to love us. But we are asking them to think for us.
Since the debut of ChatGPT, LLMs have made themselves at home in offices, research facilities, and (inevitably) college campuses. “Everyone Is Cheating Their Way Through College” blared a recent New York magazine headline. Allowing for the exaggeration of “Everyone,” of course they are. Unlike plagiarism, AI cheating is extremely difficult to detect, since an LLM can write plausible papers in any style, including that of a reasonably bright college freshman. Professors may despair, but AI is here to stay, and even its defenders have concerns worthy of science fiction. Will AI’s benefits for humanity end up redefining humanity?
On May 15, The Free Press hosted a formal debate on a similar question: “Will the truth survive Artificial Intelligence?” That is, in this brave new world, will humans retain their grasp on reality? Arguing for the affirmative were Aravind Srinivas, CEO of the AI company and search engine Perplexity, and Dr. Fei-Fei Li, widely regarded as “the godmother of AI” for her work in computer image recognition. In opposition were computer scientist Jaron Lanier and Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains.
The debate was held in San Francisco, next door to Silicon Valley, so the audience of 900 was presumably tech-friendly. In an early poll, 68% answered in the affirmative: the truth would survive AI. Then the debate began.
Surprisingly, the core arguments on both sides were built not on the costs and benefits of technology, but on faith in humanity. Humans are truth-seekers, claimed the affirmative team; technology helps us find truth, search for answers, and solve problems. It’s a helper, not a master: “What AI does to truth is up to us, not AI,” said Dr. Li. Machine values are necessarily human values.
But that, countered Lanier, is the problem. Computer science is currently obsessed with passing the Turing test—making machines that imitate humans so well we can’t tell the difference. “We’re fooling ourselves,” he insisted, and the Silicon Valley business model is built on third parties paying developers to capture the attention of users and fool them, too.
Carr’s concern was AI’s effect on education. If we want to know what LLMs are doing to truth-seekers, look at your average college student. Gathering and synthesizing information is an automatic function, but it isn’t real knowledge: “By automating learning we lose learning.”
Srinivas replied that he’d anticipated all the opposing arguments. How? By asking Perplexity. What’s more, since debate isn’t his strong suit, Perplexity had supplied effective responses for him. It was an Aha! moment, though probably not the one Srinivas intended. The ripple of startled laughter suggested he’d just proved the opposition’s point, and possibly lost the debate. By evening’s end, 23% of the audience had shifted to the negative.
Tech progress marches on, regardless of human scruples. Biological science is all atwitter over another kind of Turing test. Three Stanford scientists are proposing “bodyoids,” human bodies with living organs grown from stem cells, as a solution to the shortage of donated organs. They would have only enough brain function to keep their organs alive, not enough for sentience: an “ethically sourced,” never-ending supply of spare parts. One can objectively grant the benefits, but it’s a squirmy twist on Spielberg’s A.I.: humanlike creations who “love” us by mindlessly serving us.
Yet truth will survive AI, and so will love, because Truth and Love exist outside the human realm. Inside that realm, if a thing can be done, it will be done, but always with unintended consequences. Our best defense is to hold firmly to Truth, hope in Love, and allow no machine to think for us.