Brad Littlejohn: The soul stakes of AI


Leading the world with artificial intelligence requires deep wisdom and a clear moral compass


Nvidia headquarters in Santa Clara, California (JHVEPhoto / iStock Editorial / Getty Images Plus via Getty Images)

Editor's note: The following text is a transcript of a podcast story. To listen to the story, click on the arrow beneath the headline above.

NICK EICHER, HOST: Today is Wednesday, August 6th. Good morning! This is The World and Everything in It from listener-supported WORLD Radio. I’m Nick Eicher.

LINDSAY MAST, HOST: And I’m Lindsay Mast.

Up next, a national action plan for AI.

Last week, the Trump Administration released its roadmap for building American AI infrastructure.

White House Chief Scientist Michael Kratsios:

KRATSIOS: America has to win the AI race. As I said before, as a country we have to have the most dominant technological stack in the world, and that's critically important for our national economic security.

EICHER: Some compare the AI race to the arms race during the Cold War, but WORLD Opinions Contributor Brad Littlejohn says today’s decision-makers need to recognize that the stakes are even higher.

BRAD LITTLEJOHN: The advent of artificial intelligence already represents a technological breakthrough at least on par with the harnessing of nuclear energy nearly a century ago. Like nuclear energy, it is a technology clearly capable of doing extraordinary good for humanity or extraordinary harm. And like nuclear energy, its breakneck development is happening in the midst of tense international competition between superpowers.

The White House’s AI Action Plan recognizes the high stakes and high risks of this competition, seeking to roll back overly burdensome regulations that would stifle innovation, but without dismissing the real risks of AI. For instance, the plan highlights our current woeful ignorance when it comes to understanding the inner workings of major large language models and calls for DARPA research to better understand and control AI. And it warns that “The most powerful AI systems may pose novel national security risks in the near future in areas such as cyberattacks and the development of chemical, biological, radiological, nuclear, or explosives (CBRNE) weapons.” With AI as with nuclear, our government seems to be attending to the maxim “with great power comes great responsibility.”

That said, there are at least three significant differences between our situation with AI today and with nuclear science eight decades ago. Together, they suggest that this administration needs to expand and deepen its AI Action Plan if it is to secure our American future.

First, nuclear technology was initially almost entirely a military and industrial technology. It was housed in powerful reactors requiring enormous infrastructure, not in your living room or your pocket. While AI systems require enormous investments in data centers and research labs, they also have innumerable consumer applications that already saturate the market. This consumer-facing AI poses a whole slew of additional questions and challenges largely unaddressed by the Action Plan: In a world where 75% of teens have already experimented with AI companions, how can we combat the retreat from reality these technologies are likely to engender? How are we to address the rampant cheating that is leading to a breakdown of education, or the atrophy of human skill and knowledge that comes from overreliance on easy (and often misleading) AI answers? Such questions are just as urgent as “How are we going to beat China?” After all, “What does it profit a man if he gains the whole world, but loses his soul?”

Second, any attempt at sensible AI regulation today has to reckon with the immense power of entrenched industry players. This was not the case with the Manhattan Project, where the federal government took the wheel to develop a new industry while drawing on the industrial might of corporations like DuPont. Today, the government is playing catch-up. NVIDIA, which has established a near-monopoly in advanced AI chips, currently enjoys a market capitalization of over $4 trillion, with AI-powered titans Microsoft, Alphabet, and Meta not far behind. Such immense market power has already enabled these companies to sway critical Trump administration AI policy in their favor.

Finally, today we are living in an increasingly post-religious and post-truth world, and that poses a problem for one of the AI Action Plan’s stated goals: promoting “human flourishing.” To promote human flourishing requires a commitment to human nature, something few in our tech companies seem to be truly invested in. White House policy lays great stress on the need for AI models to “pursue objective truth rather than social engineering agendas,” but as Pontius Pilate famously asked, “What is truth?” One cannot simply demand that AI models be truth-seeking without a commitment to order society itself around objective truth—as revealed in nature and Scripture. Is that something that this White House is truly prepared to do?

The challenge before us, in short, makes the nuclear race look like a walk in the park. Our leaders will need exceptional wisdom, courage, and determination if we are to win the AI race without losing our souls.

I’m Brad Littlejohn.


WORLD Radio transcripts are created on a rush deadline. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of WORLD Radio programming is the audio record.
