
Reining in the chatbots

A balanced approach protects children from AI abuses without banning what could be helpful




Whenever a new trend or innovation generates eye-grabbing stories, policymakers understandably want to jump into the fray. This is especially true at the state level, where legislators are closer to their constituents and have a heightened sense of the community pulse. That dynamic helps explain the more than 1,080 proposals introduced across 50 state capitols in reaction to the emergence of artificial intelligence.

Over the past year, there have been several troubling stories about individuals committing self-harm or engaging in exploitative behavior after interacting with AI chatbots. These concerns prompted California to act, and, at the end of the day (I cannot believe I’m admitting this), the results were more nuanced than many expected.

One of the legislative solutions signed into law is AB 1043, which creates age verification requirements for operating systems, devices, and app stores in California. Another, SB 243, was also signed into law and establishes transparency and accountability requirements for the use of AI chatbots. However, the most sweeping proposal, AB 1064, which would have banned outright the use of chatbots by anyone under the age of 18 “unless the chatbot is not foreseeably capable of certain actions,” was vetoed by Gov. Gavin Newsom. This arguably balanced outcome is a reminder that even on the leftist West Coast, checks and balances can lead to better policy.

It is true that the use of AI is a major part of American life these days. You cannot peruse the news without stumbling upon a top story about a breakthrough in artificial intelligence or about how more people than ever are using AI in their work and personal lives. This also extends to people under the age of 18, who are using AI chatbots for school and other purposes (some good, others not so good, let’s be real).

This is why age verification is such an important first step. Personally, I have been advocating for age verification reforms at the state and federal levels for several years now. In fact, California’s age verification legislation, AB 1043, is based on proposals originally passed in Utah and Texas that were developed by my business partner Joel Thayer. Age verification has also effectively kept kids off gambling sites and platforms. Such requirements should absolutely extend to the use of AI.


It is also important that these apps and chatbots be transparent and accountable, both to mitigate risks like self-harm ideation and to protect against sexual exploitation. That is a big reason why Congress was correct to enact the Take It Down Act, which is geared toward curtailing “revenge” porn and sexually explicit deepfakes. Improving these standards to protect consumers (especially children) is a very reasonable goal for legislators.

Concurrently, the AI industry is implementing new safeguards and protections to further improve the well-being of users (including children). The norms and standards of AI are still evolving, and in many ways evolving faster, because questions of liability are far thornier than they were for social media companies shielded by Section 230 protections.

Yet at the same time, cutting off chatbot access for anyone under the age of 18 because the AI doesn’t fit the tedious definitions written by a state lawmaker seems problematic as well. Chatbots are used for many different purposes that are often very helpful, whether it’s kids using Khan Academy to learn math or learning to code with Cursor. As the founders of the Christian AI platform Creed noted, the language of AB 1064 is overly broad and could block a chatbot from encouraging children to rely on their faith or prayer. There is a real need to strike the appropriate balance: letting children leverage AI tools while mitigating real concerns. We need to empower parents in a way that does not throw the baby out with the bathwater.

States will continue to feel pressure to address online child protection on their own until a national AI framework is in place to guide innovators and consumers. One of the authors of President Trump’s AI Action Plan, Dean Ball, recently left the administration and put pen to paper on what a federal legislative framework could look like. Specifically, Ball proposes a series of important and carefully crafted protections for children using AI.

I admit this is a dicey issue for parents to wrestle with, but it is one we cannot run from. It is essential for Christians to be on the front lines of the discourse, discerning how we as families and as churches should think about this transformative technology at our fingertips. May the California legislative session be a launchpad for our further engagement in this debate.


Nathan Leamer

Nathan is the CEO of Fixed Gear Strategies, a boutique consulting firm based in Washington, D.C. He previously worked as a policy adviser to Federal Communications Commission Chairman Ajit Pai, where he played a key role in developing initiatives to close the digital divide. Previously, he was a senior fellow at the R Street Institute and worked as an aide on Capitol Hill.

@nathanleamerDC

