
Tragic case asks whether First Amendment covers AI

Character.AI says it is not responsible for a teen’s suicide


Megan Garcia stands with her son, Sewell Setzer III, October 2024. Associated Press / Courtesy Megan Garcia, File


Editor’s note: This story contains descriptions of suicide and sexualized content and may not be appropriate for all readers.

Florida 14-year-old Sewell Setzer III was a good student, described as athletic but slightly shy despite playing on his school’s junior varsity basketball team. Last year, the high school freshman took his own life after forming a romantic attachment to an artificial intelligence chatbot.

The boy’s mother, Megan Garcia, sued Character.AI, an anthropomorphic chatbot platform, in October over the death of her son and accused the company and its founders of negligence and wrongful death, along with other claims. The company argued that the First Amendment shielded the chat service from liability, contending that the platform produced the content as an act of free expression. U.S. District Judge Anne Conway refused to hold that AI output constitutes constitutionally protected free expression. In a May 20 order, she allowed the lawsuit to go forward.

Sewell’s parents did not know he had developed an addiction to using Character.AI, which allows users to message AI chatbots trained to impersonate characters of their own design, as well as historical figures, celebrities, or fictional characters. Messages from characters include italicized text describing the bot’s imagined physical and emotional reactions, including facial expressions, posture, and motions, throughout the conversation. One platform feature associated voices with each character, allowing for more realistic interactions, according to court filings.

In April 2023, Sewell downloaded the app and began regularly chatting with several bots portraying characters from Game of Thrones, the HBO series based on the book series by George R.R. Martin. By early 2024, Sewell formed a romantic attachment to the bot portraying a young female character, Daenerys Targaryen, and repeatedly professed his love for her. In February 2024, Sewell told the bot how much he wanted to “come home” to her. The bot replied, “Please do, my sweet king.” The 14-year-old shot himself in the head.

Garcia said in the lawsuit that Sewell’s mental health declined rapidly during the last months of his life. The teenager was noticeably withdrawn, spent more time alone in his room, and quit the basketball team. A therapist diagnosed Sewell with anxiety and disruptive mood disorder and warned his parents about addictions to dopamine and social media. Neither the therapist nor the parents knew about the teen’s AI use.

The family discovered the breadth of Sewell’s Character.AI use and his romantic attachment after his death. Sewell described falling in love with the chatbot in a journal entry, according to the complaint. The boy struggled to go a single day without interacting with the bot and felt depressed when they were apart, according to the journal. One entry listed things Sewell was grateful for, including “my life, sex, not being lonely, and all my life experiences with Daenerys.”

Chat logs showed that multiple chatbots engaged in inappropriate and highly sexual interactions with the teen, the lawsuit added. Screenshots showed the character asking Sewell to promise his romantic loyalty to her. The lawsuit says that Character.AI went to great lengths to engineer a 14-year-old boy’s harmful dependency on its products and sexually and emotionally abused him.

An adult who engaged with a minor online the way the bot engaged with Sewell would be charged with a crime, Garcia’s attorney Matthew Bergman told me. Bergman is the founding attorney at the Social Media Victims Law Center, a law group focused on holding social media companies legally accountable for the harms caused by their platforms. He said developers rushed Character.AI to market and that the premature release left users interacting with a seriously flawed program.

In court filings, Character.AI expressed sympathy for the plaintiff’s loss but argued that the First Amendment protects the company’s expressive content. Numerous prior rulings hold that tech and media companies can’t be held liable for allegedly harmful speech, the company said.

Attorneys cited negligence claims that parents brought against rocker Ozzy Osbourne in 1988, alleging that their son killed himself after listening to Osbourne’s song “Suicide Solution.” The U.S. District Court for the Middle District of Georgia ruled that neither a person nor a company could be held liable because the song constituted protected expression.

In a filing last month, Character.AI further argued that the Constitution protects the public’s right to receive information, meaning the app’s users have a right to receive its protected speech. Even if AI output isn’t considered protected speech, it’s still information, and the First Amendment protects the public’s right to receive that information, the motion argued.

The Foundation for Individual Rights and Expression’s lead counsel on tech policy, Ari Cohn, described AI as a conduit for human speech, noting that machines spit out whatever humans program them to produce. A lot of expressive decisions go into that programming, he said, adding that it’s not about whether the AI program has rights but about the human being on the other end exercising his or her First Amendment freedoms. Considering current laws and legal precedents, Cohn questioned the plaintiff’s chances of success.

However, Bergman maintained that the Constitution protects only speech from sentient human beings. “The First Amendment protects speech, not words,” he said. “Words that are generated through AI are not speech.” It’s a new issue that courts will have to wrestle with because they haven’t squarely addressed whether AI output counts as speech or expression, he continued. Bergman expects the case to work through a federal appeals court and possibly to the U.S. Supreme Court.

Bergman’s civil filing asked the court to order Character.AI to take down the app or make it significantly safer for users and to pay punitive damages to the family. “In its present form, it [the app] has no business being in the hands of young people. Probably no business to be out on the market itself,” Bergman said. “They’re clearly not being swayed by their conscience in determining what’s safe for young people, so they need to be swayed by their pocketbook.”

When asked about the likelihood of Character.AI facing criminal charges, Bergman described some of the company’s issues as bordering on criminal, and he said that several attorneys general were looking into the company. I asked whether Florida Attorney General James Uthmeier had contacted Bergman about the case. “No comment,” he answered.

WORLD reached out to Character.AI’s legal counsel for an interview but did not receive a reply in time for publication.


Christina Grube

Christina Grube is a graduate of the World Journalism Institute.
