
Supreme Court to review Google’s liability for radicalization

Plaintiffs say the tech giant allowed terrorist ideas to spread


A memorial in front of La Belle Equipe Cafe on Nov. 16, 2015 Getty Images/Photo by Geoffroy Van der Hasselt/Anadolu Agency


A Supreme Court fresh from summer recess waded into the murky waters of social media last Monday. The court agreed to hear a case about platforms’ liability for user-provided content—with tragic and compelling facts.

In Gonzalez v. Google, the tech giant is being sued by the family of Nohemi Gonzalez, an American killed in the November 2015 terrorist attacks in Paris. The 23-year-old student died when three Islamic State militants fired into a crowd of diners at the La Belle Equipe bistro, part of a series of coordinated attacks across the city.

Gonzalez’s relatives and estate subsequently brought an action against Google, which owns YouTube, contending that the company aided and abetted ISIS in the attack. The family claims Google not only allowed ISIS to post videos inciting violence but also recommended those videos to users its algorithms identified as likely to be interested in them. The company uses automated algorithms to maximize viewers’ screen time and their exposure to sidebar and banner advertising.

In a tortured June 2021 opinion, the 9th U.S. Circuit Court of Appeals rejected the claim that Google was liable, relying on Section 230 of the Communications Decency Act. Enacted at the dawn of the internet, the 1996 federal law sought to foster free speech by immunizing platforms from liability for user-generated content. At the same time, it gave providers broad discretion to moderate content deemed offensive.

As social media providers have mushroomed in size and reach, so have claims that the outsized companies’ control over information has gotten out of hand. Conservatives charge the companies with silencing conservative and Christian voices, especially on topics such as opposition to vaccine mandates, same-sex marriage, and gender ideology. Liberals claim the providers have done too little to muzzle hate speech and misinformation, whether about LGBTQ issues or vaccines.

Some also accused President Joe Biden’s administration of being complicit in censorship by pressuring tech companies to block “misinformation” about vaccines during the pandemic. Some groups even demand such censorship. In an Oct. 3 letter, the American Medical Association and two other medical groups asked U.S. Attorney General Merrick Garland to investigate and prosecute those spreading “disinformation” on social media platforms regarding “evidence-based gender-affirming healthcare.”

Given inaction by Congress, conservative-leaning states have enacted their own laws to corral what they view as Big Tech abuses. Recently, the 5th U.S. Circuit Court of Appeals upheld a Texas law regulating social media platforms, while the 11th Circuit struck down a similar Florida law targeting providers. Both rulings could eventually wind up before the Supreme Court.

The Gonzalez case does not focus on the platform’s content moderation—its de-platforming of users and flagging of objectionable content—but on its algorithmic recommendations. Challengers argue in their petition to the Supreme Court that Google engages in editorial judgment when it recommends videos to users—meaning it can be held liable for extending the audience for the videos even if it can’t for the user-generated content.

The case will provide the court its first opportunity to weigh in on the scope of Section 230 immunity. Victims’ rights attorney Carrie Goldberg told Bloomberg Law that the case represented a pivotal cultural moment. “Our society has gone from seeing Big Tech platforms as untouchable by law and legislation to finally recognizing that when left to run amok, they can cause atrocious personal and social injuries,” said Goldberg.

Court rulings are often blunt instruments for resolving policy disputes. The same algorithms that connect puppy lovers with more puppy videos can also radicalize someone with extreme political views who has not yet contemplated a terrorist act.

Refinement, not repeal, should be the focus, argued Klon Kitchen, former director of the Heritage Foundation’s Center for Technology Policy, in an October 2020 report. Kitchen believes the law can be modified in a way that preserves free speech while making internet platforms more responsible for conduct that may lead to harm. “While the online world is not the totality of the public square, it is an ever-growing portion of that square, and good governance and human thriving require that this important statute be better suited for current times and needs,” Kitchen concluded.

That resonates with Jason Thacker, author of Following Jesus in a Digital Age. Thacker notes there is bipartisan agreement that something is wrong with social media and yet little consensus about what to do. There is not a clear Christian position on content moderation, he adds, but Christians do have legitimate concerns about being able to express their faith and views in the digital public square without being blocked for “hate” speech. Congress created Section 230, and Congress needs to fix it, he contends.

Nor is it a problem just for the government or Big Tech. “Christians need to think wisely about how we live out our faith in a more digital society,” Thacker said, adding, “We also bear personal responsibility for the things that we see and the things we engage with, and how we act upon those things.”


Steve West

Steve is a reporter for WORLD. A graduate of World Journalism Institute, he worked for 34 years as a federal prosecutor in Raleigh, N.C., where he resides with his wife.

@slntplanet
