Two separate cases before the U.S. Supreme Court will weigh whether technology companies can be held legally liable for content their users post.
As The Verge reports, the lawsuits blame Twitter and Google-owned YouTube for facilitating Islamic State terror attacks. In Gonzalez v. Google, relatives of victims of an ISIS attack are suing YouTube for allegedly helping turn viewers into terrorists. The case is the first before the Supreme Court to test the liability protections that technology companies enjoy under Section 230 of the Communications Decency Act.
In the Gonzalez case, plaintiffs argue that Section 230 does not extend to algorithmically created recommendations, since YouTube helps decide which videos it recommends to users. Google contends that Section 230 protects YouTube’s methods of organizing users’ posts and that weakening the law would make filtering out terrorism content more difficult.
According to The Verge, plaintiffs in the Gonzalez case claim Google knowingly hosted Islamic State propaganda that allegedly led to a 2015 attack in Paris, thereby providing material support to an illegal terrorist group. The estate of a woman killed in the attack says YouTube automatically recommended the videos to others, spreading terrorist propaganda across the platform.
In a statement to Axios, Google spokesperson José Castañeda said that YouTube has long invested in technology, teams and policies to identify and remove extremist content. “We regularly work with law enforcement, other platforms and civil society to share intelligence and best practices. Undercutting Section 230 would make it harder, not easier, to combat harmful content — making the internet less safe … for all of us.”
Emma Llansó, director of the Free Expression Project at the Center for Democracy & Technology, told Axios that the Gonzalez case has “a very high potential impact with a very small amount of decision makers involved, which makes it a particularly intense decision point.”
Meanwhile, Twitter, Inc. v. Taamneh will consider whether the social media company provided material aid to terrorists in an Islamic State attack in Turkey. The case asks whether platforms can be held liable under anti-terrorism laws if they have policies against pro-terrorist content but fail to remove all such messages. Twitter argues that failing to ban terrorists from a platform intended to deliver general-purpose services does not violate anti-terrorism law.
For further reading on PRsay:
• Section 230 Reform Is in the News Again: What Does It Mean for Communicators?
• What You Need to Know About Section 230, and the Potential Changes on the Horizon
[Photo credit: gary blakeley]