For more insights on Section 230 of the Communications Decency Act, PRsay reached out to Capitol Hill veteran India McKinney, director of federal affairs at the Electronic Frontier Foundation, a nonprofit digital rights group based in San Francisco. The following interview was edited for length and clarity.
What do people misunderstand about Section 230, and why?
It gets treated as blanket immunity, as if platforms can do whatever they want and you can’t ever hold them accountable, and that’s just not true. The reason Section 230 exists in the first place goes back to the early days of the internet, when online bulletin boards were a big thing. There were two lawsuits, one against CompuServe and the other against Prodigy, over defamatory content that users posted on their platforms.
In one of the cases, there was no content moderation [on the platform], just a free-for-all. The court held that the company could not be held liable for the defamatory speech that happened on its platform. The second platform did filter for swear words. In that case, the court held that because the company did some content moderation, it was liable for all of the content that users posted on the platform.
Chris Cox and Ron Wyden, both members of the House of Representatives at the time, got together and wrote this language that turned into Section 230.
There are two important parts of the original law. One is that platforms are not held liable in civil court or in state criminal court for content that their users post. The second is that they are not held liable in civil court or state criminal court for the moderation decisions the companies make. This is what allows [such] companies to filter spam and make the internet usable. It doesn’t require content-moderation decisions to be perfect.
The other thing you hear people say is that the internet is not new anymore. But a lot of new business gets generated on social media. Small businesses have an Instagram page, where they feature their product or service, talk about their customer support and build their brand. That branding on social media is only possible because of Section 230, which not only protects Instagram from a defamation claim that [might result from] the comments, but also protects that small business’s Instagram page.
Why is there so much focus on Big Tech when it comes to Section 230?
Everybody’s heard of Google and Facebook. They’re great boogeymen right now [when people] talk about privacy, data protection and competition. It is true that Section 230 protects those companies, but there’s a reason why Facebook is taking out full-page ads in The New York Times [that say] they think it’s time to regulate the internet and to revisit Section 230.
Facebook is a multibillion-dollar company. Litigation is incredibly expensive. If we change 230 and make it easier to sue companies, it’s going to benefit Facebook, because it will help prevent its potential competitors from ever getting funding, getting off the ground or getting big.
How might changes to Section 230 affect the work of communicators?
Any time you do work that engages with the public online — that invites public comment or promotes engagement on a platform — you are protected by 230. Without it, you would be liable for every false statement that somebody makes in a comment section. You would be liable for any content you take down or any of your moderation and filtering decisions. If you apply those policies unevenly, miss something or make a mistake, you could be sued in civil court.
A trio of Democratic senators has introduced the SAFE TECH Act. How might the draft bill — or any reform — impact 230?
I’m still trying to get a handle on the nuances, but one thing I find interesting is the difference between the senators’ FAQs and the text of the [SAFE TECH legislation’s proposed amendments to Section 230] itself. Their FAQs describe [the proposed changes] as being about advertisements. But the [proposed amendment to Section 230 as set forth in the SAFE TECH bill] reads, “Unless the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech,” which doesn’t [seem to be limited to] ads. What Congress intends is not necessarily how a court reads the words on the page. In terms of enforcement, the words on the page matter more than the intent.
Let’s say an Instagram influencer does sponsored content for a particular product, and it turns out the product is flawed or harmful. Under the proposed SAFE TECH Act, could you sue Instagram for hosting the sponsored content, the individual influencer for taking payment to promote it, and the creator of the product itself? That seems problematic.
The SAFE TECH Act aside, do you think the conversation about Section 230 might fade into the background again now that the contentious 2020 election is behind us?
This is going to be something we continue to talk about. Who gets to speak on the internet, and how, is a big deal, and speech legislation in general is really complicated. There’s a lot of misunderstanding about what Section 230 is [and] what the First Amendment is. There’s content that many lawmakers, quite understandably, want platforms to be liable for not taking down, [but hate speech is] protected by the First Amendment. It’s a complicated problem.
Some conservative lawmakers are upset that websites are taking down content that is protected by the First Amendment. But these are private companies, not the government, and you’re allowed to set the tone of the conversation on your own site.
Do you see much misinformation and disinformation about 230, whether from people on social media or from politicians?
Yes, it’s definitely a topic of conversation, and it gets tricky. I haven’t seen language yet that clearly distinguishes misinformation and disinformation from satire and parody. Satire and parody are an important part of political speech, they are protected speech, and context matters. You don’t want to create a system that kills satire, parody or critical speech, especially speech critical of the government.
Truth is in the eye of the beholder, but I don’t want Facebook to be the arbiter of truth. Misinformation and disinformation are a problem, but how do you crack down on them in a way that doesn’t also silence the important speech that goes along with them? Can a bot or a filter tell the difference between actual hate speech and people talking about or reporting hate speech, [as well as] talking about how to be an ally against hate speech [or] how to be anti-racist? How do you tell the difference between those things at scale, in a way that makes it reasonable to assign liability for making a mistake?
Is there anything else you’d like to add?
It’s important for people to emotionally understand that Section 230 protects [internet] users. It is good for all of us when marginalized people — who can’t necessarily connect with other people outside of their own physical spaces — have the ability to see and be seen on the internet. It does have a huge impact on an individual’s life. That’s the beauty of Section 230, the beauty of the First Amendment. Those things are worth protecting.
John Elsasser is PRSA’s publications director. He joined PRSA in 1994.
[Illustration credit: JoeZ]