Shapers of Cyber Speech – Silicon Valley and American Discourse

In January, Twitter and Facebook removed President Trump and many of his followers’ accounts while Google, Apple, and Amazon cut business ties with Twitter competitor Parler over alleged contract violations, crippling its business. Many on the right are incensed. Others see these actions as an example of polycentric checks-and-balances in the classical liberal tradition. Yet even among those who welcome the silence, many are troubled by Silicon Valley’s aggressive actions. But if there is a problem, what is to be done? Between the First Amendment, the bitter partisan divide in Washington, and the need for some content moderation in widely-used social media, what are the realistic regulatory options for curbing Silicon Valley’s influence on the national discourse? What are the potential downsides of some of these options?

Transcript

Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.

Nate Kaczmarek:  Hello and welcome to this Regulatory Transparency Project webinar. This afternoon’s program is titled “Shapers of Cyber Speech: Silicon Valley and American Discourse.” I’m sure it will prove to be an edifying discussion. My name is Nate Kaczmarek. I am Vice President and Director of the Regulatory Transparency Project for The Federalist Society. As always, please note that all expressions of opinion are those of our guests. 

 

And today, we’re lucky to have a great moderator in Stewart Baker. Stewart is a partner at Steptoe & Johnson in Washington, D.C. From 2005 to 2009, he was the first Assistant Secretary for Policy at the Department of Homeland Security. If you’d like to learn more about the extensive bios of our guests today, you can visit our website, www.regproject.org, where we have all of their complete bios. 

 

In a moment, I’ll turn it over to Stewart. Once the panel has completed their discussion, we’ll go to audience Q&A, so audience, please think of the questions you’d like to ask our panelists. Audience questions can be submitted via the Zoom chat function at the bottom of your screen or by using the raise hand function, and we will call on you directly. 

 

With that, Stewart, Billy, Neil, thank you very much for being with us today. Stewart, the floor is yours. 

 

Stewart Baker:  Thank you, Nate. And for those of you who are thinking about questions, I would suggest that the first thing you do is type it into chat because we’ll be trying to take those questions first. And Nate will summarize them and ask them, and then we’ll turn to people who’ve raised their hand. So it’s best to include it in the chat.

 

The two panelists that we’ve got today are very thoughtful on this topic. Billy Easley is a policy analyst at Americans for Prosperity in Arlington, and Neil Chilson is a fellow at the Charles Koch Institute. As you might expect, they’re probably both kind of closer to the business middle of this debate, and so I will be pressing them to acknowledge and explain some of the other views that the debate has thrown up, and there are plenty of them. And since I am the moderator, I will — I’m thinking about taking a Silicon Valley view of moderation. And if they say something I really disagree with, I’ll just cut them off. 

 

So let’s start. That tells you my prejudices in this area, but let’s try to get them all out. And Billy, I’ll start with you. What are the reasons people think 230 needs to be handled, and maybe more specifically, what is the problem with platforms and speech today? What are people worried about?

 

Billy Easley:  It depends on who you ask, I think, is the easy, lawyerly answer to that question. If you talk to people who are on the right, most of the people that I talk to at the grassroots, conservative activist level, their concern is, to put it bluntly, if the president of the United States could be taken off Facebook and Twitter, what’s the likelihood that I could be taken off of Facebook and Twitter? And I’m going to give them a little bit of credit here. 

 

Some of the people who argue this say, “Hey, listen, I didn’t really see a clear articulation of rules about why President Trump can be taken off but not the Ayatollah or propagandists from the CCP.” And I think it’s really incumbent on platforms and interactive computer services to do a better job of communicating what the rationales are for the specific content moderation decisions that they take.

 

Stewart Baker:  Let me ask Neil to give us the view from the left about what’s wrong with speech on modern social media platforms.

 

Neil Chilson:  Sure. So while Billy, I think, accurately summarized the concerns on the right as too much content being taken down or too many people being taken down, that concern does come up some on the left. There’s been plenty of groups on the left who feel like their content has been unfairly moderated or that they’ve been pulled down off of platforms. 

 

But the more animating concern on the left is that there are certain types of content that aren’t being taken down: types of hate speech, types of dangerous views in their mind, or offensive views, or maybe even just commercial views that they disagree with; that people are profiting off of bad content that’s online; and that the companies really should be forced to take down more content. So it’s kind of diametrically opposed in many ways to the concerns on the right.

 

Stewart Baker:  So I’m going to try to look for places where the left and right might agree on these issues, and I’m not sure we’ll find them. But let me put forward one, and you tell me what you think of it. There’s a common meme that says, look, we know what the social media platforms want. They want to make money, and they want to do that by selling ads to people whose attention they’ve captured. And the way to capture attention is to get engagement, and engagement is another word for pissing people off at somebody else so that they want to watch more and more stuff that feeds their prejudices and tells them how much somebody hates them. 

 

And so without even meaning to, the social media platforms are creating division in the country, building up hatred and a sense of otherness with respect to other people who are on social media, and Section 230 has enabled this kind of irresponsible behavior. Thoughts on that view of the profit motives in how social media refers you to particular content?

 

Neil Chilson:  Yeah, I think that is a common story about the social media platforms. And just setting aside the Section 230 part of that for a bit, I think there’s been some really good social science work, and it’s ongoing. Our experience with these platforms and this type of media is not that long, and so it’s no surprise that when people come in contact with ideas that they haven’t spent a lot of time around in the past, which is much more common online than it is offline, that feelings can flare up. And so the question is whether or not these platforms are profiting from that, and I think that’s a difficult question to answer. 

 

I think people are drawn, in some ways, to the entertainment of owning the libs or owning the Republicans. And in some ways, it’s entertainment. It’s sports. It’s like a team thing. And so is it right or wrong for companies to provide that type of entertainment to people who want it? I don’t know. That’s a big question. I think it does suggest that we should, as users of social media, spend some time thinking about how we are engaging on it and why we’re drawn to maybe get into fights that we wouldn’t have if we were sitting across the table from another person.

 

Billy Easley:  It sounds like one of those situations where they’re mixing up technological problems with cultural problems, in some ways, with this. And the one thing I will simply add here is there has been some legislative discussion about this because Senator John Kennedy from Louisiana has a Don’t Push My Buttons Act which doesn’t just target targeted advertising but also explicitly says, hey, you should not be able to use people’s data to bring up specific content to get a rise out of them, as he described it on the Senate floor. 

 

Stewart Baker:  Well, and that does — it is interesting because that approach conflates the privacy concerns and the “push my buttons” concern into a single unified field theory of what’s wrong with social media of this sort, that you pull everybody together because you want their data, and once you’ve gotten enough people together, nobody else is going to attack your unassailable niche. And then you can keep people engaged in this fashion. So it’s a theory, and I kind of agree, as Neil was hinting, that it doesn’t quite go to Section 230 so much as it goes to a critique of what’s been called surveillance capitalism. And we’re not going to try to solve that problem as well today. 

 

Let me ask whether there is another critique that might span ideological concerns. And that is to say when Silicon Valley demonstrated that it was going to cut off President Trump and that it was going to make sure that Parler couldn’t be used by him or anybody else to express views that he had been expressing, it raised the question whether Silicon Valley as a whole or the leaders are in a position to say, “Look, America, there are some things you’re just not going to be allowed to say.” 

 

And they may be doing it now to views they find offensive, but you can be reasonably sure that if people started to advocate for things that were fundamentally against their business interest, they’d find a way to shake that discourse too. And that’s a way of suggesting that maybe there’s just too much power over our discourse in a group of relatively insular companies out in San Francisco. Billy?

 

Billy Easley:  I actually think that this might be a situation that Neil and I might disagree a little bit here on this point. I’m a little bit more worried about this than some of my other right of center colleagues are. Jillian York over at the Electronic Frontier Foundation wrote a little bit about the AWS decision not to host Parler, for example. And I want to be clear. I think there’s a very big distinction between Facebook and Twitter saying, “You violated our terms of service. We’re no longer going to allow you to host speech,” and internet infrastructure companies saying, “Hey, we’re not going to host Parler anymore.” 

 

And I want to also note that, yes, Parler may not have been a perfectly good faith actor. It may have failed to actually create content moderation frameworks. But it is concerning to hear about these companies saying, “Well, they’re further down the stack,” deciding to no longer host specific platforms. And I think I’m a little uncomfortable with that. Neil, we talked in the Slack about this, but I’d love to have this discussion now.

 

Neil Chilson:  I think you’re right, Stewart, that this sort of concentration or the perceived concentration of power over speech is certainly something that brings the left and the right together in being concerned about these big companies. 

 

I think if you put it in a historical context, though, it’s pretty obvious that it’s never in the history of humanity been easier for a single individual to get out their idea and test their ideas in the marketplace. And that’s because of these — it’s in part because of these big companies, but also in part because of the many, much smaller companies who do something similar. There’s lots of avenues. Gab is still online. 8chan is online. Parler has new hosts. There is a wide range of infrastructure companies out there who can serve the needs. 

 

Now, it is concerning when the big ones face political pressure, essentially, and social pressure to take down speech. And I think that that’s — so I think as a sort of a social matter, we should push these companies, we should encourage these companies to be platforms of tolerance where lots of views can be there and can work things out and hash things out in dialogue. But that’s pretty different from suggesting that government action should require such things.

 

Stewart Baker:  This is probably another topic that we’re not going to get deep into, which is the antitrust issue, whether the problems that people are concerned with and especially the problem of too much speech suppression would be resolved if there were more aggressive antitrust measures being taken. 

 

I have a cyber law podcast, and on the last podcast when we were talking about people who were taking on Silicon Valley and whether they were taking on comic book archvillains, Amy Klobuchar came up because she has raised both antitrust enforcement and Section 230 reforms, a kind of double-barreled punch that led me to compare her to Dr. Octopus. But I think the idea of breaking up the companies is a separate one, and let’s see if we can stick to 230. 

 

So let me now just take us to 230. And I’ll ask Neil, can you walk us through — well, actually, why don’t I do this because I know you’ll probably disagree with me. I’m going to walk it through, and then you can tell me where you disagree. 

 

There are really two pieces of Section 230, and they are completely different. And when people talk about Section 230, they are often confusing the two because they are really about two very different things, (c)(1) and (c)(2), we can call them, like Thing One and Thing Two. But (c)(1) basically subsidizes the companies by protecting them when they include voices that are not theirs. And that essentially says you’re not going to be held liable as a publisher for things that other people say on your platform. So that’s a protection for what’s included. 

 

And there’s a separate (c)(2) which says and we’re going to protect you when you decide to suppress certain speech. We’re not going to let you be sued for suppressing. And that’s another kind of subsidy, although smaller because there are fewer ways you can be sued for taking people’s speech down. But those two things, protection for inclusion and protection for exclusion, are both part of Section 230. And it turns out the people who want to get rid of Section 230 or change it often have very different views about those two things. 

 

Now, Neil, I know you said you had some quibbles with my suggestion, probably the suggestion that it’s a subsidy. So go for it. 

 

Neil Chilson:  I’ll just put aside that. We can call it various different things, but I think the gloss that I would put on it is that courts in many ways have said that (c)(1)—and (c)(1) is by far the most used protection under Section 230—that (c)(1) covers both what’s hosted and choices that the platforms make to take content down. So it’s not just (c)(2) that covers moderation; (c)(1) also protects that editorial function of —

 

Stewart Baker:  — They say, “I so disagree with this that I’m taking it down.” 

 

Neil Chilson:  Well, yes. And they’re not using the (c)(2) protection. They’re using the (c)(1). The one other thing I would add is that I think people really confuse how Section 230 protects companies. We talk a lot about that it protects them from liability. But the primary protection, as I see it, and when you look at most of these cases, is that the First Amendment protects most of these companies from liability. 

 

What Section 230 protects these companies from is having to litigate all the way through to a constitutional case about a specific issue. And so in many ways, I think of Section 230 as a tort reform bill that’s much more like anti-SLAPP laws that allow frivolous lawsuits to get dismissed very quickly, which is important, given the scale of user-generated content and the number of potential cases that platforms could face if they didn’t have this quick method to get rid of frivolous lawsuits.

 

Stewart Baker:  All right. Well, there we have Neil Chilson trying to persuade the entire right wing of the country to hate tort reform. [Laughter] Billy, can you elaborate on what the core of (c)(1) and (c)(2) are? Give me a good example of something where almost everybody would agree (c)(2) is needed, or (c)(1) is needed.

 

Billy Easley:  If I go on Facebook and claim that Hillary Clinton is part of a cult of people who are selling kids online, you would expect that I should be able to be sued for the defamation of saying something like that, and not Facebook. I think people understand that.

 

Stewart Baker:  I kind of agree with you. I think defamation, which is what really triggered 230 in the first place, is a classic example of speech that could appear on your platform. You certainly don’t know whether it’s true or not, in most cases. In your case, I think we probably would. But in many cases, you’re not even going to know whether it’s defamatory. It could well be that in the particular community that somebody spends their life in, it is defamatory to say that they don’t believe in a particular religious doctrine, which most of us wouldn’t be familiar with, but which would be defamatory to them in their church community. 

 

And so the platform is completely at sea, even when it gets a complaint. It says, “Well, I don’t know. They say it’s true. You say it’s false. What am I supposed to do?” So in those circumstances, it’s a little hard to ask them to take on that responsibility. And that is the kind of classic, to my mind, example of a (c)(1) where we need (c)(1), which is why getting rid of 230 is a little hard to take. 

 

How about (c)(2)? What would you say was a classic example of something where most people would agree of course you’ve got to protect their ability to suppress that kind of speech? Neil?

 

Neil Chilson:  Well, let me give what might seem like an innocuous example, but I think — and again, I would say this is probably protected by (c)(1) as well. But for example, if I set up a blog or a Facebook group that focuses on knitting — let’s just make it a blog where it’s my personal blog about knitting. Not one of my top ten hobbies, but let’s just assume that. And people come on there, and they want to talk about all sorts of other things. 

 

I think probably people intuitively understand that I should be able to take down that irrelevant content in order to maintain the usefulness of my community to the people who want to be there to talk about knitting. And absent 230, there is at least some risk that if people wanted to come and sue me over taking down that content, I might get dragged into court, and it’d be much more expensive than in the Section 230 context. Now —

 

Stewart Baker:  — That’s not what they had in mind when they passed it, though. They had a pretty good idea of what they felt was offensive speech that you could take down.

 

Neil Chilson:  Right. You’re right. Probably the more — well, I would say they did have in mind that they wanted the internet to have lots of different communities that were able to police themselves. 

 

But maybe the more classic use case for (2)(b) and (2)(a) is speech that is protected by the First Amendment, but a person doesn’t want to host on their site. So pornography is a great example. Lots of people — Facebook and Twitter, they want a place where lots of people feel comfortable to come and engage, and so they take down pornography. That speech is protected under the First Amendment from government interference, but platforms, just because of the kind of community that they want, they want to take it down. And Section 230 (c)(2) allows them to do that, and (c)(1), I think, as well. 

 

Stewart Baker:  Yeah, you’ve said that a couple of times. Can you elaborate on that? (c)(1) basically says you’re not going to be treated as a publisher just because you let people use your forum. Why do you think that protects you when you take people down?

 

Neil Chilson:  Well, the courts have basically looked at this and said that the editorial discretion that exists by — you’re not going to be treated as a publisher for liability purposes, but you can act in an editorial way on your site. And so I think those two things are very intertwined. If you’re not going to be treated as the publisher or speaker of any information provided by somebody else, there’s an editorial element in there as well. And that’s how courts have applied (c)(1). And that’s why I think most cases are decided under that. 

 

Stewart Baker:  I’m skeptical about that. It seems to me that (c)(2), which is very clear about what the — about the protection for editorial decision making is more appropriate. In a close case, you might say, well, I’m not going to read (c)(1) as sub silentio overriding the very clear rules in (c)(2). But I take your point. The courts have not exactly covered themselves with glory in this area. They’ve been heavily influenced by some very effective lawyering by a combination of business interests and civil society to maximalize Section 230, don’t you think?

 

Neil Chilson:  I don’t know if I agree with that characterization exactly. I think the courts — what we do know is that the courts have gone through what is in many ways like a common law process, applying the law to many, many, many, many different sets of facts, hundreds of cases. And over time, the law has evolved in application to where we have it now. 

 

And so I’m a bit Hayekian in this sense, and I’m a fan of the common law approach. And in many ways, I think that we look back at that history of hundreds of cases and say, hey, this is not decided right, at some peril. We should respect some of that gained knowledge over time. Not that the words are sacrosanct, but that’s sort of how I think about it.

 

Billy Easley:  I agree. I think Section 230 is a maximalist statute. So I think when the courts have applied it in the ways that I think Stewart is saying is sort of a maximalist approach, I think it’s because the statute is pretty broad in its protections.

 

Stewart Baker:  Well, yeah, although there have been decisions where the courts have said a competitor complains that their speech about their product is being taken down because it competes with the platform’s speech. And the court said, “Sorry. Section 230. You don’t get to complain about anticompetitive behavior in moderation of speech. That may be an antitrust violation in most places, but thanks to Section 230, it isn’t.” That strikes me as pushing the protection for editorial decision making and takedowns way beyond what Congress had in mind.

 

Neil Chilson:  So can I flip the script a little and ask you, do you think the First Amendment allows a company to take down a competitor’s speech on its own platform? It’s hard for me to — I don’t know that an antitrust action — the intersection between antitrust and the First Amendment is not crystal clear, but it’s hard for me to imagine that the First Amendment would tolerate requiring a company to leave up the speech of a competitor on its platform. 

 

Stewart Baker:  In a circumstance where there is competition, you might be right. But where the competitor controls a platform that is dominant, it seems to me that that’s pretty problematic. And just to remind you what the (c)(2) protection for taking stuff down is, it is not something that says you can do anything, you can take anything down. It says that you can take down—let’s see, where is that language—anything that the provider considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable. So you have to read that pretty hard to turn otherwise objectionable into, yeah, it might cost me sales.

 

Neil Chilson:  Yeah. Of course, there’s also (2)(b) which says any action taken to enable or make available to information content providers or others the technical means to restrict access to the material described in paragraph one, which is another category of —

 

Stewart Baker:  — I’m going to put that aside because it’s not that. It’s something users can use. So I’m going to put that aside. 

 

Neil Chilson:  But as Billy said, otherwise objectionable is a pretty broad phrase. And so Congress wrote it that way. The courts have interpreted it quite broadly. And I think that’s pretty consistent with the First Amendment rights of companies to decide what’s hosted on their platforms. 

 

Stewart Baker:  So let’s move into some of the discussions about otherwise objectionable and some of the ideas about how to discipline, if that’s appropriate, the kind of authority that social media exercises when it suppresses speech. The kinds of things that we’ve seen for regulating what we’re calling—I’m calling, at least—(c)(2) immunity include an effort to say otherwise objectionable needs to be narrowed because the platforms are basically using that, and they persuaded the courts that otherwise objectionable means if they have an objection to it, they can take it down. 

 

And many of the proposals have said, “Why don’t we narrow that down to something a little more objective?” And I wondered what you thought of those proposals. Billy, let me start with you.

 

Billy Easley:  So if I’m getting this right, I believe that some of the proposals that I’ve seen are the Online Freedom and Viewpoint Diversity Act from Senators Blackburn and —

 

Stewart Baker:  — Yeah, and I think we had something like this from the Justice Department in the Trump administration.

 

Billy Easley:  Yeah, and I believe they were pretty substantially similar in general. It’s clear that they were informed by the same sort of concerns here. And what they basically wound up doing was two different things. Number one, sort of raising — as you said before, the current language says, hey, if an interactive computer service considers this to fall into one of these buckets, it raises that to an objectively reasonable standard. 

 

And it also limits the number of buckets you can play with here in terms of protections to, if I’m recalling this correctly, anything that promotes terrorism or self-harm. And I believe there’s one other one. I’m sorry, I can’t recall exactly which one it is right now. But the key point is that —

 

Stewart Baker:  — Violent extremism, self-harm, I think, and promoting terrorism. Those are the kinds of things that I’m seeing.

 

Billy Easley:  I think it also says if it’s just unlawful.

 

Stewart Baker:  Yes. 

 

Billy Easley:  Right. Sorry, it took me a little bit there to remember. And I think the main problem with this is it’s sort of non-exhaustive. I think one of the things that I actually bring up to this point is what about spam, or what about election disinformation, or medical misinformation, things that Congress has actually bugged these platforms to actually take down in the last year or so. This doesn’t contemplate it as well. 

 

And I know I’m being long-winded here, so just one more point. I want to return to the spam issue because as James Grimmelmann at Cornell Law School I think has talked about, spam is one of those situations where it’s someone abusing an information technology infrastructure in ways to get people’s attention. And Congress is going to be really bad at finding ways to define that in legal terminology. And so I think it makes sense to leave a broader view of (c)(2) generally for emerging technologies and new platforms.

 

Stewart Baker:  And Neil? You’re on mute.

 

Neil Chilson:  Thanks. That probably would have — may have been better than what I was going to say, to just watch my mouth flap. Sorry, I was scanning through some of the chat, so I may have lost the thread a little bit here. There’s a lot of great questions in here, really hard questions.

 

Stewart Baker:  Okay. Well, we’re going to get to them shortly.

 

Neil Chilson:  Yes. The DOJ-type proposals I think — remember, if you think of this—and I agree, not everybody does—but if you think of this law as primarily a sort of tort reform where activities that the companies are doing are protected by the First Amendment and would get thrown out of court if somebody brought a lawsuit under — for those activities, then to the extent that the categories of speech are narrower than what the companies would be allowed to exercise their rights under the First Amendment, then I think we start to remove the incentive for user-generated content. 

 

And that’s the ultimate outcome. The ultimate purpose of 230 is to provide the ability for user-generated content platforms to exist. And if you narrow the categories of moderation such that the companies are forced to either take down more content because they’re concerned about liability or to face a lot of expense in defending their moderation decisions, that undermines the purpose of Section 230 overall. 

 

And so I think a lot of these — we can talk about a lot of the immunities, but that’s how I think about all of them. How much do they undermine the ability to quickly dispose of frivolous litigation or abusive litigation? Because that is the core problem that Congress was tackling. 

 

Stewart Baker:  All right. Let me challenge you on that from, at least, the right. There is no way — there is no world in which Twitter says, “Oh, maybe we shouldn’t have users participating in Twitter.” They’ve got nothing but users participating in Twitter, and without that they’ve got no business. So they’re never going to do that. 

 

And they are spending boatloads of money looking for speech they don’t like already. The idea that a few lawsuits would prevent them from making enough money to stay in business is implausible, isn’t it?

 

Neil Chilson:  You’re talking about one platform. And I would say Twitter doesn’t spend nearly as much money as Facebook, for example.

 

Stewart Baker:  That’s true.

 

Neil Chilson:  But it certainly spends a ton more money than a platform like Medium does, for example, which I think, as far as I know, has a couple lawyers in-house to do this work. And there’s a ton more platforms out there who are protected by this. And let’s not forget, Section 230 also protects users so that when you quote me online, you’re not responsible for the thing that I said in my quote. 

 

So overall, I would say if we only think about this in the context of the big companies, we’re making a bit of a mistake. And I think you’re right. Facebook, Google, maybe Twitter—although they don’t have the sturdiest business model—could survive this type of increased cost. But there’s a whole bunch of people who maybe they run their blog for free, and they’d be like, “It’s not worth it. I’m not going to take the risk to provide this space.” My knitting blog, for example. I’m not going to host it if — the first lawsuit I get, I’m just going to get rid of the blog, or I’m going to stop having people participate. 

 

And so I think we’ve got to think about all of the potential communities online that would be affected by this, not just the big guys who probably would be happy if a law took out a bunch of potential competitors and drove more eyeballs to their services. 

 

Billy Easley:  There’s a fascinating case, a New Jersey case, happening right now with Julianna Reed and whether or not she could be sued for her retweets where Section 230 was brought up. I’ve written a blog about this. And I just bring that up to illustrate Neil’s point that we keep on talking about Section 230 only defending these big companies. But it also could defend someone who wants to retweet someone else’s content from being sued.

 

Stewart Baker:  Okay. I think all of that is pretty clear. Let me then turn to (c)(2) reforms so we get on the table ideas that are floating around. There, the notion is — the most common notion is that — sorry, (c)(1). The most common notion is that there are certain kinds of things that you shouldn’t be allowed to leave up, human rights violations, harassment, cyberstalking. Those are things that I think Senator Klobuchar’s bill flagged. 

 

What’s wrong with that, at least assuming you buy into the left’s view that there’s already too much bad speech on these platforms? Go ahead, Billy.

 

Billy Easley:  I’m going to leave Neil some time here because I think he might push back on me a little bit here. Actually, I could see a world where there’s a civil rights exception that’s written narrowly, very tightly, that might not eviscerate Section 230. And it seems a little weird seeing the Roommates case, the Facebook housing ad discrimination regulations that were happening a couple years ago. I think the case for exempting these federal laws specifically is fairly strong in my view. 

 

I think the problem with the bills like Yvette Clarke’s or Klobuchar’s bill is that they’re just too broad. One of the things that I would do if I could sit down with Representative Clarke’s staff or Klobuchar’s is ask what the actual landscape of laws impacted by this would be, because their bills say state, federal, and local laws that deal with protected classes or have any sort of impact on civil rights would be exempted. 

 

Does that mean that — I believe Washington, D.C., protects political affiliation as a protected class, or a couple other jurisdictions do. Is that really the intent of what they want to achieve? It might be, and we can quibble about that, but I’m very skeptical of making a broad exemption when we don’t really know the full landscape of what the impact would be on each federal law.

 

Stewart Baker:  So your concern is that it encourages more takedowns, and those takedowns will have a bias based on what’s in a local law, and a local law could include a lot of things. There’s plenty of bills that say you can’t discriminate based on veteran status, and that might lead to people saying, well, maybe there’s a problem with highlighting how many people in the January 6th demonstrations were ex-military. So you can imagine that. 

 

Neil, if you turn on your mike, we’ll be glad to listen to you.

 

Neil Chilson:  I think it’s interesting to think about some of these exemptions. As I said — as Billy said, federal criminal law is not exempted at all right now. And it’s interesting to think about how federal civil protections on some of this would work as well. 

 

The question is how it’s structured. So the way that these turn is essentially like if you don’t take down — I’m making the formula very broadly, but it’s sort of if you don’t take down this type of content, you don’t get any Section 230 protection. And so that is conditioning — in many ways, that’s conditioning a protection or a sort of benefit on a type of moderation. And that raises some First Amendment concerns. Now, maybe those are — in certain areas, those are surmountable, but that’s the way I kind of think of it. 

 

I wish there was a more targeted way to say — sorry, I should say it this way. If you step back and you said, look, you are not allowed to leave up — what if you just leave Section 230 aside to say, what if you wrote a law that said you have to take down content that does X, Y, and Z, that fits these categories. That law would face a lot of First Amendment scrutiny, and for good policy reasons. We don’t want the government setting rules about what people can say on platforms. 

 

And I think as conservatives, I think that we should be particularly concerned when the people writing the rules for what should be taken down maybe don’t share our ideas of what good speech is. And so then if we bring back Section 230 in here, and that’s sort of a mechanism to get at that thing we can’t do because of the First Amendment, that should raise flags for people. That’s all I’m saying. And I’m not saying in the particular case of civil rights. Maybe there is a way to do that that works, but I think that hasn’t been carefully talked about to be narrow. 

 

Just to Billy’s specific example, if political affiliation was a protected class, it would mean, what, you have to let Democrats be in your Republican Facebook group and say, “Just take over it if you want,” otherwise you’re going to lose Section 230 immunity for your platform. That seems pretty heavy-handed, and it seems to undermine one of the main benefits of the internet, which is —

 

Billy Easley:  — It’s a clear backfire effect. There’s no way — there are tons of civil rights groups that came out in favor of these bills. And I would like to ask every single one of them, did you really mean to outline the effect that Neil just brought out, which is the ability for people to go to specific Facebook groups and say you can’t take out my speech anymore based on political affiliation? Come on. That can’t be what these organizations or even the representatives and senators want to achieve here. So again, I think this was not written narrowly enough to be effective in my view. 

 

Stewart Baker:  So I think there is a difference between saying you can’t kick somebody out of your Facebook group because you don’t like what they say and that Facebook can’t deplatform them. 

 

And I want to cover two points before we go to questions, and I’ll do it quickly. Neil, you’ve raised several times the First Amendment issue, both in terms of protection of platforms for what they allow and protection of platforms for what they take down. I want to talk about taking down because that is suppression of speech. There’s just no doubt about it. And you’ve said, well, this is a First Amendment right to suppress speech that the platform doesn’t want to have on its platform, and there is. 

 

But there have been lots of cases, or enough cases at least, where the courts have recognized that letting one big player exercise its First Amendment rights by squashing the views of a large number of other people is not consistent with the First Amendment’s values, and that it is appropriate for government to say there are circumstances where a single company with control of a forum has to allow views that it disagrees with, notwithstanding the First Amendment. Why isn’t that principle appropriate in the context of (c)(2) suppression?

 

Neil Chilson:  Well, again, setting aside 230, because I think we’re not in 230 land anymore when we’re talking about whether or not the government can require a company to allow speech that it disagrees with, you called it suppression of speech. I would call it freedom of association, which is also protected by the First Amendment where a company just doesn’t want to be associated with a certain set of views, and so it doesn’t let people use its platform. 

 

That is very, very strongly protected under the First Amendment. And the few cases that you’re talking about are cases where, for example, you had a company town that held itself out as providing all the functions of government, and therefore when it suppressed speech in those conditions —

 

Stewart Baker:  — Or a shopping center.

 

Neil Chilson:  Even there, the protections, their ability to moderate the content in that setting was not that constrained. It was pretty open. And I think those two analogs are not particularly strong for platforms that are intended — they’re much more like newspapers in some ways than they are like shopping malls or company towns. And the First Amendment protection for them is pretty strong in that condition.

 

Stewart Baker:  All right. Last question, and that is that this is the set of proposals that I think might have some chance of succeeding because they are content neutral to a degree, and that is to say that in the context of (c)(2) when you’re suppressing speech, you have to live up to your own rules. You have to provide some kind of due process. You’ve got to provide some transparency about what you’re taking down and why. 

 

We start with Billy. Plausible?

 

Billy Easley:  I have no problem with transparency requirements. I really don’t think anyone has any issues with it. I think the bigger issue is whether those actually deal with the main problems that are animating this debate if we’re more transparent and if people are saying that they’re following the terms of service. I actually don’t think they do.

 

Stewart Baker:  Okay. Neil?

 

Neil Chilson:  Yeah. I would say, look, it is clear that it is unclear how these decisions get made at companies. And that’s problematic not just from a regulatory point of view, but it’s really problematic from a trust point of view. Users don’t trust these platforms because the platforms don’t explain what they’re doing. And they really should. And so is that a government requirement that they explain? 

 

Again, transparency requirements, I think if you go and read their terms of service on all of these platforms, including ones that are free speech mavens like Parler, they gave themselves every right to take you down at any time for any reason. And so I think if you’re just being transparent about that level of power, it’s not very helpful.

 

Stewart Baker:  What you heard is exactly what we’re doing.

 

Neil Chilson:  Right, yes. We can do whatever we want whenever we want. That doesn’t help much. And so I would say there are laws that require companies to be honest about what they’re doing. I spent years at the Federal Trade Commission, and the Federal Trade Commission has deceptive and unfair acts and practices authority. But again, most of their terms and conditions are pretty clear that, for the most part, they are not promising to leave you up on the platform. 

 

And so while I think that transparency is a must and we should push these companies to do it, I think finding legal tools that don’t over — that actually solve the problem is actually a bit harder. But if Congress wants to pass some rule that says you have to be transparent about how you take people down and here’s some guidelines on it, I don’t see that as the worst thing in the world. But I don’t think it has much to do with Section 230. They could do that without changing Section 230.

 

Stewart Baker:  Yeah. Or they could do it in Section 230, which is where a lot of people have proposed doing it. Certainly, the Justice Department did. 

 

All right, we’re going to stop here because we’ve got a lot of great questions. I’m going to ask Nate to give us the first question.

 

Nate Kaczmarek:  Yes. The questions have been piling up, and I’ll do my best to relay them as they came in. One of the first questions — I know Stewart’s been keeping us away from antitrust discussion, but one of the first questions was what would be worse in the panelists’ minds, breaking up the monopolies that run social media or repealing Section 230 altogether?

 

Stewart Baker:  Okay. Neil, I’ll start with you because you actually have a little bit of FTC expertise. 

 

Neil Chilson:  Oh, I hate hypotheticals where you can only choose between two bad things. That’s the worst. Both would have pretty bad effects. I guess in some ways, 230 applies much more broadly than a breakup of a single company would, and so in some ways, the effects of repealing 230 would be much broader than breaking up a monopoly. 

 

Now, the only caveat I’d put on that is if the way you break up a company is by changing antitrust law, which has a very broad application, that could also have some really big knock-on effects. But if you can break up the companies under current antitrust law, if you can show that they’re violating the antitrust law now, we should probably be doing that anyway. So maybe it shouldn’t be an either/or. 

 

Stewart Baker:  Billy?

 

Billy Easley:  I agree. 

 

Stewart Baker:  All right. I’m not sure that’s completely fair because under current antitrust law, you couldn’t break them up. And so the real question is are you going to say because of the extraordinary power that a few platforms have over our speech, we are just going to make sure we break them up, even if Congress has to mandate it. And given a choice between that and Section 230, maybe you’d be more concerned about the antitrust remedy. 

 

But let’s go to question two because we’ve got a lot.

 

Nate Kaczmarek:  Sure. So the next question I had on my queue was from a questioner who indicated that Section 230 is fairly clear and easy to follow and adjudicate under. And I know we’ve talked about this a little bit, but if you’d like to expand. If you were — if your hand was forced and you had to get rid of 230, how would you write it in a way that was better than what we currently have?

 

Stewart Baker:  Billy? Billy’s rolling his eyes because he’s not sure what he’s going to say.

 

Billy Easley:  I don’t know how I could come up with another law that aligned the incentives in favor of user-generated content speech to the same degree that the current law does. I know I’m dodging the question here, but having a liability shield as strong as (c)(1) as it has been developed in both the statute and the jurisprudence, in combination with (c)(2), no, I don’t think I could. Maybe someone could, but I don’t have the brains to do it.

 

Stewart Baker:  I suppose I’ll ask Neil this. Suppose you just made this an affirmative defense: how could I possibly monitor this kind of behavior? It means you get a lot more lawsuits, but you’d probably win most of them. 

 

Neil Chilson:  Well, I actually think the current law is that it is an affirmative defense, but a lot of the courts apply it to allow a motion to dismiss. And so I don’t know that that would change anything. I know some of the proposals have that language in it. I think the Safe Act does.

 

Billy Easley:  It does.

 

Neil Chilson:  But I don’t — the courts may just — it depends on how the courts interpreted that. They might interpret it to mean, no, we really mean it this time. It’s an affirmative defense. And there is some extra language, I think, in the Safe Act that says that they have to prove it, which always would move you past a motion to dismiss because it raises the factual issue. And I think that, again, if the test is how do we balance the incentives to allow user-generated content, that tips it more towards taking down more content or incurring a lot more litigation costs. 

 

So I will flag, I did mention earlier that I’m a big fan of the common law, and I wrote a whole article for Protocol that points out that it’s really fair to say that what Section 230 did was it cut off the development of common law. It jumped in front of the development of common law and said, “We’re going to do it this way. We’re going to short cut maybe how common law, especially the common law of defamation, develops.” 

 

That has had downsides. We would not be having this fight if the common law had more or less been allowed to evolve. It would have been pretty costly in many ways, and maybe we wouldn’t have got the same sort of user-generated content platforms, but if it evolved essentially to look more like 230, what we would have now would be — you wouldn’t have anybody to point to. You wouldn’t have Congress — you wouldn’t have people pointing to a law and saying, “Hey, change that law,” because it would be a common law protection. 

 

So there are some downsides to that approach, but I really am a Hayekian in that sense that I love the common law a little bit too much in some ways. 

 

Billy Easley:  I think you do, Neil, I have to say.

 

Stewart Baker:  I’m going to side with Neil on this. I think the problem in 1996 was that AOL and other nascent providers of third-party content didn’t have the budget to defend themselves. But those days are gone. And if Congress had put a 20 year sunset on 230, we could go back to the common law and it probably would end up not so far from where we are now. 

 

Okay, next question.

 

Neil Chilson:  So here would be my proposal. If I had to scrap 230, then what we’d do is we’d create a giant funded litigation defense firm for any startup that wants to apply for it. There we go. That would be a subsidy that I’m sure everybody on this call would love. 

 

Billy Easley:  Yeah.

 

Stewart Baker:  That is a fair point, that this is — it’s only the very biggest guys who have the budget to defend themselves. But they’ll be making the law. 

 

Nate, next question.

 

Nate Kaczmarek:  Okay. So we are running short on time, and there’s one — one of our attendees has posited, I think, four or five questions in the chat. I will ask our team to call on him—he’s got his hand raised—with the condition that he can only ask one of the questions, his best question, to the panel. So please go ahead and unmute. 

 

Caller 1:  All right. Well, thank you. Since time is short, I will ask the shortest one. Would a duty to deal for giant web hosting providers like AWS, and only the giant ones, of course, would that really run into First Amendment problems, because Neil mentioned that, but what about the counterpoint that it is an economic regulation? And it’s a least restrictive economic regulation because it’s a way to increase diversity in social media, and it is an alternative to regulatory meddling and constant moderation decisions which may be very bad for all sorts of reasons. 

 

Stewart Baker:  I think that’s a great question, and I’m going to add to it, just to stick the knife in a little deeper. If you think that it is a violation, please explain why common carrier rules for phone companies are not.

 

Neil Chilson:  We could do common carrier — if you want to do common carrier for social media platforms, be careful what you ask for, is all I’m saying. That is a world that looks really different from — that’s a world where Facebook is just flooded with pornography, frankly.

 

Stewart Baker:  But you know what he’s saying. He’s saying, “I’m allowed to do that in the case of common carriers. Are you telling me that I can’t take the piece of common carrier law I like and apply it to people, and that that suddenly becomes a violation of law?”

 

Neil Chilson:  So I don’t know about the First Amendment implications of that. I haven’t thought through that, actually, at any great length. Duties to deal exist in commercial transactions now, and I think the courts have been exploring those — sorry, it’s not — they’ve been exploring them in the context of the Amazon marketplace. What duties does a marketplace have when it provides a marketplace to a third party who sells something that injures the customer, and the customer can’t find the seller? So it’s not the same as a duty to deal, but I think some of those issues are already coming up. 

 

If you’re talking about the sort of AWS versus Parler thing, I have to look at the marketplace. I’m not going to make a First Amendment claim here, but I’ll just make a policy claim, which is that there’s plenty of alternatives out there. AWS is big, but it’s not — it’s like 30 percent of that market. There’s plenty of other alternatives, and some of them may — I just don’t know how — if this is necessary to solve the problem that we think we have, which is Parler had to go to a different service provider. 

 

Stewart Baker:  The reality is they are still not up, so obviously it was a pretty substantial barrier to Parler finding an alternative. But I’m going to —

 

Neil Chilson: — They had some leadership issues as well, as I understand it, so that may be part of why they’re not up. I think some of their backers are a little concerned, too. And are we going to pass a law that requires backers not to change their minds when a platform goes a different way?

 

Stewart Baker:  Fair enough, but the question was should AWS have a duty to deal in light of the context of what they did to Parler? Billy, you get the last word on this.

 

Billy Easley:  I’m not ready to support a duty to deal with regards to this yet. But I could see a world where these infrastructure companies like AWS are being pressured so much not to host specific platforms or specific speech where maybe the market changes and where the context for evaluating changes. I don’t think we’re there yet, but I understand the concern animating that.

 

Stewart Baker:  So the notion is if there’s a market — if you can define the market broadly because of essentially interlocking social ties that lead to people making the same economic decision, you might say, well, in this case, the market for conservative speech has been more or less eliminated by everybody agreeing that they’re not going to allow it. Yeah.

 

Neil Chilson:  And I would say in a year where FedSoc has had an amazing success on social media, I don’t think that we’re in a world where conservative speech is at a giant risk. 

 

Stewart Baker:  Fair point. Well, I’m disappointed that I was not able to content moderate either of you off this platform, but it was a great, entertaining, and wide-ranging discussion of Section 230. And I hope that everybody has gotten at least better informed about what the issues are. So thanks for tuning in, and thanks to Neil and Billy who did a great job. 

 

Neil Chilson:  It was great fun. Thank you, Stewart.

 

Billy Easley:  Have a good Friday.

 

Nate Kaczmarek:  On behalf of RTP, we just want to thank all of you for your insights today, a great conversation. We look forward to having you back again soon. To our audience, we welcome feedback by email at RTP at www.regproject.org. Thank you all for joining us. Have a great day.

Neil Chilson

Senior Research Fellow

Center for Growth and Opportunity


Billy Easley

Senior Policy Analyst for Technology and Innovation

Americans for Prosperity


Stewart A. Baker

Partner

Steptoe & Johnson LLP



The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the speaker(s). To join the debate, please email us at [email protected].
