The FCC Should Address Distortions of Section 230

Rachel Bovard

This post is part of the “Governing the Internet” blog series, which features a range of viewpoints on topics involving internet regulation.
The series is co-sponsored by the Regulatory Transparency Project and the Federalist Society’s Practice Groups.

Section 230 of the Communications Decency Act, the provision of law imparting broad immunity to America’s tech giants, is approaching its 25th birthday—perhaps its last in its original form. A rigorous debate over the law’s application has captured bipartisan attention on Capitol Hill and in civil society more broadly. This attention has endured despite concentrated efforts to dismiss it.

As reform proposals proliferate in Congress, the executive branch is pursuing its own reform track with an Executive Order released in May. The National Telecommunications and Information Administration (NTIA) has taken the first implementing step, petitioning the Federal Communications Commission (FCC) to issue regulations narrowing the broad interpretation of Sec. 230 currently granted to tech companies—not by Congress, but by the courts. All of these reform efforts share one goal: to make the recipients of Sec. 230’s benefits more transparent and accountable to their users in exchange for the statutory legal privileges they receive.

A return to original intent

Specifically, the petition asks the FCC to use the broad authority granted to it in Sec. 201(b) of the Communications Act to clarify the interaction between the two immunities granted to internet platforms by Sec. 230: subsections (c)(1) and (c)(2). The first immunizes platforms against liability for content posted by third parties on their sites; the second protects the platforms if and when they block or remove content.

The character, nature, and transparency of the content removal protected by (c)(2) are among the key areas addressed by the NTIA’s petition. The petition argues (correctly, in the view of my organization, the Internet Accountability Project) that the statute intended a narrow immunity designed to give platforms the freedom to filter content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” without fear of liability. Unfortunately, that list ends with a catch-all phrase, “otherwise objectionable,” which in practice gives tech platforms discretion to censor anything they so label. Such broad language lends itself to arbitrariness.

In other words, while the law’s intended purpose was to protect the “Good Samaritan” behavior of platforms in blocking and removing smutty content online, the troublesome text of Sec. 230 has had an unintended consequence: it grants Big Tech companies and other online platforms extraordinary legal protection from lawsuits that would challenge their suppressive, discriminatory power over the lawful viewpoints of citizen-users.

The legislative history of the statute clarifies the conflict between intent and outcome. Sec. 230 itself, written originally as an amendment to the Communications Act of 1995, was titled the Online Family Empowerment Act and described by the House Rules Committee as an amendment “protecting from liability those providers and users seeking to clean up the Internet.”

Sec. 230 was a necessary protection for an infant internet of small companies, which would otherwise have faced massive liability for removing bad actors from their sites. Aided by Sec. 230, however, the Internet has grown and changed dramatically, from a collection of small online chat rooms serving 36 million users to a network dominated by a handful of America’s largest companies hosting 4.8 billion people online. Sec. 230’s protections have been, and in some form likely remain, critical to this growth.

Sec. 230 has likewise evolved from humble beginnings into a judicially contorted statute whose application protects conduct wildly beyond what the text of the law suggests. Law professors Danielle Keats Citron and Benjamin Wittes refer to Sec. 230’s judicial interpretation as a “fortress built in the courts,” one that shields platforms from all kinds of accountability for unlawful activity.

As the NTIA’s petition emphasizes, courts have now construed Sec. 230 to grant immunity from liability for breach of contract, consumer fraud, revenge pornography, violations of anti-discrimination civil rights obligations, and even the facilitation of terrorist activity.

Sec. 230 and the downstream effects of viewpoint bias

Sec. 230’s protections also encompass viewpoint-based content removal, despite the statute’s intention for the internet to develop as a “forum for a true diversity of political discourse” and “myriad avenues for intellectual activity.” The broad interpretation of tech companies’ legal immunity that allows them to discriminate on the basis of viewpoint, combined with the market dominance of a few large companies, presents an insidious threat to our free society.

Their market dominance is perhaps the decisive factor in understanding how harmful viewpoint discrimination can be to individual citizens and to our society more broadly. Google filters information for roughly 90 percent of the world’s search traffic. Facebook is the primary news source for over half of Americans. Twitter captures a smaller market share, with 22 percent of Americans using the site, but stands out for having the most news-focused users—according to one survey, around seven in ten adult Twitter users turn to the site for news.

When these platforms engage in viewpoint moderation that ranges far beyond content properly understood to be “obscene, lewd, lascivious, filthy, excessively violent, harassing” and so on, the downstream effects are significant, distorting not just the free expression of individuals but also independent thought, viewpoint diversity, market access, and behavior.

Consider that these platforms now determine what constitutes “appropriate” medical information for their users—even when it is presented by board-certified physicians. Earlier this year, Facebook banned anti-lockdown protest content where it “violated social distancing rules,” but embraced no such pseudo-governmental enforcement role against other protest activities organized on its platform in violation of local ordinances. Google-owned YouTube has decided that an interview with Dr. Scott Atlas, a neuroradiologist, professor at Stanford University Medical Center, and advisor to the White House Coronavirus Task Force, is “anti-science” for its data-driven discussion of the efficacy of broad lockdowns.

Facebook has also acted as the arbiter of due process rights for Kyle Rittenhouse, the teen charged with fatally shooting two people amid the riots in Kenosha, Wisconsin. Rittenhouse’s attorney says his client acted in self-defense, but Facebook has blocked all posts praising or supporting him and taken down links soliciting contributions to his legal defense. Facebook, by its own admission one of the most powerful speech companies in the world, has already declared him guilty of mass violence, and has restricted the use of its platform to reflect its subjective, extralegal judgment.

Google has unprecedented power to filter information for most of the planet, and seems to flex this muscle with impunity. A 2019 investigation by the Wall Street Journal found that Google “made algorithmic changes to its search results that favor big business over smaller ones,” and modified search results around subjects like abortion and immigration. In June, Google demonstrated how much power it has to demonetize entire news sites for minor violations of its ad policies. In July, the search engine inexplicably stopped presenting search results for several leading conservative websites. Breitbart News has presented analysis suggesting conservative sites are routinely downgraded.

The gatekeeping function of these mega-platforms is growing broader and more opaque as the upcoming election approaches. Big Tech’s influence on voter behavior has already been well-documented. The center-left-leaning research psychologist Dr. Robert Epstein testified before Congress that Google “displays content to the American public that is biased in favor of one political party.” He estimated that Google’s search behavior, which he tested against other search engines in the weeks leading up to the 2016 election, swung as many as 2.6 million votes to Clinton. He also estimates that Google’s algorithmic filtering has “been determining the outcomes of upwards of 25 percent of the national elections worldwide since at least 2015.”

Facebook and other tech platforms are reportedly “war gaming” different election outcomes, as well as meeting with government officials about “potential threats to election integrity.” “Digital platforms,” according to Axios’ reporting, are now as important as state and local election agencies in “protecting public confidence” and preserving “faith in democracy.”

In other words, mega-corporations have as much power as the government itself—and in some ways more, because theirs is unchecked and unaccountable. The decision of government officials to meet with these platforms regarding election integrity is an admission of how much power they now hold; so much so that the decisions of a handful of unelected CEOs could alter or distort voter behavior in a free society.

Sec. 230 is not an entitlement

The legality of content moderation itself is not at issue—what is at issue is the profound impact these actions have on free thought and expression when taken at the scale at which these companies operate. A single algorithmic decision made by a private corporation, accountable to no one, changes what kinds of viewpoints and information are available to billions of people around the world. The downstream effects alter the nature of truly independent thought and behavior in ways that should concern both Congress and the executive branch—particularly since it is a law Congress passed that advantages these activities.

Yet DC’s tech establishment frequently rejects this argument, choosing instead to focus on the First Amendment right of corporations to suppress whatever content they choose, never acknowledging that these choices, when made at scale, have enormous ramifications. Rather, they suggest that those who are unsatisfied simply use or build alternatives (though how they treat those alternatives, and the people who build and use them, reveals the suggestion to be less than sincere).

But this argument intentionally sidesteps the fact that Sec. 230 is not required by the First Amendment, and that its application privileges tech platforms’ First Amendment behavior in a way unique among media corporations. Newspapers also have a First Amendment right to publish what they choose—but they are subject to defamation and libel laws for content they write or merely publish. Media companies likewise make First Amendment decisions subject to a thicket of laws and regulations that do not similarly encumber tech platforms.

In other words, Sec. 230 is, as Santa Clara law professor Eric Goldman has put it, “an implicit financial subsidy” to the tech platforms. Ultimately, the Sec. 230 debate centers on what type of accountability these platforms should exhibit in exchange for this government-mandated privilege—none, or some. For 24 years, neither Congress nor regulatory agencies have required any quo for what is a substantial quid. That is now changing.

Agency authority and the NTIA Petition

The NTIA petition seeks to ground Sec. 230’s special immunity in the language of the statute as written, rather than in the expansive, judicially bloated interpretation that covers behavior far removed from the free expression the statute was envisioned to foster. It demands a measure of accountability and transparency in exchange for a government benefit. While there are those who might use this authority as a cudgel to suppress speech, many others seek to return the statute to its original intent of facilitating more speech.

The inevitable question about the scope of the FCC’s jurisdiction arises, of course. But it is reasonable to suggest that the FCC’s authority over Internet “information services” could extend to the rulemaking requested by NTIA, and that such a rulemaking could be consistent with existing law. The Supreme Court’s decision upholding the FCC’s classification of broadband as an information service in National Cable & Telecommunications Assn. v. Brand X, cited and followed in the 2019 D.C. Circuit ruling in Mozilla v. FCC, could support that approach. In addition, clarifying the vagaries and inconsistencies in Sec. 230’s meaning and application would not be inconsistent with the FCC’s light-touch philosophy on online issues.

That such actions are necessary is quickly becoming inarguable. Sec. 230, as currently interpreted, has fostered the growth of companies now so powerful that a single algorithmic change can, as one commentator put it, select “winners and losers in every sphere of public life, from markets to political contests.” Sec. 230 was not designed to advantage this behavior, or to entitle it.

As Scott Cleland noted recently, “America is the only country that protects platforms from people, but not people from platforms.” Sec. 230 was designed to optimize free speech for users, and the ability of platforms to provide a forum for safe and lively debate. Its current application, however, has reversed its intended effect, prioritizing the speech rights of platforms over and above those of their users—and at a scale that distorts information access, free thought, and market access for billions. It’s time to expect more from platforms that have been given plenty. The NTIA should be applauded for acting with the diligence and urgency that this issue requires.

Rachel Bovard

Senior Director of Policy

Conservative Partnership Institute



The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the author(s). To join the debate, please email us at [email protected].
