Texas Enacts Social Media Censorship Law to Benefit Anti-Vaxxers & Spammers

State legislatures are competing with each other to see who can enact the most ill-advised laws to impose censorship on the Internet. Florida made a splash enacting its social media censorship bill SB 7072, only to have a federal district court immediately enjoin it. Undeterred by Florida’s futility (and perhaps envious of it?), the Texas legislature tried to one-up Florida with HB 20, its own social media censorship bill. Like Florida’s law, Texas’ law is brazenly unconstitutional. I hope the courts are equally dismissive of it.

Overview of the Law

Like other #MAGA-inspired bills, Texas’ law was never intended to survive critical scrutiny. It is purely performative–to show constituents that the legislature hates “Big Tech,” even if the law’s consequences will harm, not benefit, those constituents.

The law has four main provisions:

  • “transparency” requirements that include a long list of mandatory disclosures and statistics about editorial operations.
  • digital due process obligations that require services to provide users with notice and an explanation when the services take action as well as a mechanism to appeal the action.
  • a ban on email service providers blocking spam, coupled with a private right of action and statutory damages…for any spammer whose spam gets blocked. THIS IS THE STUPIDEST POSSIBLE POLICY THE LEGISLATURE COULD ADOPT, AND TEXAS RESIDENTS BOMBARDED BY SPAM WILL BE SHOCKED THAT THEIR LEGISLATURE SCREWED THEM OVER.
  • removal of Internet services’ editorial discretion, a/k/a must-carry obligations.

These provisions are each independently unconstitutional. They are also absolutely terrible policy. Together, they represent a gross abdication of legislative responsibility, a sign that the legislators supporting this law are trying to make the world worse. If you live in Texas, I’m sorry that your governance mechanisms are so broken. What can be done to fix that?

Who is Governed by the Law? 

A “social media platform” has the following 4 elements:

1) an Internet website or application. This is narrower than the Florida law, which included improper entities like “access software providers.”

2) open to the public.

3) allows a user to create an account. I don’t understand how this factor intersects with the “open to the public” factor. Does it mean that a service’s user-generated content must be readable by the public without registration or a fee? Or that accounts must be available to anyone who wants them? If anyone can register for an account, but their content is published behind a registration wall or paywall, is the service still open to the public? If anyone can get an account but must pay for it, is that open to the public? To understand these two elements, I need some concrete examples of account-based services that aren’t “open to the public.”

4) “enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images.” This helps distinguish “social media” services from, say, online banks that also have user accounts. However, any online account registration that lets users publish content to anyone, including the world, seems to meet the definition. So, for example, Amazon should be covered by this law because it lets users post reviews even though few users view that publication capacity as the primary reason to get an Amazon account.

The term “social media platform” does not include:

  • IAPs (“a person providing connectivity to the Internet or another wide area network”)
  • “electronic mail.” I think the legislature meant an email service provider or portions of a service that involve email. Otherwise, there’s a genus/species mismatch–it says “social media platforms” don’t include “electronic mail,” which makes no sense. Note that “electronic mail” is ambiguous–does it include other forms of private messaging, such as instant messaging or services like WhatsApp or Signal? If not, why not?
  • a publisher of content “preselected by the provider” where the UGC functionality is “incidental to, directly related to, or dependent on” the preselected content. I’m not sure what “preselected content” means–if a UGC site has humans prescreen UGC before publishing it, is it no longer a “social media platform”? That can’t be right; giving services an incentive to prescreen would lead to far less content being published, which would run contrary to the purported “pro-free-speech” justification for the law.

The law governs users who meet any of the following:

  • resides in the state,
  • does business in the state, or
  • “shares or receives content on a social media platform in this state.” It adds that the law “applies only to expression that is shared or received in this state.” Putting aside the epistemological question of when content is shared or received in any geographic area (the sketch below illustrates the geolocation guesswork involved), this factor raises serious Dormant Commerce Clause problems if it interferes with the publication of content by a non-Texan that is read exclusively by non-Texans. Texas cannot control how non-Texans talk with each other.
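
To make the scoping problem concrete, here is a minimal sketch of how a service might guess whether an interaction is “shared or received” in Texas. The geolocation lookup is a stub I invented; real IP-to-state mapping is only an approximation, routinely defeated by VPNs, mobile carriers, and travel.

```python
from typing import Optional

def ip_to_state(ip_address: str) -> Optional[str]:
    """Stub standing in for a commercial IP-geolocation database.
    Returns a two-letter state code, or None when the lookup fails."""
    fake_db = {"203.0.113.7": "TX", "198.51.100.9": "CA"}  # invented data
    return fake_db.get(ip_address)

def hb20_may_apply(sender_ip: str, recipient_ip: str) -> bool:
    # The statute reaches expression "shared or received in this state,"
    # so either endpoint resolving to TX (as best the service can guess)
    # pulls the interaction into scope.
    return "TX" in {ip_to_state(sender_ip), ip_to_state(recipient_ip)}

print(hb20_may_apply("198.51.100.9", "203.0.113.7"))   # True: TX recipient
print(hb20_may_apply("198.51.100.9", "198.51.100.9"))  # False: CA-to-CA traffic
```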

The Quantitative Threshold

Like most attempts to stick it to “Big Tech,” the law makes size-based distinctions among Internet services. It only applies to “a social media platform that functionally has more than 50 million active users in the United States in a calendar month.” (What does the word “functionally” do in this definition?)

Earlier this year, Jess Miers and I published an article on how to draft size-based distinctions for Internet services. The Texas approach violates numerous principles we offered, including:

  • it has no phase-in period. MAUs are measured after the month ends, so a service that crosses 50M MAUs for the first time has already broken the law unless it was preemptively in compliance during that month (plus it will take many weeks or months to get into compliance with a complex law like this). I don’t understand why it’s so hard for legislatures to provide a transition period, especially for laws as complicated and wide-ranging as this. Without a transition period, services below the statutory threshold have to anticipatorily implement a full compliance effort to avoid breaking the law–even if they never actually reach the threshold.
  • the monthly measurement period is too short. A service with a traffic spike from a viral hit or a seasonal high would incur all of the compliance costs even if it never reaches that threshold again. The measurement should be averaged over a period long enough to smooth out seasonality and virality (the sketch after this list illustrates the difference).
  • MAU doesn’t have a single well-accepted definition, so it can be gamed. It also creates apples-to-oranges comparisons.
  • The definition is based on MAUs in the US, not MAUs in Texas. So a service could have no MAUs in Texas and still be obligated to comply (see my Dormant Commerce Clause point above).
  • MAU doesn’t link to registered accounts, the precondition for qualifying as a social media platform under the law. So a service with only 1 registered account and 50M MAUs would be governed by the law.
  • It doesn’t specify the boundaries of a social media platform–is it the corporate form (I presume so), a domain name, or something else?
  • The law doesn’t distinguish non-profit organizations from for-profit, so the regulated entities might have minimal or no revenues or profits to afford compliance. For example, consider how Wikipedia would handle compliance with the law’s requirements.
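
To illustrate the measurement-period problem, here is a minimal sketch with invented traffic numbers. Under the statute’s single-month snapshot, one viral month triggers coverage; an averaged measurement over a longer window would not. Nothing below comes from the statute except the 50M threshold.

```python
THRESHOLD = 50_000_000  # the statute's 50M US MAU trigger

# Twelve months of invented US MAU counts for a niche service that has
# one viral month but normally sits far below the threshold.
monthly_maus = [8e6, 9e6, 8e6, 10e6, 9e6, 62e6,
                12e6, 9e6, 8e6, 9e6, 8e6, 9e6]

# Statute as written: a single month over the threshold triggers coverage.
covered_any_month = any(mau > THRESHOLD for mau in monthly_maus)

# Alternative: a trailing twelve-month average smooths out the spike.
average_mau = sum(monthly_maus) / len(monthly_maus)
covered_by_average = average_mau > THRESHOLD

print(covered_any_month)      # True -- one viral month triggers the law
print(f"{average_mau:,.0f}")  # ~13,416,667 average
print(covered_by_average)     # False -- an averaged test would not
```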

The Texas law sets the threshold at 50M MAU, while the Florida law targeted 100M (but didn’t use the term MAU, which made things even more confusing). The Texas law’s lower threshold picks up more Small Tech services. In our article, Jess and I favor running a size-based law through a “test suite” of potentially regulated companies to ensure the law applies to the right entities. Of course Texas didn’t do this, so the legislature had no idea who this law actually affects.

Disclosure Obligations

The law requires social media platforms to “publicly disclose accurate information regarding its content management, data management, and business practices…sufficient to enable users to make an informed choice regarding the purchase of or use of access to or services from the platform.” The law gives apparently NON-EXCLUSIVE specific examples of what these obligations include, so in theory a service could be out of compliance even if it discloses every statutorily enumerated item.

Some of the specific disclosures required by the statute:

  • “the manner in which the social media platform: (1) curates and targets content to users; (2) places and promotes content, services, and products, including its own content, services, and products; (3) moderates content; (4) uses search, ranking, or other algorithms or procedures that determine results on the platform; and (5) provides users’ performance data on the use of the platform and its products and services.” I have no idea what these disclosures might look like. What does it mean to disclose the “manner in which it moderates content”? Would a disclosure like “We use human and automated content moderation” suffice? And what does “performance data” mean in this context?
  • an “acceptable use policy,” including “the types of content allowed on the social media platform,” “the steps the social media platform will take to ensure content complies with the policy,” and how users can report illegal or AUP-violative material.

Twice a year, Internet services must publish a transparency report with about a dozen different categories of details and statistics about the service’s content moderation operations.

I object generally to mandated disclosures by Internet publishers (even if they are publishing UGC). I will detail my objections in my next big paper coming in a few months. Some of the biggest concerns:

  • I don’t think governments can compel publishers to reveal details about their editorial operations. See, e.g., Herbert v. Lando, 441 U.S. 153 (1979) (the First Amendment doesn’t permit a “law that subjects the editorial process to private or official examination merely to satisfy curiosity or to serve some general end such as the public interest”). The law’s pretextual invocation that “social media platforms” are “common carriers” does nothing to save this.
  • I don’t understand how these obligations can be enforced without blatant intrusions into the publishers’ editorial process. Can a plaintiff second-guess the editorial judgments and argue they should have been made or reported differently? That raises obvious Constitutional problems, and it raises privacy concerns about the material available for plaintiff review.
  • If governments can enforce alleged defects in disclosures, this gives them the basis to make pretextual enforcements that are motivated by illegitimate purposes. This is the gist of the Twitter v. Paxton case (involving Texas’ AG Paxton–who will be given the keys to enforcing this law). Paxton has sent Twitter an overreaching CID because, as he publicly declared, he wanted to punish Twitter for deplatforming Trump. The CID superficially conforms to the rule of law because Paxton can claim it’s about enforcing consumer protection laws, but in practice we know that Paxton only chose to send the CID to Twitter for wholly partisan retaliatory purposes. The new disclosure obligations in Texas’ law give extra legitimacy to brazenly censorial and retaliatory investigations.
  • The high costs of the reporting obligations, as imposed on low-revenue or nonprofit services, will drive UGC services out of the industry.

Note that the mandatory disclosures in Florida’s law were part of the overall injunction against the law, but the court didn’t get into many details about their independent constitutional infirmity.

Digital Due Process Requirements

Services must provide a mechanism for the submission of complaints about illegal content or removed content, including a way to track the complaint’s status. Internet services must evaluate the content’s legality within 48 hours (excluding weekends). For removed content, Internet services must notify the affected user, provide an explanation, provide an appeal process, and respond to the appeal (including an explanation if the service reverses its removal) within 14 business days.
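
To see how tight these windows are, here is a minimal sketch, in Python, of computing the 48-hour review deadline with weekend hours excluded. The function name and the hour-by-hour approach are my inventions; the statute supplies only the 48-hour figure and the weekend exclusion, and says nothing about holidays or time zones.

```python
from datetime import datetime, timedelta

def review_deadline(received: datetime, hours: int = 48) -> datetime:
    """Advance the statutory clock one hour at a time, skipping hours
    that land on Saturday or Sunday. Holidays and time zones are
    ignored because the statute doesn't address them."""
    remaining = hours
    current = received
    while remaining > 0:
        current += timedelta(hours=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4 count toward the clock
            remaining -= 1
    return current

# A complaint received Friday, Dec. 3, 2021 at 5 p.m. isn't due until the
# following Tuesday at 5 p.m., because weekend hours don't count.
print(review_deadline(datetime(2021, 12, 3, 17, 0)))  # 2021-12-07 17:00:00
```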

As I’ve explained before, these digital due process requirements are unconstitutional as applied to editorial publishers because they regulate core editorial functions. The Florida court agreed with that conclusion. The requirements also wrongly assume that complaints and appeals will come from good-faith actors; instead, the processes will be flooded by trolls, spammers, and malefactors seeking to weaponize them for lulz or to impose costs on their targets. Plus, the high compliance costs, including the staffing levels needed to provide the quick turnarounds, will further drive services out of the UGC industry.

Eliminating Anti-Spam Efforts by Email Service Providers

The law says: “An electronic mail service provider may not intentionally impede the transmission of another person’s electronic mail message based on the content of the message” unless:

  • it “provides a process for the prompt, good faith resolution of a dispute related to the blocking with the sender of the commercial electronic mail message” or
  • “the provider has a good faith, reasonable belief that the message contains malicious computer code, obscene material, material depicting sexual conduct, or material that violates other law”

This is coupled with a private right of action: statutory damages of the lesser of $10 per impeded email or $25,000 per day of impeding.

Who thought this was a good idea? We want email service providers to do everything they can to combat spam. The idea that the legislature would force reductions in anti-spam efforts is absolutely mind-boggling. The Texas legislature wants to dial anti-spam efforts back a quarter-century, to when spam routinely ranked as users’ top problem on the Internet. Technologists have made substantial headway in curbing the problem, and users nowadays rarely rank email spam as a top Internet problem. The Texas legislature thinks that’s a problem to fix???

If the Texas legislature cared about users, it could have let them opt out of this law–in effect, letting email users empower their service providers to impede email as the users see fit. The law provides no such option. And giving statutory damages and a private right of action to spammers will spur more anti-social behavior by venal and pernicious litigants.

There are some significant ambiguities in the regulation. What does it mean to “impede” email? Does that include killing spam at the server level, putting spam into a user’s spam folder, or putting spam into a “promotions” tab? (The sketch below enumerates these possibilities.) Also, the dispute resolution process (which is incorporated from a 15-year-old law) doesn’t explain how it interacts with the other exception. What counts as a “good faith” resolution of a dispute if the email service provider characterizes the incoming email as objectionable spam, even though it’s not malicious code/obscene/sexual content/illegal? That dispute resolution process will be flooded by spammers, just like users’ email inboxes will be. And if adjudicating the disputes becomes too costly, email service providers will loosen their spam filters to let more spam through. Texas residents, I hope you enjoy getting more spam in your inbox.
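
For concreteness, here is a hypothetical sketch of the three candidate meanings of “impede” from the preceding paragraph, plus the daily damages arithmetic. Everything here is invented for illustration except the statute’s $10-per-email and $25,000-per-day figures.

```python
from enum import Enum

class SpamAction(Enum):
    """Three ways a provider might act on suspected spam; the statute
    never says which of these count as 'impeding' transmission."""
    REJECT_AT_SERVER = "reject during SMTP delivery; the sender gets a bounce"
    SPAM_FOLDER = "deliver, but route to the user's spam folder"
    PROMOTIONS_TAB = "deliver to the inbox, but file under a 'promotions' tab"

def daily_exposure(impeded_emails_today: int) -> int:
    """The private right of action's statutory damages: the lesser of
    $10 per impeded email or $25,000 per day of impeding."""
    return min(10 * impeded_emails_today, 25_000)

# A large provider that filters 1M spam messages a day hits the $25k/day
# cap; a small provider filtering 500 messages faces $5,000 for that day.
print(daily_exposure(1_000_000))  # 25000
print(daily_exposure(500))        # 5000
```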

In addition to violating the email service provider’s First Amendment rights, this provision raises CAN-SPAM and Section 230(c)(2) concerns. And it’s terrible policy.

Must-Carry Obligations

The anti-“censorship” restriction says: “A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: (1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in this state or any part of this state.” “Censor” (as usual, a deceptive misnomer) is defined broadly: “block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.” Consumers cannot waive this provision by contract.

Among other things, the anti-discrimination provisions essentially require consistency in content moderation decisions, something that the Florida law also requires. Consistent content moderation is an oxymoron. It’s not possible.

Limiting the restrictions to “viewpoints” doesn’t really solve any problem. Is anti-vax content a “viewpoint”? This puts Internet services in a terrible bind. They could ban entire categories of discussion, like all discussions about vaccines; but if an Internet service doesn’t ban vaccine discussions entirely (and, presumably, perfectly effectuate that ban), then anti-vax content can’t be touched. Even if a social media service were to cut all chatter about social and political issues (i.e., only discussions about personal and family matters were permitted), if anti-Semitism is a “viewpoint” (it’s the official platform of some political parties), the service still can’t save even personal conversations from degrading into garbage. For more on this point, see my article on the terrible policy outcomes of must-carry laws.

To reinforce the point that the legislature has no interest in the quality of online discourse, the legislature rejected amendments that would have authorized Internet services to remove content related to Holocaust denialism, terrorism, or vaccine disinformation. In other words, the legislature is fine with the unrestricted proliferation of all of these categories of content. Which, if this law survives the Constitutional challenge, they will definitely get.

I’m also confused about the anti-“censorship” restriction based on “a user’s geographic location in this state or any part of this state.” Does this mean that Internet services can’t wall off Texas from the rest of the United States and treat Texan users differently? Any effort to prohibit services from distinguishing between Texans and non-Texans raises substantial dormant Commerce Clause problems by controlling how Internet services cater to non-Texans.

The must-carry provisions have the following exceptions:

  • content “the social media platform is specifically authorized to censor by federal law.” [Not state law?]
  • “is the subject of a referral or request from an organization with the purpose of preventing the sexual exploitation of children and protecting survivors of sexual abuse from ongoing harassment”
  • “directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge” [the last one is odd…why not other professions?]
  • “is unlawful expression” [see my Online Account Termination paper for an explanation of why the line between lawful and unlawful expression is extremely difficult for Internet services to determine, and requiring Internet services to make that distinction precisely sets them up to fail]
  • intellectual property

The law authorizes users to sue for declaratory relief, including attorneys’ fees, and injunctive relief. An Internet service’s violation of a court order is punishable by daily fines. Of course we know that any court order not to “discriminate” among content will fail, so the daily fines are designed to punish the inevitable and unavoidable violations of censorial injunctions.

The attorney general may also enforce the provision (including “potential violations”) and, if an injunction is granted, may recover attorneys’ fees and “reasonable investigative costs.” As discussed above, this is not a hypothetical situation–Texas AG Paxton is sitting on an open-ended CID to Twitter. This law would dramatically increase the AG’s leverage over Internet services by giving the office a pretext to mask its partisan and retaliatory objectives.

Conclusion

HB20 makes numerous classifications among media entities and between different types of content. Each of these classifications creates bases for constitutional challenges:

  • distinguishing social media platforms from other publishers.
  • distinguishing social media platforms whose UGC functionality is merely incidental to the non-UGC content they publish from other social media platforms.
  • distinguishing among social media platforms on the basis of their size.
  • distinguishing email from other forms of private and public messaging.
  • distinguishing the legitimacy of impeding emails on the basis of their content (it treats non-obscene “material depicting sexual conduct” as more disfavored than other content).
  • restricting how Internet services distinguish between Texans and non-Texans.
  • providing preferential treatment for referrals related to sexual exploitation/abuse (this content isn’t necessarily illegal–it depends on exactly what is referred).

This law takes effect in December. We haven’t seen Internet services rushing to comply with it, I believe because everyone assumes it will be struck down as unconstitutional, just like Florida’s law was. The Internet will be in for a shock if the courts don’t strike this law down completely.

As I said with my Florida blog post, I hope those of you rallying for “digital due process” take a hard look in the mirror. Your advocacy is encouraging censorial and clearly unconstitutional laws like this. Having now seen the MAGA weaponization of digital due process in two states, it’s time to rethink the strategy and tactics of demanding digital due process.
