We now know that the insurrection of January 6, 2021 was organized largely online, with loose networks of Trump-inspired radicals hatching a plan to infiltrate the Capitol and harm our representatives, including the Vice President. Yet nearly a year after that incident, Congress still hasn’t figured out how to handle dangerous, conspiratorial speech online.
Two bills in particular would make content moderation more difficult. The “American Innovation and Choice Online Act” (HR 3816) and the “Ending Platform Monopolies Act” (HR 3825) require that major platforms not discriminate among “similarly situated business users.” This would prevent Facebook or YouTube from removing or downranking hate speech, conspiracy theories, or insurrectionist content (such as Alex Jones’ Infowars or Parler), because doing so would “discriminate” against those apps. Apple would likewise be required to carry Parler, Gab, and 4chan in the App Store, since excluding them would “discriminate among similarly situated business users.”
These bills would also limit technology platforms’ ability to elevate important information for the public benefit. Google and Facebook periodically promote their own tools to distribute public health and vaccine information, help consumers find and patronize minority-owned and local businesses, and share polling place information and AMBER and crisis alerts. That could become impossible should these bills pass.
Instead of seeking a solution where no problem exists, we should be incentivizing these companies to police their own platforms more rigorously. As currently written, HR 3816 and HR 3825 do nothing to stop another January 6 from happening. Congress can and should work to ensure that the type of online activity that fueled Jan. 6 is not allowed to propagate again.