In the days leading up to the U.S. Capitol insurrection on January 6, social media platforms were flooded with hate speech and misinformation. In the months prior, then-President Donald J. Trump had also denounced the content moderation practices of private companies and their use of Section 230 protections.
It took only days after armed mobs stormed the U.S. Capitol for platforms like Facebook and Twitter to decide that Trump's tweets were more than just "saber rattling."
Relying on Section 230 of the Communications Decency Act for immunity from civil suit, Twitter permanently banned President Trump's account, cutting off his contact with 88 million followers, and suspended thousands of conservative social media accounts. Facebook banned Trump's account "at least until his term was over." Google and Apple removed the conservative-leaning social networking service Parler from their app stores, and Amazon Web Services denied Parler access to its cloud network, forcing Parler to shut down for a time.
To many, January 8 seemed two days, two years, or two decades too late. Nonetheless, the question remains: Should free speech be regulated online? And if so, what should the content moderation practices of private companies be? Further, which voices should be subjected to greater scrutiny, and will those from more marginalized populations face disproportionate questioning?
In this episode of Tech Tank, Nicol Turner Lee speaks with David Johns of the National Black Justice Coalition and CTI scholars John Morris and Tom Wheeler.