The social media giants are walking a fine line on content moderation. On one side, they face criticism from those who want hate speech removed and the spread of false information restricted on their platforms. On the other, they face charges of censorship. The pandemic has added pressure as misinformation, hate speech and human rights protests spread and everything becomes partisan.
Twitter has taken significant steps to update its policies, banning paid political promotion, implementing fact-checking and working to flag content deemed potentially harmful. Facebook is another story. It has been slow to make changes despite criticism of the political ads on its platform, including those that contain lies, and of its lax hate speech policies, despite recent virtual walkouts by employees, and despite major companies pulling their ads in July.
Social media’s influence is ubiquitous and growing. Critics say that, given this influence and the consequences of spreading false information or violence-promoting content, the platforms should at least contextualize, if not censor, such material. Those who want social media to be democratic and inclusive believe that banning hate speech against certain groups and resisting those who want to buy political clout are reasonable requests. They ask that the same moral codes we use in real life apply online. Twitter has been accommodating. TikTok, meanwhile, is getting pushback from several countries that want it banned for security reasons.
Many argue that for these platforms it is less a moral debate than a matter of the financial and political influence they can wield by allowing certain content to remain on their sites. Others believe Facebook’s influence and wealth are enough to withstand the blowback against its policies. For now, only time and a rapidly changing global climate will determine the direction in which social media evolves.