Walking a Fine Line with Content Moderation

One of the larger debates in the final months of Donald Trump’s presidency revolved around his social media presence and how much he should be allowed to say online. On Twitter, for example, he was given free rein for most of his presidency, tweeting a staggering 12,200 times in 2020. In his final months in office, however, Twitter began attaching warning labels to tweets that were factually incorrect, and eventually banned him altogether.

Trump’s progression from warning labels to an outright ban has prompted many to question social media platforms’ content moderation policies, and it has exposed both the complexity of content moderation and how much the field is still evolving.

What is Content Moderation?

Content moderation refers to the screening or monitoring that online companies perform on user-generated content (UGC). It covers both how platforms handle users’ content and when they restrict what users can say.

While social media platforms have recently come under the most scrutiny, they are not the only ones that perform content moderation. Content moderation occurs anywhere that users can leave online comments, including, but not limited to, internet forums, YouTube, blogs, review sites, and video games with built-in messaging. In fact, content moderation occurs in many of the most popular places on the internet.

How do Social Media Companies Moderate UGC?

Social media platforms can moderate user-generated content (UGC) in a multitude of ways, including:

  • In-House: Employing their own staff to perform content moderation.
  • Outsourcing: Paying an external company to perform content moderation.
  • Combination: Combining both external companies and in-house employee moderators. 
  • Community Policing: Allowing users to flag, report, or complain about other user content.
  • Human: Manually monitoring and screening UGC. 
  • Artificial Intelligence (AI): Training AI systems to review UGC. Given the current state of AI technology, most AI systems are supplemented by human moderators. 

Most companies use a combination of these techniques and continually update their content moderation protocols as new information becomes available.
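
To make that combination concrete, here is a minimal sketch of how a hybrid pipeline might route a post. The classifier, thresholds, and review queue are hypothetical illustrations for this article, not any platform’s actual system.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Post:
    post_id: int
    text: str
    user_flags: int = 0  # reports gathered through community policing

def classify(post: Post) -> Tuple[bool, float]:
    """Hypothetical AI check: returns (is_violation, confidence).

    A real system would call a trained model; this stub just keyword-matches.
    """
    banned_terms = {"scam", "violence"}
    hits = sum(term in post.text.lower() for term in banned_terms)
    return hits > 0, min(0.5 + 0.25 * hits, 0.99)

human_review_queue: List[Post] = []

def moderate(post: Post) -> str:
    is_violation, confidence = classify(post)
    if is_violation and confidence >= 0.9:
        # High-confidence AI decisions are applied automatically (speed).
        return "removed"
    if is_violation or post.user_flags >= 3:
        # Low-confidence hits and community reports escalate to humans (accuracy).
        human_review_queue.append(post)
        return "pending human review"
    return "approved"

print(moderate(Post(1, "Totally legit offer, definitely not a scam", user_flags=4)))
# -> pending human review
```

The key design choice in a setup like this is where automated decisions stop and human judgment takes over, which is exactly the trade-off discussed below.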

Understanding the Challenges with Content Moderation

There are a number of challenges that companies must overcome when creating a content moderation system.

The Numbers 

The sheer volume of content that must be reviewed is astronomical. To put it in perspective, here are some quick numbers:

  • As of 2020, 4.2 billion people were active on social media.
  • As of December 2020, Facebook reported 1.84 billion daily active users.
  • 500 hours of video are uploaded to YouTube every minute.
  • In 2016, Facebook reported that more than one million content items were flagged for review in a single day.

These statistics are specific to social media platforms and represent only one of the industries performing content moderation online. The sheer amount of content that must be moderated is constantly growing, making the task of moderation more challenging. 

Policy

Much of content moderation falls into a gray zone: there is no clear line between what should and should not be allowed. Each company must therefore decide where it stands. For example, the Trump presidency raised the question of whether public figures should be moderated differently than ordinary users, and companies made different choices about how and when his content should be moderated.

Speed and Accuracy

When choosing between human and AI moderators, companies are choosing between accuracy and speed. While AI moderators work much faster, they leave far more room for error. AI moderators cannot analyze the context of a post and are therefore often unable to differentiate between a serious post, a satirical one, or a pop culture reference.

Human moderators, by contrast, are much more reliable but work at a much slower pace. This opens companies up to liability: the longer inappropriate posts stay online, the greater their reach and real-world impact can be.
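
A rough back-of-envelope sketch makes the trade-off visible. All of the numbers below are made up for illustration; the point is only that the more of the review load the AI handles on its own, the fewer human moderators are needed, but the more mistakes get applied automatically.

```python
# Illustrative, assumed figures -- not real platform data.
posts_per_day = 1_000_000     # flagged items needing review
human_rate = 200              # posts one moderator can review per day
ai_error_rate = 0.05          # fraction of AI-only decisions that are wrong
human_error_rate = 0.01       # humans are more accurate but slower

for auto_share in (0.0, 0.5, 0.9):   # fraction of posts the AI decides alone
    ai_posts = posts_per_day * auto_share
    human_posts = posts_per_day - ai_posts
    moderators_needed = human_posts / human_rate
    expected_errors = ai_posts * ai_error_rate + human_posts * human_error_rate
    print(f"AI handles {auto_share:.0%}: "
          f"{moderators_needed:,.0f} moderators needed, "
          f"~{expected_errors:,.0f} expected mistakes per day")
```

Under these assumptions, relying entirely on humans requires thousands of moderators and leaves posts sitting in a queue, while pushing most decisions to the AI multiplies the number of wrong calls. Every platform has to pick a point on that curve.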

Keeping Everyone Happy

Regardless of how successful a company’s content moderation system is, it is impossible to keep everyone happy. A company is constantly torn between competing extremes, such as:

  • Keeping users happy by minimizing censorship, while also keeping the government and external pressures happy by moderating responsibly. 
  • Ensuring they don’t violate principles of free speech, while also ensuring they don’t allow negative or inaccurate comments to lead to real-world negative consequences. 
  • Keeping their platforms user-friendly and competitive while also still turning a profit. 

Users: Unexpected Consequences

Arguably the largest challenge for content moderation is the unexpected consequences that users inevitably bring. Take YouTube, for example: when the site was created, it was intended for people to share fun videos with their friends. The creators never imagined it would be used to share violent or pornographic videos. Whether guidelines and policies are vague or users intentionally ignore them, the unexpected consequences of user participation are a universal constant.

Negative Consequences of No Content Moderation

There are numerous negative consequences associated with a lack of content moderation, including: 

  • Cyberbullying
  • Discrimination
  • Racism
  • Spread of false information
  • Unsolicited pornography
  • Violent content
  • Scams and bots
  • Legal issues

These problems spill over into the offline world and cause real damage. Sadly, it typically takes a major real-world event, such as the US Capitol riot in January 2021, to force large companies to revamp their content moderation protocols.

Content moderation policies must continue to evolve as users find new and unexpected ways to use and exploit platforms. As long as users can speak and interact online, content moderators will be necessary.
