7 guidelines for online content moderation


How do we moderate online content without limiting free speech? Alternatively, how can we curb the antisocial aspects of online participation when we want to dig deeper into ‘hot’ issues in engagement projects – issues that can draw on deeply held, emotional views?

Content moderation is vital to any online engagement – from the automated moderation of inappropriate content to the “grey areas” that require human intervention. Effective content moderation supports robust – and reflective – consideration of important public issues and encourages participants’ willingness to get online and join conversations that impact their lives. It also ensures a sense of accessibility and inclusion – paramount to all online engagement.

In his comprehensive webinar, Digital Deliberation, which also gave us insight into criteria for software selection, Dr Crispin Butteriss explains how moderation shapes our experience of digital deliberation – “a social process involving, potentially, a great many people,” he says. “Deliberation requires participants to be exposed to information that is both broad and deep,” he clarifies. “The fact that ‘emotional connection’ is a necessity to spark participation is the reason the ‘design’ and ‘management’ of the space is so critically important,” explains Crispin:

Unlike monologue or debate, dialogue is what happens when participants start to read and respond to each other’s comments; they ask questions; they build on the ideas. They may challenge arguments or assertions, but they do so in order to better understand the rationale or the underlying belief, or background story. There is mutual respect, and there is a focus on “solutioneering”.

Online content moderation, then, is crucial.

A pragmatic set of guidelines and rules, etiquette and sanctions – attuned to the nuances of an issue within any given community context – provides the best possible experience for participants to engage with and explore issues, allowing everyone to have their say without fear, intimidation or retribution. Used to describe the act of “rule-keeping”, moderation ensures participant comments stay within the site moderation rules (is there any bad language? is the comment sexist, racist or homophobic? does it contain any links to inappropriate content?) and checks for hectoring and intolerant behaviour. “Basically, we’re looking for any content that might drive participants away from the process out of fear of being attacked, or just because they don’t see it as a constructive space,” says Crispin.

While software can screen comments for bad language and spam, human moderators pick up more nuanced breaches of the rules. For more insights from Crispin’s webinar, read the Speaker Notes here.

Below are seven rules/guidelines for online content moderation taken from Crispin’s webinar.

1. Acceptable behaviour

You must have a clear set of rules that bound acceptable behaviour for user-generated contributions. These may vary from project to project, but should include references to:

  1. posting personal information,
  2. naming organisational staff, particularly in a negative light,
  3. defamatory content,
  4. intolerance,
  5. acceptable language,
  6. bullying, hectoring and insulting,
  7. external links,
  8. advertising, and
  9. comments on moderation policies and processes.

2. Breaching moderation

You must also have a clear set of sanctions for breaching the moderation rules. For example:

  1. content removal,
  2. content editing,
  3. temporary suspension of access privileges, and
  4. permanent blocking of access privileges.

3. Etiquette

You should consider including a set of guidelines for appropriate etiquette in the context of your particular project. These are, in the main, intended to promote positive behaviours rather than to control poor behaviours, and may include broader instructions like “be respectful” and more specific guidance like “avoid CAPS LOCK”.

4. Post-hoc moderation

Dialogue works best when it is allowed to flow, so you must find a way to use “post-hoc” moderation. That is, moderation AFTER the comment (or content) has been allowed to go live on the site.

5. Protocols around comments

Depending on the perceived “risk” of user-generated content egregiously breaching the site rules, you will need to tighten or loosen the protocols around the “comment review period”. Very low risk issues and groups may require almost no moderation, whereas highly emotional and politically contested issues may require real-time 24/7 human oversight.
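As a rough illustration of how this can be operationalised – the tier names and time windows below are our own assumptions, not something prescribed in the webinar – a risk-tiered configuration for post-hoc review might look like this:

```python
# Hypothetical sketch: risk-tiered post-hoc review windows.
# Comments go live immediately; the project's risk tier controls how
# quickly a human moderator must review them after publication.

from dataclasses import dataclass


@dataclass
class ReviewProtocol:
    tier: str                  # e.g. "low", "medium", "high"
    review_window_hours: int   # maximum time before a human checks the comment
    human_oversight_24_7: bool # whether real-time, round-the-clock oversight applies


# Example tiers -- the thresholds are illustrative assumptions only.
PROTOCOLS = {
    "low": ReviewProtocol("low", review_window_hours=48, human_oversight_24_7=False),
    "medium": ReviewProtocol("medium", review_window_hours=12, human_oversight_24_7=False),
    "high": ReviewProtocol("high", review_window_hours=1, human_oversight_24_7=True),
}


def protocol_for(project_risk: str) -> ReviewProtocol:
    """Pick the review protocol for a project's perceived risk level."""
    return PROTOCOLS.get(project_risk, PROTOCOLS["medium"])
```

A low-risk garden-naming consultation might sit in the "low" tier, while a politically contested planning decision would sit in "high" and get continuous human oversight.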

6. Automated and human filters

Your moderation should include BOTH automated filtering AND human systems. Automated filters are good at picking up blacklisted words and spam, but they are incapable of picking up other poor behaviours.
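As a minimal sketch of how the two layers fit together – the word list, spam heuristic and function names are illustrative placeholders, not Bang the Table’s actual implementation – an automated first pass can route comments into publish, human-review and reject buckets:

```python
# Minimal sketch: automated first pass plus a human review queue.
# The blacklist and the link-counting spam check are illustrative only.

import re

BLACKLIST = {"badword1", "badword2"}        # assumed example terms
SPAM_PATTERN = re.compile(r"https?://\S+")  # crude link-based spam heuristic


def automated_filter(comment: str) -> str:
    """Return 'reject', 'flag', or 'pass' for a single comment."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    if words & BLACKLIST:
        return "reject"                     # clear-cut rule breach
    if len(SPAM_PATTERN.findall(comment)) >= 3:
        return "flag"                       # likely spam: hold for a human
    return "pass"                           # publish, subject to post-hoc review


def moderate(comments: list[str]) -> dict[str, list[str]]:
    """Split comments into published, human-review and rejected buckets."""
    buckets = {"published": [], "human_review": [], "rejected": []}
    for comment in comments:
        verdict = automated_filter(comment)
        if verdict == "reject":
            buckets["rejected"].append(comment)
        elif verdict == "flag":
            buckets["human_review"].append(comment)
        else:
            buckets["published"].append(comment)  # humans still scan these post hoc
    return buckets
```

The point of the sketch is the division of labour: the automated layer handles the unambiguous cases quickly, and everything nuanced – sarcasm, hectoring, intolerance – still ends up in front of a human.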

7. ‘Back up’ processes

Your moderation system should also include “back-up” processes, such as “community flagging”, because your moderators may not be familiar with all of the nuances of the issues under consideration, and may not, therefore, pick up all of the issues.
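One simple way to picture community flagging – the threshold of three reports is an assumption for illustration, not a recommended value – is a counter that escalates a live comment to the human moderation queue once enough participants report it:

```python
# Hypothetical sketch of community flagging: once a live comment
# collects enough reports, it is escalated to the human moderation queue.

from collections import defaultdict

FLAG_THRESHOLD = 3                            # illustrative assumption
flag_counts: dict[str, int] = defaultdict(int)
moderation_queue: list[str] = []


def flag_comment(comment_id: str) -> None:
    """Record a participant's report; escalate when the threshold is reached."""
    flag_counts[comment_id] += 1
    if flag_counts[comment_id] == FLAG_THRESHOLD:
        moderation_queue.append(comment_id)   # a human moderator reviews it next
```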

At Bang the Table, we independently moderate all of our clients’ sites to keep the conversation safe and on topic.

Photo by Brooke Lark on Unsplash

Published: 16 May 2018. Last modified: 31 July 2018.
