Everything in Moderation

"Wisdom hath her excesses, and no less need of moderation than folly." Michel Eyquem De Montaigne

On four types of moderation...

October 13, 2003

There are generally considered to be four rough categories of post-level (rather than user-level) moderation system operating on the net today: pre-moderation, post-moderation, reactive moderation and distributed moderation.

  • Pre-moderation
    Because of legal anxieties, some sites and mailing lists operate on the principle that every piece of user-generated content should be checked by a moderator (or sometimes - in extreme cases - a lawyer) before it goes live. As a rule, this method of moderation is the death of an online community, but there are times when it's the best way of handling user-generated content that isn't specifically community-based (for example Amazon's product reviews and IMDb's film reviews), and times when it is simply too dangerous to use any other kind of moderation scheme. One form of danger concerns liability: some message boards - particularly those that concern themselves with topical issues or celebrities - are prone to libel and can be a source of legal anxiety for the organisation that hosts them (particularly if it's a relatively large organisation with enough money to make it worth suing). Other kinds of danger are more overtly unpleasant - message boards and mailing lists aimed at children are likely to require at least some form of pre-moderation-based management. Under these circumstances the high cost of pre-moderation can be a significant disincentive to building online communities of these kinds.
  • Post-moderation
    The big peril of pre-moderation is that it kills online communities stone dead. The immediacy that people want when they press their submit button is fundamental to all online communities and to most sites based around user-generated content. That's where post-moderation comes in. Post-moderation is again based on the assumption that - for security or legal reasons, or because of behavioural problems - every piece of user-generated content needs to be checked, but rather than checking it all before it goes live, it is instead checked as soon as possible afterwards. It's not as secure an approach as pre-moderation - after all, dubious content will be live on your site - but it does give communities space to breathe and users the instant feedback they need when they put something online. It's worth remembering, however, that every post still has to be read and checked - and that's still profoundly time-consuming and expensive.
  • Reactive moderation
    Reactive moderation is based on the assumption that if something bad is happening on a site, the users will spot it quickly and can alert the moderators. This is becoming by far the most common form of moderation for message boards in particular, because the cost of maintaining pre- or post-moderation is so extreme and because the legal situation seems increasingly to be based around the responsibility of community moderators to remove dubious content, rather than to prevent it being posted in the first place. It can also be more responsive than post-moderation, because only the trouble-generating content needs to be checked and because your community can direct you straight to the problematic areas. You are - however - relying on the very people you least want to see abusive content to tell you when they've found it, and not all organisations are comfortable with that - particularly the highly brand-conscious.
  • Distributed moderation
    Distributed moderation is - for the most part - not something that companies tend to rely on as yet. Fundamentally, the principle that a community can self-moderate and collectively decide what's appropriate and inappropriate behaviour for itself can seem a worrying jump in the dark for a company to make, so for the most part distributed moderation of any kind consists of content-rating schemes overlaid with aspects of the other moderation systems. Prime examples of this kind of distributed rating system are Slashdot and Kuro5hin. (A rough sketch of how all four approaches differ in practice follows this list.)
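To make those differences concrete, here's a very rough sketch of the four approaches as filters over the same queue of posts. This is illustrative Python, not any real system's code - the class, function names and thresholds are all mine:

    # A rough sketch, not any real system: the four moderation modes
    # expressed as filters over the same queue of posts.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        body: str
        approved: bool = False  # set by a moderator (pre-moderation)
        flags: int = 0          # reports from readers (reactive)
        score: int = 0          # community rating (distributed)

    def pre_moderation(posts):
        # Nothing appears until a moderator has approved it.
        return [p for p in posts if p.approved]

    def post_moderation(posts, review_queue):
        # Everything appears immediately; a moderator then works
        # through the review queue and removes anything dubious.
        review_queue.extend(posts)
        return posts

    def reactive_moderation(posts, flag_threshold=3):
        # Everything stays up unless enough readers complain;
        # only flagged posts ever reach a moderator at all.
        return [p for p in posts if p.flags < flag_threshold]

    def distributed_moderation(posts, score_threshold=1):
        # Slashdot-style: readers rate posts and each reader
        # filters by score; there may be no central moderator.
        return [p for p in posts if p.score >= score_threshold]

The economics fall straight out of the sketch: in the first two modes a moderator touches every post, in the third only the flagged ones, and in the fourth potentially none at all.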

Comments

Michael Hardner said:

My experience is that within a community, an anonymous reactive moderator chosen from within the site works best. As with all leaders, the rabble will tire of him/her and he/she will have to be replaced.

If you think about it, this is how our nations work too...


Eli the Bearded said:

Maybe you are just limiting yourself to website moderation, but Usenet has had moderation for a very long time.

Some noteworthy Usenet moderation schemes:

1) A single pre-moderator. Originally the most common method, but it does not scale very well.

2) Team pre-moderation. I think this is most common now for non-announce groups.

3) Self-moderation. Used successfully in several places, but generally frowned upon, this requires participants to approve posts themselves. Most useful for stopping drive-by postings and naive crossposts.

4) ARMM. Automatic Retroactive Minimal Moderation.
This is a bot that post-moderates when certain conditions are met. For example, if the subject doesn't include a particular word, cancel the post. Though ARMM is very well known, I don't know if any group other than alt.sex.cthulhu ever used it officially; it's famous mainly because the first version was poorly written and went on a posting rampage.

5) Robot assisted pre-moderation. Often used with method (1) or (2) to implement whitelists, or other automatic pre-screening. When I moderated alt.sex.stories.moderated I used this.

6) NoCeM. It's pronounced "no-see-'em"; I'm not sure if it ever stood for anything. It is a protocol for distributed opt-in cancelling. The opt-in can be at the user or spool level for news. The source can be automatic or by human hand. The closest thing outside of news might be the Razor spam-signature service. The way NoCeM works is that you select a source or sources for NoCeM messages and then your software blocks everything mentioned in them.

There are probably some other methods that I'm not thinking of now.
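Eli's scheme (4) is worth pinning down, since the whole of ARMM amounts to a conditional cancel. A loose sketch in Python - the names and the required word are mine, and a real bot would post proper Usenet cancel control messages rather than call a stub:

    # Loose sketch of an ARMM-style cancel-bot: cancel any post whose
    # Subject lacks a required keyword. All names here are illustrative.
    REQUIRED_WORD = "CTHULHU"                 # hypothetical group convention
    BOT_ADDRESS = "armm-bot@example.org"      # hypothetical bot address

    def should_cancel(post):
        # Never react to the bot's own output - the original ARMM's
        # "posting rampage" was exactly this sort of feedback loop.
        if post["from"] == BOT_ADDRESS:
            return False
        return REQUIRED_WORD not in post["subject"].upper()

    def run(posts, issue_cancel):
        # issue_cancel stands in for posting a real cancel message.
        for post in posts:
            if should_cancel(post):
                issue_cancel(post["message-id"])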
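Scheme (6) is similarly compact at its core: subscribe to issuers you trust and hide whatever their notices list. Another loose sketch under the same caveats, with signature checking omitted (real NoCeM notices are PGP-signed):

    # Loose sketch of NoCeM-style filtering: collect notices from the
    # issuers you've opted in to, then hide every article they list.
    TRUSTED_ISSUERS = {"spam-hunter@example.org"}   # hypothetical issuer

    def hidden_ids(notices):
        # Each notice is a pair: (issuer, list of message-ids to hide).
        hidden = set()
        for issuer, message_ids in notices:
            if issuer in TRUSTED_ISSUERS:
                hidden.update(message_ids)
        return hidden

    def visible(articles, notices):
        hidden = hidden_ids(notices)
        return [a for a in articles if a["message-id"] not in hidden]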

fluffy said:

Slashdot and Kuro5hin are also perfect examples of how distributed moderation fails miserably as soon as the ratio of assholes to legitimate users exceeds the trust ratio coded into the system. (In Slash, it's essentially 1:1 - one good moderation cancels out one bad moderation - while in Scoop it's 1:5 - five good moderations cancel out one bad moderation.)

Also, the "asshat quotient" seems to follow the opposite behavior of normal statistical samples; as the community gets larger, the signal:noise ratio (in both comments and moderation) gets lower.

Tim said:

As fluffy hints, part of the problem with distributed moderation is that the system partly relies on some form of rating / reputation system. These expose the site (assuming we are talking mainly about message-board moderation) to the possibility of gaming.

Reactive moderation, while cheaper, also exposes the site to the possibility of brand damage, which is a very different issue from legal obligations and exposure.

Richard said:

Please, what is the full etymology of the word 'moderation'? Thanks, Richard.

-Rod Kratochwill- said:

To know what's what.
