On four types of moderation...
October 13, 2003
There are generally considered to be four major (rough) categories of post-level (rather than user-level) moderation systems operating on the net today. These categories are pre-moderation, post-moderation, reactive moderation and distributed moderation.
Because of legal anxieties, some sites and mailing lists operate on the principle that every piece of user-generated content should be checked by a moderator (or sometimes - in extreme cases - a lawyer) before it goes live. As a rule this method of moderation is the death of an online community, but there are times when it's the best option: (i) when the content isn't specifically community-based (for example Amazon's product reviews and IMDB's film reviews), or (ii) when it's simply too dangerous to use any other kind of moderation scheme. One form of danger concerns liability: some messageboards - particularly those that concern themselves with topical issues or celebrities - are prone to libel and can be a source of legal anxiety for the organisation that hosts them (particularly if it's a relatively large organisation with enough money to make it worth suing). Other kinds of danger are more overtly unpleasant - messageboards and mailing lists aimed at children are likely to require at least some form of pre-moderation. Under these circumstances the high cost of pre-moderation can be a significant disincentive to building online communities of these kinds.
The big peril of pre-moderation is that it kills online communities stone dead. The immediacy that people want when they press their submit button is fundamental to all online communities and to most sites based around user-generated content. That's where post-moderation comes in. Post-moderation again assumes that - for security reasons, legal reasons or behavioural problems - every piece of user-generated content needs to be checked, but rather than checking posts before they go live, they are checked as soon as possible afterwards. It's not as secure an approach as pre-moderation - after all, dubious content will be live on your site for a time - but it gives communities space to breathe and gives users the instant feedback they need when they put something online. It's worth remembering, however, that every post still has to be read and checked - and that's still profoundly time-consuming and expensive.
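The difference between the two approaches comes down to when a post becomes visible relative to when it joins the review queue. A minimal sketch in Python (all class and method names here are illustrative, not any real system's API):

```python
# Illustrative sketch: pre-moderation holds posts back until approved;
# post-moderation publishes immediately but still queues every post for review.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    body: str
    visible: bool = False


class PreModeratedBoard:
    """Nothing is visible until a moderator approves it."""

    def __init__(self):
        self.queue = []   # posts awaiting review
        self.live = []    # approved, publicly visible posts

    def submit(self, post):
        self.queue.append(post)   # the user sees nothing yet

    def approve(self, post):
        post.visible = True
        self.queue.remove(post)
        self.live.append(post)


class PostModeratedBoard:
    """Everything goes live immediately, then joins the review queue."""

    def __init__(self):
        self.queue = []
        self.live = []

    def submit(self, post):
        post.visible = True       # instant feedback for the user
        self.live.append(post)
        self.queue.append(post)   # still read and checked later

    def reject(self, post):
        post.visible = False
        self.live.remove(post)
        self.queue.remove(post)
```

Note that in both cases the moderators' queue contains every single post - which is exactly why both schemes are so expensive to run.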
Reactive moderation is based on the assumption that if something bad is happening on a site, the users will spot it quickly and can alert the moderators. This is becoming by far the most common form of moderation for messageboards in particular, because the cost of maintaining pre- or post-moderation is so extreme and because the legal situation increasingly seems to turn on the responsibility of community moderators to remove dubious content, rather than to prevent it being posted in the first place. It can also be more responsive than post-moderation, because only the trouble-generating content needs to be checked and because your community can direct you straight to the problematic areas. You are, however, relying on the very group of people you least want to see abusive content to tell you when they've found it - and not all organisations are comfortable with that, particularly the highly brand-conscious ones.
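The cost saving comes from the review queue shrinking to only what readers have reported. A sketch of that flagging mechanism (the class names and the one-report threshold are assumptions for illustration):

```python
# Illustrative sketch of reactive moderation: everything goes live at once,
# and only user-flagged posts ever reach the moderators' queue.

class ReactiveBoard:
    FLAG_THRESHOLD = 1  # how many reader reports before moderators look

    def __init__(self):
        self.live = []    # all posts are publicly visible immediately
        self.flags = {}   # post id -> number of user reports

    def submit(self, post_id, body):
        self.live.append((post_id, body))
        self.flags[post_id] = 0

    def flag(self, post_id):
        """Called when a reader reports a post as abusive or dubious."""
        self.flags[post_id] += 1

    def review_queue(self):
        """Moderators see only what the community has flagged."""
        return [pid for pid, n in self.flags.items()
                if n >= self.FLAG_THRESHOLD]
```

The trade-off the paragraph describes is visible in the code: `review_queue` stays small and cheap, but nothing enters it until a reader has already seen the offending post.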
Distributed moderation is - for the most part - not something that companies tend to rely on as yet. Fundamentally, the principle that a community can self-moderate and collectively decide what is appropriate and inappropriate behaviour can seem a worrying leap in the dark for a company to make, so in practice distributed moderation usually consists of content-rating schemes overlaid with aspects of the other moderation systems. Prime examples of this kind of distributed rating system are Slashdot and Kuro5hin.
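The Slashdot-style scheme can be sketched in a few lines: community moderators nudge a post's score up or down, and each reader filters the board at a threshold of their own choosing. The -1 to 5 score range mirrors Slashdot's; everything else here is an illustrative assumption, not a description of Slashdot's actual implementation:

```python
# Illustrative sketch of a distributed content-rating scheme:
# readers score posts, and each reader browses at a chosen threshold.

class RatedBoard:
    def __init__(self):
        self.scores = {}  # post id -> current community score

    def submit(self, post_id, initial_score=1):
        self.scores[post_id] = initial_score

    def moderate(self, post_id, delta):
        """A community moderator raises or lowers a post's score,
        clamped to the -1..5 range."""
        self.scores[post_id] = max(-1, min(5, self.scores[post_id] + delta))

    def browse(self, threshold=0):
        """Each reader filters the board at their own threshold."""
        return [pid for pid, s in self.scores.items() if s >= threshold]
```

Nothing is ever deleted in this sketch - poorly-rated content simply sinks below most readers' thresholds, which is why companies typically overlay it with one of the other moderation systems rather than trusting it alone.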