On stealth moderation or "Blame the technology"...
October 18, 2003
One of the biggest problems with finding ways to moderate users is how to handle the reactions of the people you moderate. If a user is banned or one of their posts is deleted then - for the most part - it's a total fantasy to expect that they'll look back at their actions with shame, accept that the response was justified, and move on to other people's sites having learned their lesson and ready to operate more responsibly. For the most part, deleting posts and banning users is considered 'unfair', 'excessive' or even an overt act of aggression against the user concerned - no matter how appalling their behaviour has been. Some users genuinely believe that their activities online have no consequences and hence that they cannot be held responsible for them.
If users believe themselves to have been 'unfairly attacked', then they'll respond in kind - a user who feels wronged will often use every mechanism at their disposal to make their position clear to the rest of the community. Their aggressive actions will be stepped up, their contributions will become more confrontational and, if they've been banned, they'll try to find every possible way of regaining access - whether by re-registering under a different user name (often with a free e-mail address), using other computers or changing ISPs (to circumvent IP banning), or by harassing other members of the community who they feel have been complicit in the action 'against them'.
Given that there are so many ways in which a user can cause problems for a community, and given that it's extremely difficult to ban users outright, the question for people who run online communities has to be how to avoid creating situations in which users feel they have an axe to grind. One approach is purely social - and brings up the non-technical aspects of moderation. It's important to have a clear and explicitly stated set of rules that establishes what is and isn't acceptable behaviour, a clear set of procedures that are followed when a user misbehaves, and a clear path for appeal and rehabilitation that makes punishments easily understood and non-final. Having the patience to explain this process to users is a necessity, and you're quite likely to find discussion of it becoming a staple part of the community itself - which can be quite wearing and distract from the community's ostensible purpose - but fundamentally it will save you considerable time in the long term.
Another technique is purely technical and is based around finding ways to make users go away on their own, to leave your community without having to be banned. If it sounds duplicitous, that's because it is duplicitous, but it can work extremely well. The technique is well described by Philip Greenspun halfway through Chapter 15 of Philip and Alex's Guide to Web Publishing:
I felt humiliated by the situation but for a variety of annoying reasons, it was taking me months to move my services to Oracle. Then it hit me: Sometimes a system that is 95 percent reliable is better than a system that is 100 percent reliable. If Martin was accustomed to seeing the system fail 5 percent of the time, he wouldn't be suspicious if it started failing all of the time. So I reprogrammed my application to look for the presence of "Martin Tai" in the name or message body fields of a posting. Then Martin, or anyone wanting to flame him, would get a program that did
ns_write "please wait while we try to insert your message ..."
ns_write "... deadlock, transaction aborted. Please hit Reload in five or ten minutes."
The result? Martin got frustrated and went away. Since I'd never served him a "you've been shut out of this community" message, he didn't get angry with me. Presumably inured by Microsoft to a world in which computers seldom work as advertised, he just assumed that photo.net traffic had grown enough to completely tip Illustra over into continuous deadlock.
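Greenspun's trick generalises beyond his particular setup: intercept the posting path, and when the submission matches the targeted user, serve a plausible-looking transient failure instead of committing the post. A minimal sketch in Python - the in-memory post list stands in for the real database, and the error text simply echoes the quote above; none of this is Greenspun's actual code:

```python
# Names that trigger the fake failure (taken from the quote above).
BLOCKED_TERMS = {"martin tai"}

posts = []  # stands in for the real database table


def submit_post(author: str, body: str) -> str:
    """Insert a post, or serve a fake transient error for blocked content."""
    text = f"{author} {body}".lower()
    if any(term in text for term in BLOCKED_TERMS):
        # Mimic the kind of failure the user already sees occasionally,
        # so the targeted block is indistinguishable from normal flakiness.
        return ("please wait while we try to insert your message ...\n"
                "... deadlock, transaction aborted. "
                "Please hit Reload in five or ten minutes.")
    posts.append((author, body))
    return "Message posted."
```

Note that the check covers both the author's name and the message body, so posts flaming the blocked user fail in exactly the same way - just as in Greenspun's version.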
This approach works extremely well in a whole variety of circumstances. At a company I worked with, we would mark particularly troublesome users with a flag on their user record, and then whenever they tried to use the website we'd put a random delay between their request and the page being returned. After a while the site became functionally unusable for them and they'd simply leave. On the web this kind of mechanism could easily be circumvented by signing in under a different user name - so we built it in such a way that it would leave a cookie on their browser that wasn't attached to their user name but was set when that user name logged in. The cookie would last as long as it was able, and any user logged into the board via that browser would experience the same delays. The effects were dramatic and highly successful - bad users would leave out of frustration without causing a fight. The service simply wasn't particularly good as far as they were concerned.
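The flag-plus-delay scheme can be sketched roughly like this - the record layout, cookie name and delay ceiling are my own illustrative guesses, not the company's actual implementation. The function returns the number of seconds the server should stall before responding:

```python
import random

TROUBLE_COOKIE = "t_flag"  # hypothetical cookie name
MAX_DELAY = 20.0           # seconds; an illustrative ceiling

# Stand-in for the user database, with the troublemaker flag on the record.
users = {"regular": {"flagged": False}, "troublemaker": {"flagged": True}}


def throttle(username: str, cookies: dict) -> float:
    """Return how long to stall this request, tagging flagged browsers.

    Either a flagged user record or a browser cookie left behind by a
    flagged user's earlier session triggers the delay; the server then
    sleeps for the returned number of seconds before serving the page.
    """
    flagged = users.get(username, {}).get("flagged", False)
    if flagged:
        # Tag the browser itself, so re-registering under a fresh user
        # name on the same machine doesn't escape the slowdown.
        cookies[TROUBLE_COOKIE] = "1"
    if flagged or cookies.get(TROUBLE_COOKIE) == "1":
        # A random, variable stall: the site just feels slow and flaky
        # rather than obviously broken for one particular account.
        return random.uniform(1.0, MAX_DELAY)
    return 0.0
```

The key design choice is that the cookie is tied to the browser rather than the account, which is what defeats the re-registration workaround described above.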
One problem with this approach - of course - is that it goes against the nature of established brands and service-providers to purposefully break their service for some users. It's always possible that it might affect how their brand is perceived and generate negative word-of-mouth. But when you consider the alternatives - rogue users manipulating and posting on a board without regard for any rules and actively trying to destroy whatever community you've created - the value of stealth moderation techniques like this becomes clear...