Everything in Moderation

"Moderation is a fatal thing. Nothing succeeds like excess." Oscar Wilde

On stealth moderation or "Blame the technology"...

October 18, 2003

One of the biggest problems with finding ways to moderate users is how to handle the reactions of the people you moderate. If a user is banned or one of their posts is deleted, then - for the most part - it's a total fantasy that they'll look back at their actions with shame, accept that the response was justified and move on to other services on other people's sites where they will now have learned their lesson and operate more responsibly. For the most part, deleting posts and banning users is considered either "unfair", "excessive" or even an overt act of aggression against the user concerned - no matter what kind of appalling behaviour they've been undertaking. Some users genuinely believe that their activities online have no consequences and hence they cannot be held responsible for them.

If users believe themselves to have been 'unfairly attacked', then they'll respond in kind - a user who feels themselves to have been wronged will often use every mechanism at their disposal to make their position clear to the rest of the community - their aggressive actions will be stepped up, their contributions will become more confrontational and (if they've been banned), they'll try to find every possible way of regaining access, whether by reregistering with a different user name (often using a free e-mail address), using other computers or changing ISPs (to circumvent IP banning), or by harassing other members of the community who they feel have been complicit with that action 'against them'.

Given that there are so many ways in which a user can cause problems for a community, and given that it's extremely difficult to ban users outright, the question for people who run online communities has to be how to avoid situations in which users feel they have an axe to grind. One approach is purely social - and brings up the non-technical aspects of moderation. It's important to have a clear and explicitly stated set of rules about what is and isn't acceptable behaviour, a clear set of procedures that are followed when a user misbehaves, and a clear path for appeal and rehabilitation that makes punishments easily understood and non-final. Having the patience to explain this process to users is a necessity, and you're quite likely to find discussion of it becoming a staple part of the community itself - which can be quite wearing and distract from the community's ostensible point - but fundamentally it will save you considerable time in the long term.

Another technique is purely technical and is based around finding ways to make users go away on their own, to leave your community without having to be banned. If it sounds duplicitous, it's because it is duplicitous, but it can work extremely well. The technique is well described by Philip Greenspun halfway through Chapter 15 of his Guide to Web Publishing:

I felt humiliated by the situation but for a variety of annoying reasons, it was taking me months to move my services to Oracle. Then it hit me: Sometimes a system that is 95 percent reliable is better than a system that is 100 percent reliable. If Martin was accustomed to seeing the system fail 5 percent of the time, he wouldn't be suspicious if it started failing all of the time. So I reprogrammed my application to look for the presence of "Martin Tai" in the name or message body fields of a posting. Then Martin, or anyone wanting to flame him, would get a program that did
ns_write "please wait while we try to insert your message ..."
ns_sleep 60
ns_write "... deadlock, transaction aborted. Please hit Reload in five or ten minutes."
The result? Martin got frustrated and went away. Since I'd never served him a "you've been shut out of this community" message, he didn't get angry with me. Presumably inured by Microsoft to a world in which computers seldom work as advertised, he just assumed that photo.net traffic had grown enough to completely tip Illustra over into continuous deadlock.
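Greenspun's trick ports to any web stack. Here's a minimal sketch in Python - the function, block list and `delay` parameter are all hypothetical, not his actual AOLserver code, but the mechanism is the same: targeted posters get a slow, plausible-looking "database failure" instead of a ban notice.

```python
import time

# Hypothetical block list of names that trigger the fake failure
BLOCKED_NAMES = {"Martin Tai"}

def handle_post(name, body, write, delay=60):
    """Serve a plausible fake database failure to targeted posters.

    `write` sends text back to the browser; `delay` is the stall in
    seconds (a parameter here only so the sketch is easy to test).
    """
    if any(blocked in name or blocked in body for blocked in BLOCKED_NAMES):
        write("please wait while we try to insert your message ...")
        time.sleep(delay)  # long enough to frustrate, short enough to look real
        write("... deadlock, transaction aborted. "
              "Please hit Reload in five or ten minutes.")
        return False  # the message is silently dropped
    # ...the normal insert path would go here...
    return True
```

Note that the check covers the message body as well as the name, so people flaming the blocked user hit the same "failure" - exactly as in Greenspun's version.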

This approach works extremely well in a whole variety of circumstances. At a company I worked with we would mark particularly troublesome users with a flag on their user record, and then whenever they tried to use the website we'd put in a random delay between their request and the page being returned. After a while the site became functionally unusable for them and they'd simply leave. On the web this kind of functionality could be easily circumvented by signing in under a different user-name - so we built it in such a way that it would leave a cookie on their browser that wasn't attached to their user name but was set when that user-name logged in. The cookie would last as long as it was able, and any user logged into the board via that browser would experience the same delays. The effects were dramatic and highly successful - bad users would leave out of frustration without causing a fight. The service simply wasn't particularly good as far as they were concerned.
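That flag-plus-cookie scheme can be sketched in a few lines - all names here are hypothetical (the cookie name is deliberately innocuous-looking, and `sleep` is injectable only to make the sketch testable):

```python
import random
import time

FLAGGED_COOKIE = "prefs_v2"  # deliberately innocuous-looking cookie name

def apply_stealth_delay(user, cookies, set_cookie, sleep=time.sleep):
    """Randomly slow responses for flagged users, and tag the browser
    so re-registering under a new name doesn't clear the delay."""
    flagged = user.get("troublemaker", False) or FLAGGED_COOKIE in cookies
    if user.get("troublemaker"):
        # Persist the flag on the browser itself, not just the account,
        # with as long a lifetime as the browser will allow.
        set_cookie(FLAGGED_COOKIE, "1", max_age=10**9)
    if flagged:
        sleep(random.uniform(2.0, 15.0))  # unpredictable: the site just feels slow
    return flagged
```

The key design choice is that the cookie isn't tied to the user record at all, so any account logging in from that browser inherits the delays.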

One problem with this approach - of course - is that it goes against the nature of established brands and service-providers to purposefully break their service for some users. It's always possible that it might affect how their brand is perceived and generate negative word-of-mouth. But when you consider the alternatives - rogue users manipulating and posting on a board without regard for any rules and actively trying to destroy whatever community you've created - the value of stealth moderation techniques like this becomes clear...


Ken Schafer said:

Great post Tom!

This reminds me of my experiences while running Sony Music Canada's BBS's back in 1994/95. Discussion of moderation in online communities and now weblogs is giving me flashbacks.

I've posted about the need for bloggers with live comment areas to begin thinking of themselves more as moderators and community enablers than writers or publishers.

Hopefully some good cross-pollination between online community veterans and experienced bloggers can help us find a civil way of dealing with comment spam and other related issues.

Frank said:

Good article, reminds me of moderating nthellworld, which had the most angry customers, so we really had a hard job to keep the place in order.

In most communities, people join and then become angry. On nthellworld, people would join angry ;)

I've recently joined http://groups.yahoo.com/group/onlinefacilitation/ if you haven't heard of it already.

Jason said:

As a new moderator of a new messageboard with a substantial number of people new to the messageboard experience, these types of discussions have been extremely helpful.

I'm still trying to figure out the best protocol for deleting threads and posts that fall outside the boundaries we've set for our board. I'm not sure I'm ready to do the "duplicitous" thing yet but I'm happy to know it's an option.

Adrian Holovaty said:

That's a genius idea. Thanks for sharing.

One thing that comes to mind, though, is the possibility of negative side-effects. Say John Smith is the troublemaking user, and your "faulty" technology successfully frustrates him enough to leave the site. But what if he keeps in touch with one or two of your decent, *non*-troublemaking community members -- through means *outside* your community (and, hence, your control) -- and tells them about the technological problems?

In that case, I see two possible negative side-effects:

* The offending user might cause the nonoffending users to lose trust in your site's technology. (That's not necessarily a big deal, but it depends on the site.)

* The nonoffending user might reassure the offending user that there are no problems, which could lead to an even more furious and determined John Smith.

These two possibilities don't necessarily outweigh the advantages of stealth moderation, of course; I figured I'd throw 'em out there for discussion.

Nora Femenia said:

Hi Tom,
you provide a very good hope that some control is possible. I'm managing this Forum and I'm at my wit's end, because there is so much aggression. They don't want a moderated one, but I would like to have more control on the interaction.
I was thinking of finding another Forum format, which could allow me to prevent a member from posting more than three times a day, or from using aggressive words, for example.
Is it within the things that you usually do to get a look and suggest ideas? What would be the conditions to have you do that? Thanks

Scott Moore said:

I can't believe I hadn't run across this before. I'm going to have to think about the pros and cons of implementing such a system. Are there any others who have been using such a tool and have they had success?

I think Adrian has a point that anyone implementing this system shouldn't rely on the mechanics being a secret. Always assume that the person you ban or convince to leave your online community is in touch with others still participating and that any "secrets" will get back to them.

That said, I just passed this tip on to a friend who is pre-screening contributions to his community since it could help with floods of inappropriate contributions by individuals.

Sunir Shah said:

To comment on your beginning discussion about inflaming attackers, we have discussed this to some length on MeatballWiki as PunishReputation. We agree that it is a bad thing to do; instead, we prefer to DissuadeReputation. Note also the current discussion at the bottom of the latter page about the ethical ramifications of this.

Just to hammer a round peg through a square hole, your interesting abuse of a rate limiter is dissuasion as well as it frustrates the malcontent's ability to create more reputation (i.e. through posting). I would be more concerned of course when other good community members experience actual system failures after having read this account here. They might think you were banning them for some arbitrary and evil reason. The Case of Badvogato. Quietly screwing with reality is never a good strategy.

Michele said:

I think the technical approach described is extremely dishonest, but I'd like to comment on the social approach described. In my experience, the implicit moral expectations expressed (i.e. "misbehaves" and "rehabilitate") are the real problem. Moderators who enforce a subjective standard requiring politeness or respect are going to spend a lot of time explaining themselves to people who don't share the same perception of the problem behavior.

I think forums should abandon the idea that they’re some sort of club, or that everyone likes each other. A forum exists for a purpose. What is that purpose? Tell the users that they are there for the purpose. The forum administrator owes first responsibility to the forum, not any one user or group of users.

I am running a forum with a different approach and doing some research on the results for an MIS degree (UC Berkeley). I have some user surveys on the results, if anyone is interested.

skebrown said:

I think a lot of political sites could/should take full advantage of this type of moderation. What a great article!

Mike Chrysler said:

I think it would work well for technical sites as well.

I don't think Michele has ever had her forum hacked. It often starts with someone who feels wronged for whatever reason. If you've ever run a forum with a portal it can be devastating - months and months of hard work destroyed.

And I hate to say it, but no forum will ever be totally secure. Forum script builders make a better mousetrap and hackers become more skilled rats.

Tony O'Hagan said:

I recently suggested a related feature be added to a popular forum system. I called it "Moderation by Stealth" ... having never seen this article! Out of curiosity I did a Google search and found this fascinating discussion.

My idea was that you could "stealth delete" a posting by making it invisible to all except the sender. You could also extend this idea by allowing a group of flamers to "happily flame away" but only they could see the postings.

See http://forums.invisionpower.com/index.php?showtopic=106843

Tom Coates said:

The only problem with this approach is that it assumes that people on boards don't have any communication with one another and/or don't have alternative user-names. Unfortunately that's often not the case.

kevin Barbieux said:

Funny, but I didn't make the above comment titled "the homeless". Someone used my email address as their signature. Talk about stealth!
