Everything in Moderation

"Out of moderation a pure happiness springs." Johann Wolfgang Von Goethe

Tagging difficult users with infectious markers...

October 28, 2003

Following on from my earlier piece on Stealth Moderation, I thought I'd talk a bit about a technique we've been using on Barbelith recently to deal with a particularly thorough and unpleasant troll attack. But first I should recap the specific situation that we're trying to resolve with this technique.

One of the great difficulties with looking after an online community is that it's generally almost impossible to ban a user from a site if they're dedicated to breaking in. The only circumstances in which you can ban them are when you require payment by credit card or some hard-to-obtain, unique form of real-life identification, or when you're prepared to take the situation to the police. Otherwise all they have to do is sign up for a free e-mail account and re-register on your site. Within ten minutes they can be back causing trouble, your ability to set the rules for your community space has been completely undermined, and there's very little you can do about it.

And that's only one use of multiple user names. Many trolling users will maintain several concurrent accounts, which they will use to support the position of their prime identity - making all online battles seem larger and more significant than they actually are and obscuring the fact that - at heart - it's just one troublemaker working quite hard to spoil the experience for all the others. These alternative user names are often known as "sock-puppets" for vaguely obvious reasons. Typically a troll of this kind will use their sock-puppets to post self-supporting messages like, "Hey, why are you being so down on the guy? I think he has a point and you're all being really **** about it". I've seen people use these multiple user names to create identities that are almost identical to other users' self-representations (a duplicated character in the username - or sometimes just a space after their name, depending on the software) and then use that identity to suggest that their alternative usernames "might have a point - maybe it's best not to wind them up any more", or even to suggest that their alternative trolling identity might have started investigating legal recourse. Even stopping new registrations won't necessarily stop this kind of activity as long as the e-mail addresses of long-dormant users are available to be contacted and appealed to. And there will always be one user with two or more user names who believes any kind of ban is a de facto attack and will support a long-term troll, however obviously destructive (or even illegal) they might be...

Essentially it all boils down to one problem - that you can ban user names easily, but it's far from easy to ban real-life people. There are many approaches to this kind of problem, but one thing is clear - on occasion users do need to be banned - however much we may wish it otherwise.

One approach that we've been using recently with a fair amount of success (although it breaks my first and most important rule of what constitutes a long-term successful moderation strategy) is based around finding ways of demonstrating clear links between user names - links that indicate that they are being used by the same real-life user or group of users. We use cookies again, so it's only going to work on platforms where you either have a web-based interface or write the client-side software yourself, but it really has proven extremely useful.

A user who we wish to tag is marked as tagged in the user table of the database. When they next log in, a cookie is placed on the browser that they use. From that moment on, any other user name that logs in via that machine will immediately and automatically be tagged in turn. If that latter user then moves to a different computer and logs in, that computer too will have a cookie on it that marks it as being 'used by trolling users' - and any subsequent logins on that computer by different user names will result in those user names also being tagged. At the individual level this means that each new user name can be directly and quickly identified as belonging to a troublesome user, but it gets even more useful when a group of users decides to share a new user name to cause trouble on a board. Every one of them will be tagged the next time they log in.
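
As a very rough sketch of the mechanism (all names here are hypothetical - this isn't Barbelith's actual code - and I'm assuming a web layer that can read and set cookies and a user table with a 'tagged' flag), the login handler might look something like this:

    # Hypothetical sketch of login-time tag propagation; names are illustrative.
    TAG_COOKIE = "prefs_cache"             # a deliberately bland cookie name
    COOKIE_LIFETIME = 60 * 60 * 24 * 365   # one year, in seconds

    def handle_login(request, response, user_store, username):
        user = user_store.get(username)
        machine_is_tagged = TAG_COOKIE in request.cookies

        if user.tagged and not machine_is_tagged:
            # A tagged account has logged in from a clean machine: mark the machine.
            response.set_cookie(TAG_COOKIE, "1", max_age=COOKIE_LIFETIME)
        elif machine_is_tagged and not user.tagged:
            # A clean account has logged in from a tagged machine: mark the account.
            user.tagged = True
            user_store.save(user)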

In order to make the process more useful, you can find ways of adding more information to the cookie. One particularly useful piece of information is which tagged user-name triggered the site to leave a cookie on someone's computer. This information can be particularly useful if you're unlucky enough to have attracted the attention of semi-organised groups of long-term troublemakers, since it allows you to track the course of your tag through the community and - in turn - enables you to clearly see specific relationships between individuals.
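
Continuing the sketch above (same hypothetical names, and a purely illustrative cookie value format), the marker can carry the user name that originally triggered it, so each newly tagged account records where its tag came from:

    def tag_machine(response, origin_username):
        # Record which tagged user name caused this machine to be marked,
        # e.g. a value of "origin=SockPuppet42".
        response.set_cookie(TAG_COOKIE, "origin=" + origin_username,
                            max_age=COOKIE_LIFETIME)

    def tag_account(user, request, user_store):
        user.tagged = True
        # Remember which earlier tag reached this account; over time these
        # records form a traceable chain of relationships between user names.
        user.tagged_via = request.cookies.get(TAG_COOKIE, "")
        user_store.save(user)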

What you choose to do with this data is another matter entirely. In order to avoid many of the fairly obvious ethical issues that surround tracking user information at this kind of level, we've operated on the basis of telling the user that they have been banned, placing the cookie on their browser immediately, and then waiting for them to try other usernames, which in turn are automatically and immediately banned. Obviously this approach is not without its problems - for a start it makes it easier to work out what is causing the bans (particularly for the more technically literate) and may help a dedicated long-term troll find workarounds - so you might want to obscure the issue a bit by triggering a user name ban only after a random number of hours or posts, so that there's still a perception of human agency behind the scenes. Either way, it's probably best not to name the cookie after the banning process, as that might give the game away...
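
To make that last point concrete, a delayed ban might be sketched like this (again, the names and thresholds are invented for illustration):

    import random

    def tag_and_schedule_ban(user, user_store):
        # Let the tagged account survive a random number of further posts
        # before the ban fires, so it doesn't visibly coincide with the tagging.
        user.tagged = True
        user.posts_until_ban = random.randint(3, 12)
        user_store.save(user)

    def on_new_post(user, user_store):
        if user.tagged and user.posts_until_ban is not None:
            user.posts_until_ban -= 1
            if user.posts_until_ban <= 0:
                user.banned = True
            user_store.save(user)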

Comments

Chirag said:

All this is fine, but what if this is happening from a cyber cafe, public library or some such place... an unrelated person may get caught up in the loop with no exit...

Tom Coates said:

To be honest, given the scale of the vast majority of online communities, the odds of two unrelated people using the same computer in a cyber café or library are really extremely small - particularly if you shut down the account at the moment in which banned users log in - as that doesn't allow people to tag other computers unless that user is already logged in on that computer (or unless they're prepared to try it in different environments, which would be demonstrably fruitless quite quickly). If they are logged in elsewhere, then it's probably not on a public computer, as that would give members of the public direct access to their login name anyway...

Sarah said:

Regarding cookies - nice idea but many people use cookie management programs (these are built in to some ad blocking programs) and delete cookies (selectively or en masse) as part of their computer housekeeping routine. Sadly, cookies are a naive method, especially considering the computer-savviness of many disruptive people.

Tom Coates said:

Using cookies is a far from naive approach! Admittedly there will be a group of people who can subvert that kind of approach because they're technically savvy and use cookie management programs, but I think you'll find that kind of troll more on tech-centred communities. And there are many other kinds of community. Particularly as there's no reason why you couldn't integrate it with the same cookie that holds your login information, so that if people refuse it, they simply don't have access to the site...

20721 said:

let me just say that, as a slashdot troll, i have a firewall which allows me to dynamically modify my o/s fingerprint, a highly adaptive cookie manager/poisoner that can decode many cookies in realtime (stop using urlencode!), a browser plugin that lets me modify my entire http header including user agent, a database-driven transparent proxy tracker which harvests new proxies 24/7, scripts to generate free email accounts by the 100's, good web scripting skills, and on a good day around 500 moderation points on slashdot from over 1,000 monitored accounts.

please allow me to introduce myself.

i wandered onto this article through a link on k5. abusive users are generally much more capable people than you seem to give them credit for.

let me give you 3 things that are actually hard to defeat:

1) credit cards.
2) captcha. (i hate good captcha).
3) SSL. trivial? no. you can't access an SSL site through almost all transparent proxies. it makes it very hard to switch ip addresses.

ttfn.

I would prefer not to said:

This is a pretty interesting approach, but I can see another potential problem with it. Imagine a scenario where the troll learns that such a tagging system is in place. He or she copies the cookie and creates a webpage that serves the same cookie. He or she then logs into the forum under an alternate user ID and posts a new thread with an enticing-sounding link to the page that serves the tagged cookie. Any user who visits this page would become tagged.

I don't think this would be beyond the abilities of a dedicated troll...

I would prefer not to said:

Or even copies the cookie...

Tom Coates said:

That shouldn't work. Cookies set under one domain should only be accessible to that domain. If someone could copy cookies like that then they'd be able to undertake the same strategy to login to other people's accounts on Amazon.

With regards to 20721 - I have no doubt that your troll status is secure and strong! All I would say is that I suspect you're a member of a relatively small and particularly technically skilled elite among trolls and - as such - not really the main subject of my posting on this site.

20721 said:

cookies from other domains: you have to deal with the browsers out there. no, cross-site cookies shouldn't work. unfortunately, 20% of internet users are still running windows 98. cross-site cookies work GREAT. the minute i discovered an evil cookie of this nature i'd be working on getting a cookie-implanting link inserted into a comment. it doesn't have to work on all browsers to ruin the system, hell it only needs about a 5% penetration rate. i can tell you from prior experience these attacks work.

"small and skilled": isn't that a perfect description of the population you're labelling as "trolls"? i believe you have misunderstood my point. my point was, if you embark down a road of engaging the enemy with security based on obscurity on the hopes that the enemy is dumber than you, you will lose- badly. well designed solutions, as a starter, are ones which you yourself cannot think of a way to break. this should be the starting bar.

regarding your article on stealth moderation, consider that you first state that there must be a public set of rules and consequences, and then you go on to state that there must be a secret set of undocumented consequences. are these two statements not mutually incompatible? what happens when an abusive user figures out that you're lowering his response times and alerts the community? how will they respond? will you be forced to go into stalin mode, deleting any comment which might reveal your secret? or will you fess up and hope everyone forgives you?

hint: this has happened many times on slashdot, to ruinous effect.

i suggest instead that you start with secure solutions and work within their framework.

interestingly, your site allows POST method from free anonymous proxies:

http://anon.free.anonymizer.com/http://www.everythinginmoderation.org/

hope you guessed my name!

Tom Coates said:

Nope - I don't think I'm talking about a small group of highly skilled individuals when I'm talking about trolls. That there are technically skilled individuals operating as trolls is self-evident but - on the whole - they're massively outweighed by that class of people that "one might want to exclude from an online community" based on whatever particular criteria an individual community decided were appropriate.

With regards to the techniques one employs - having a variety of tools at one's disposal is important, even if they don't all work in all situations. You'll get no disagreement from me that the best systems are ones which are clearly stated and can be openly presented to people for analysis - much like techniques in cryptography. I'd also agree that online communities should overtly state the specific rules and objectives of the site and the consequences of breaking them, and have some kind of mechanism for enforcing those decisions - whether that be a highly distributed moderation system or credit card payments. But those only go so far, and I don't think it's unreasonable to suggest that site owners should either be aware of - or allow themselves to be inspired by - different approaches.

Tom Coates said:

I'd just like to add that options like requiring credit card sign-ups are clearly highly effective, but then again they're not particularly useful in those circumstances where people wouldn't be prepared to hand their details over, in which case they'll just kill the site. That would be true of most sites, in fact. And captcha is all very well, but it makes sites totally inaccessible to people using speech browsers and the like, and so for many government-funded organisations it would actually be illegal (unless there were no other options, at which point it rather loses its value). It's a much harder project finding ways of moderating relatively free and open sites in sensitive ways, and most of the battles between site owners / communities and the trolls that operate on those sites are heavily stacked towards the troll. Part of this site's purpose is to redress the balance by supplying a variety of techniques - more or less powerful - that people might use to inspire thought around these areas in their own communities.

20721 said:

i'm not saying that you shouldn't explore all the options. but "stealth moderation" is, by its nature, a secret discouragement system. it means that, in your list of rules & consequences, you must lie to your community and accept any negative consequences of this lie. i just can't seem to reconcile it with your statement that there should be a clear code of conduct.

i believe that it takes a certain amount of hubris to assume that the people you want to exclude are, by their nature, not as smart as you. you may be right about the people you're trying to exclude; i defer to your judgement, i'm not a member of the communities you are; but where i come from, the best & the brightest are the ones being cast out. they're cast out from communities by the following chain of events:

1) secretive backhanded moderation tactic by the admins is discovered
2) someone alerts the community
3) the most technically apt in the community are able to reproduce the backhanded moderation tactic and verify its existence
4) these people call foul and are labelled "trolls" for doing so, leading to the institution of more of 1) (repeat).

this is how i started down the road i'm on. i was one of the many people who discovered that the people at slashdot were secretly moderating the users' comments, and one day they moderated the same comment 800 times - and then they lied about it, and said anyone who told the truth about it was a "troll". hence i became what they called me.

what i am trying to impress on you is that when you talk about creating a false code of conduct and secretly screwing users, you are engaging in duplicitous behavior. at this point, the truly objective observer must ask: who's being more abusive - the lying administrator, or the banned user?

treating people with respect tends to minimize the creation of a hostile populace. that's all i'm trying to say here. treat people with respect. be honest with them. don't try to solve a social problem with a technical solution - don't try to secretly tag their browsers and "infect" their computers, don't emulate the god damned RIAA for crissakes - be honest, be mature, be a good person. if you need to take disciplinary measures in an online community, make them public, make them based on public policy, and make them effective.

trying to excuse your backhanded cookie-poisoning scheme as "considering all the options" is, IMO, crap. it's not an option to an honest person. and rue the day when someone posts the inevitable "OMG, look at this cookie they poisoned my browser with!!!" message.

I would prefer not to said:

I wasn't referring to copying cookies from an unsuspecting user's computer, but to serving a copy of the tagged forum cookie (with domain restrictions removed by putting three periods '...' after the domain serving the cookie). As far as I am aware this is an exploit that worked and still works with Netscape 6.1, Mozilla 0.9.7 and IE 5. Might even be an issue with later browsers.

Tom Coates said:

Back to 20721 - I think maybe we have a miscommunication about the circumstances we're talking about. When an individual comes onto a board or into a community and starts talking about feeding lesbians to alsatians or killing black people with tire irons (as has happened on a community I started) and none of your conventional mechanisms for getting rid of them work, then I will quite cheerfully advocate the use of rather more hardcore methods. Fundamentally the owners of these shared community spaces are in part responsible (sometimes legally) for the behaviour of people on their boards, and under certain circumstances they should feel totally within their rights to employ any technical 'trick' they need to. Again - you'll get no argument from me: it's not a desirable situation to find oneself in, and I wouldn't recommend using techniques like this to deal with people you simply don't like, or as a method of first resort - but in certain circumstances it can be both effective and necessary.

It's been really useful having you post your opinions, however, because you've outlined a range of circumstances and potential consequences that might emerge should people use this kind of technique, which should give people a better sense of its potential benefits and drawbacks as an approach.

20721 said:

cookies guy: agreed! the exploit that allows you to insert cookies readable by other domains exists in MANY browsers and was the one i was referring to, as well. i have some javascript that works across many of the available platforms...

tom:

"under certain circumstances they should feel totally within their rights to employ any technical 'trick'"

i guess this is where we fundamentally disagree. i acknowledge freely that there exists no limit to the vulgarity of what can be entered into anonymous discourse on the internet, every politically incorrect viewpoint on earth from anti-semitism to racism of every stripe, jumping unexpectedly into a spirited discussion about lawn care or function pointers. i know this.

i know also that law enforcement on the internet is almost completely unresponsive except when defending the interests of highly profitable corporations, stacking the odds against site administrators even more and making the job one of the most thankless & frustrating imaginable.

but at heart, i am a liberal, and i believe that in order to stake out the moral high ground, you have to be honest and forthright in your affairs, ESPECIALLY when you are dealing with those that anger you the most. in virginia today the men who gunned down 14 people are getting to stand trial, parade around in suits, grill the people they shot at and their families, and basically enjoying a due process that they do not deserve under any moral code imaginable.

and i defend their right to that process. i believe in that right because i believe that we as a race cannot ever trust in our own infallibility of judgement of others. and while the cases you cite are extremely clear-cut, i know that they are not all that way.

in short i believe that the people who must be treated with the most public, forthright, and open methods of censure are those who offend us the most. i do not believe that trickery is ever as effective as open methods because trickery is, at its core, dishonest to both the person being tricked and the online community you have secretly enacted policy for.

i believe that secret punishments inevitably lead to abuse and combativeness, that they lead to an arms race against people of equal intelligence and unlimited free time.

anyway i could say more and dump off links to historical examples, but i think we've both highlighted our viewpoints clearly (and in my case with excessive verbosity as well).

thanks for the thought-provoking discussion, for your courtesy, for your sympathy, and your taste.

Waldo Jaquith said:

My friend Justin revealed to me a damned fine technique for cross-referencing user accounts: compare passwords. So many people use passwords like "password," "secret," "god," etc. that it's not a useful method when looking at a large dataset. But when attempting to determine if a small number of accounts are shared by the same person, this can be a useful thing to which to refer.

Note that if you are storing passwords properly -- which is to say that you're not storing them, just hashes of them -- you can only determine if the user is using identical passwords. If they're using "monkey2" for one account and "password2" for another, the hashes will be drastically different, and so no comparison is possible. Unhashed passwords do not present this problem, of course.
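
A minimal sketch of the comparison Waldo describes (field names are purely illustrative, and it assumes unsalted hashes - per-user salts would make identical passwords hash differently and defeat the comparison):

    from collections import defaultdict

    def accounts_sharing_a_password(accounts):
        # Group account names by their stored password hash.
        by_hash = defaultdict(list)
        for account in accounts:
            by_hash[account["password_hash"]].append(account["username"])
        # Only small groups are interesting: very common passwords ("password",
        # "secret", ...) would produce huge groups that mean nothing.
        return {h: names for h, names in by_hash.items() if 1 < len(names) <= 5}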

Richard Soderberg said:

Do we harbor such inflexibility towards others in our forums that, when they present themselves in a manner that is disruptive or unwanted, we cannot simply move on?

During years of IRC, I've only used the ignore feature *very* occasionally, and not within the past three years; it seems rude, somehow, to stop listening to someone. I've exercised the right *not* to respond repeatedly and often, but I don't really feel that I have a right to use technology to remove their words from my datastream.

I'm not averse to using peer-driven filtering methods to modify what I see, however; this is the most effective route to hiding spam from my daily view that I've found. I don't mind spam, though, except that I have no way to prioritize it in my workflow at this time; if spam were flagged as the lowest priority in my mailbox, and I could accurately sort by priority, I'd never worry about it again -- there's too much content to deal with to pay attention to things below a certain threshold very often.

This applies to how I read Slashdot, as well; I'm perfectly willing to (and, in fact, by default) filter comments based on a custom set; I downgrade things marked Humor, because generally if something's moderated Humor I don't have time to sit back and remember how to appreciate that kind of joke. I focus on Insightful, and to a lesser extent Interesting, because that's where I find the value that's relevant to me.

Occasionally, though, I dive into the threshold of -1. It happens rarely, but it can be fun -- when I'm in the mood. When I'm not, the filtering I use hides the content from me *temporarily* -- when I'm in the mood. I don't begrudge them their right to use the forum for purposes that attract negative moderation, though; I would find it annoying were I unable to prioritize their enjoyment out of my enjoyment.

The right to use a public forum is indeed the right to do whatever one wishes; lacking in most public forums is the capability for users to indicate, for the sake of calculating priorities, that which they wish to see more of over that which they wish to see less of. I dislike Slashdot's use of -1 to +5 thresholds, though; I feel it should be more like the personality bar graphs in the Sims.

Each post should have a community-driven, peer-opinion formed collection of rankings; some posts will collect more Humor rankings than Thoughtful, others will collect more Esoteric than Popular, others will collect more Inflammatory than Diplomatic. Given access to a wide variety of rankings, I can come to better understand my preferences and start to indicate them to the system; if I like diplomacy over inflammation, I can tell the system as much, and it can sort my view more usefully.

Most posts would be rated in multiple categories, something I'm okay with; I require every user to be capable of rating, though, and all ratings must be visible publicly *and* attributable to individual users, without administrative intervention. It must be an open system, or there can be no peace -- for who can trust what is closed and unviewable to the public?

20721 indicates above that 800 moderation points were applied to a single message in a day, and that his identifying of this problem resulted in the label 'troll'; to this, I say "identify publicly each user's rating": there should be no hiding of opinions behind curtains, no shielding of users from the opinions of others. By placing your name to an opinion in a forum, it is implied that others have a right to do the same.

I'm not sure how to handle anonymous moderation technologically; it is my preference to identify users first, to prevent gaming of the system. I don't know if it's right to stop the thousand user accounts described by 20721, because the option to hold multiple identities is indeed a right in my mind.

I am bothered that a large quantity of identities can be used to control process, shifting too much power into the hands of one controlling individual. I think that the most effective route to solving this is a combination of tapping into a network-wide reputation system -- eBay, Slashdot, Kuro5hin and Advogato are all useful sources of reputation data -- and applying a 1/x scale to all moderations, such that the 100th vote for a given weight (say, Humor) means much less overall than the 1st vote does.
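
A small sketch of the 1/x weighting Richard suggests (the function is invented, just to make the arithmetic concrete): the nth vote for a given ranking counts 1/n, so piling votes onto one ranking has sharply diminishing returns.

    def weighted_score(vote_count):
        # Total weight of vote_count votes for one ranking: 1 + 1/2 + ... + 1/n.
        return sum(1.0 / n for n in range(1, vote_count + 1))

    # The first vote adds 1.0, the 100th adds only 0.01;
    # 100 votes for "Humor" total roughly 5.19 rather than 100.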

While it's impossible to protect communities against the efforts of one person masquerading as a thousand, it would be a lot more intelligent to channel those efforts into something useful to the entire community. I would sacrifice my ability to post comments on Slashdot for all time if I were offered the chance to moderate every comment, all the time.

I'd love to hear your thoughts on this, 20721, even though you've already stepped away from the discussion; if not, thank you for the time you've already committed, and happy trails.

Mike said:

I think, for me, the problem with trolls is they force themselves into societies which don't necessarily want them. I used to run MUDs, and every now and then it would happen that a user would do something he oughtn't to have (it's almost always a he) and I would have reason to not want that user logging in any more. So we'd ban usernames, IPs, and, if necessary, IP blocks. (This was in the days before easy access to proxies, etc. :) )

What gave me the right to do that? MUDs - and message boards, and weblogs, and IRC servers - are not necessarily public resources, they don't tend to be democracies. They tend to, in fact, be autocracies, and why shouldn't they be? Their hardware is not generally community-owned, nor is their bandwidth, or the time taken in setting up the software to enable the community.

So why should *anybody* have the *right* to join that community? (Personally, I have difficulty with the concept that human beings have any inherent rights at all, let alone the right to abuse the good nature and resources of others, but that's another discussion.) *Why* is it wrong to desire to not see input, such as it is, from trolls? For very large sites, such as /., one could argue that the community is so large that it's almost impossible to gain consensus from the users: "ban this guy, don't ban that guy", but I think user-guided moderation - done properly - goes a long way towards accomplishing that goal. I don't know that I'd say that /. does it properly; I think that's likely virtually impossible, given that it's also virtually impossible to guarantee "one person, one vote".

Like Richard Soderberg, I don't make much use of ignore features, killfiles, and the like. Unlike him, perhaps, that's because I've heavily restricted the communities that I care about. I stopped running MUDs partially because of abuse from people who felt it was somehow their right to dictate to me what I should do with my creation. I stopped reading most usenet communities because of the influx of trolls and spam (which, in that context, I consider also to be trolling, just a specialised subset of such). I read a few web boards on subjects I'm interested in, but I rarely venture into the comments and even more rarely actually post one myself.

I guess what I'm trying to say is that I believe behaviours like trolling - but not necessarily restricted to trolling - can cause the trust in a community to break down, and that causes the community to no longer be one - so what's the point? In some (many? all?) circumstances, it's the online equivalent to vandalising public parks.

E said:

The European Convention on Human Rights talks about the right to "peaceful enjoyment of possessions". Some of the "rights" in the Convention are controversial, but this one seems basic to civilised society. How different should things be in cyberspace?

LKM said:

> Personally, I have difficulty with the concept that human beings have any inherent rights at all

How can you say that you have the right to own something and then, somewhat below that, claim that humans shouldn't have any inherent rights at all? I think you misunderstand what "rights" are.

nyaya said:

to move away from the technical argument slightly, I'm seeing two sub-threads emerging:

1) Should one use any method available to enforce community rules (including non-published methods), or publicly state all rules and enforcement methods and use only those?

2) Do we have the right to create rules of conduct and enforce them within our communities, and what happens if we aren't physically able to enforce them?

Mike's argument above is that if we privately own the communities, then the owner has the right to be dictator, set the standards they please and kick the undesired out. This ideology might be suitable for small groups, but we hope it doesn't spread to nations or large electronic communities. Communities are increasing in size, reaching greater distances and becoming more diverse. A semi-democratic method of deciding rules and punishments (and possibly issues of ownership) is a likely future...

If we then presume a democratically chosen set of rules and punishments: what happens if we can't enforce them? To give an example in the physical world, what happens when you have two people driving around shooting people at random in a community? The community agrees this is not desired behavior. What if the police have no legal or physical power to stop the snipers? Do you simply ignore the snipers? Wear sunglasses? The social consequence of doing so is a decline in outdoor public activities, living in a state of fear and not meeting as many of your neighbors.

Conclusion: At least in the offline world, lack of ability to enforce rules decided by the community paired with people willing to break the rules leads to the demise of the community (or at least to an unhealthy state). Admittedly, physically shooting someone and verbally attacking them online are different crimes, but the results are similar.

"20721" justifies their behavior by saying it is to promote public disclosure of enforcement policies, but I think he/she might still be trolling if they were made public and used that way. 20721, what would Slash have to do to convince you to stop being a troll? Having watchdogs for governing bodies is useful, having anti-social personalities ruining good communities isn’t.

romulus said:

I think the point that is being missed is the argument that, if you come out and sell yourself as a pro-openness, pro-freedom forum, and then engage in closed, censoring, restrictive, underhanded tactics, in order to control your forum to only that expression you like, then you're a hypocrite, and should be exposed as one. At least, your inconsistencies should be publicly displayed. In 20721's case, he ran into a Slashdot underhandedness which is not being used to improve the quality of the community, but to cover up uncomfortable truths which would taint Slashdot's image as a forum about openness (e.g. open source, free speech) if it were exposed.

What if, in nyaya's example, the police chose only to arrest one of the snipers, because that one was shooting people in a favored group, but the other was shooting people in an unfavored group? What then? Some would say civil disobedience would be a valid way to expose that inequity.

Mike said:

LKM - well, as I said, that's another discussion altogether; let's just say that if anybody has any "rights" in an online community, it should be those who set it up, not those who "merely" participate in it, particularly those who have had it made clear to them that they're not welcome, no? Yes, this glosses over the fact that without users, there is no community, or the community is so small as to be utterly irrelevant.

nyaya - we already do this at national levels, they're called immigration policies. Generally, societies don't willingly allow anti-social people to join them. The definition of anti-social changes depending on the society, of course. Most western countries don't reserve the right to eject those who do not conform to their society's rules, but they do reserve the right to have such an individual forcibly confined. I'm not sure what online actions could map to a jail.

romulus - agreed, but must they necessarily be exposed within that community? I'm not disagreeing, just wondering.

buhagiar said:

Slightly OT - but this topic of truth in online communities is a very interesting one. For the past two years this page (http://www.metafilter.com/newuser.mefi) has greeted anyone who wants to become a Metafilter member. Only once, for a short period, have I seen a different page taking on new members. After two years, I'm not sure I believe it anymore. I'm starting to wonder if perhaps what is really going on is that the owner of the site doesn't actually want any new members, but is afraid to say so for fear of getting adverse publicity.

In a slightly different vein - internationalisation. Despite the fact that any site can be viewed from anywhere in the world (China's firewall notwithstanding), some people in online communities really dislike having to deal with posts from countries other than their own, and either ignore them, or actively discourage them.

For example: Have a look at any story on Slashdot that has anything to do with Australia. Likewise kuro5hin and Metafilter.

Mr. Darl McBride said:

I have to admit that I'm a bit confused by this line of questioning. It was my understanding that the Gay Movable Type Bloggers all had Macs, and were therefore unable to block IP addresses anyway?

I'm willing to accept a cookie however, if that helps.

Waldo Jaquith said:

Political theorist Hedley Bull wrote, in a 1977 essay, that order among states and justice within them are often mutually exclusive. That is to say that a state that pursues order does so at a loss of justice, and states that seek to be just do so at a loss of order.

Mr. Darl McBride said:

All I know is that there's a lot to be read into the schoolmarm line: "Ignore him and he'll stop."

There's also a rare kind of fun to be had when someone tries to turn this sort of thing into a tech war.

cedric Muller said:

hey, you could use a small 1px by 1px swf that writes a SharedObject! temporary solution though...

Doug Gibson said:

Wouldn't it be interesting if 20721 was really Tom Coates. I mean, the whole discussion didn't REALLY get interesting until 20721 came into it...

Tom Coates said:

I'm afraid I'm quite dull.

Richard Soderberg said:

What's interesting about "ignore them and they'll stop" is that it's an ineffective solution when scaled up to a thousand users -- because, out of a thousand users, someone is always guaranteed to pay attention. We like drama -- otherwise soap operas would never spawn magazines about what's happening next week -- so in large groups it doesn't seem to work simply to hope that users will all work in cohorts to end what I see as a form of online drama.

If we could get them to self-tag as drama, though, that'd make all this moderation a lot easier -- and then I could subscribe to the drama threads separately, for entertainment value. I guess half the fun in it is seeming like a real comment, though.

Jamie McCarthy said:

"20721" is trolling you all. He has scripts that generate hundreds of accounts, sure -- and we nuke them all after he runs them. He doesn't have thousands of accounts, and he sure doesn't have 500 mod points. (And the fact that he gets off on making up this stuff, I find really quite sad.)

I'm not going into detail about how I know this because (1) it's boring and (2) I've no desire to explain to the trolls how we're shutting them down. What I will say is that our system of having users metamoderate other users' moderations is an excellent self-correcting scheme. Anyone who starts doing stupid moderation gets slapped down repeatedly by our users, until the mod points start coming slower and slower, and then finally not at all.

If he really owns 1000 accounts getting mod points, he's welcome to prove it by sacrificing just 6 of them, and moderating this comment from -1 to 5: http://slashdot.org/comments.pl?sid=84769&cid=7395503

I skimmed the rest of this discussion and I guess Slashdot is being slammed for not being "open" enough. All I can say is to read the FAQ (http://slashdot.org/faq/com-mod.shtml#cm605 for one), and to point out that there is no other system that has worked half as well. True, discussion on Slashdot is not the best it could be (and of course we hope to continue to improve it in the years to come).

But when other websites get to a tenth the size of Slashdot, they are drowned in trolls, and attention-seekers in general. Slashdot has many parts of its moderation and submission systems invisible to the general public, this is true. And that is a large part of why it hasn't turned into a navel-gazing clique. On sites that make "openness" into a fetish, half the discussion ends up being about the site itself, and the site gets dominated by a handful of people with too much time on their hands. Which is fine if the site owners don't mind -- but that's not what Slashdot is about.

I assume anyone reading this blog is already familiar with this phenomenon. We've all seen public discussion forums with far fewer than a quarter-million active users crash and burn. Maybe the question is not "how can Slashdot refuse to let every user see every inch of its database," but instead: "why do sites that pride themselves on openness always seem vulnerable to attention-starved social engineers?"

Oh, and re the topic of this blog entry: Slashdot doesn't do any of this kind of tagging. I can't imagine it would be productive against determined trolls (of which Slashdot has at least a dozen).

Tom Coates said:

Jamie - thanks for your input. If you do have any other information regarding online moderation schemes, I'd be really grateful if you'd consider sharing them as there are a lot of online community maintainers out there who could probably use the information.

20721 said:

a lot of people to answer, i'll try my best to get to you all:

nyaya: you haven't caught the irony yet. a "troll" on slashdot is anyone who disagrees with the administration - it's the name they call us. therefore if the administration & i ceased to disagree, they would stop calling me that name, and i would cease to be one. to answer your question more explicitly, those of us who resist have through the years made only one request: that the slashdot administration be honest about the influence it exerts over the moderation system. specifically, because admins have unlimited moderation points, we'd like to see a visible marker every time an admin moderation is made. otherwise who's to say what agenda they're pushing? this "freedom of information act" is necessary because they have historically mass-moderated certain topics in secret - research the mass-moderation scandal if you are so inclined.

mr soderberg: not only was the post moderated over 800 times, but half of that was THE ADMINS MODERATING IT DOWN - a fact which they only admitted to a week later. the admins trump 400 users - and in their mind, this is justice. this is the pattern of dishonesty i refer to.
i don't explicitly disagree with the idea of anonymous moderation. i think of users as voters: it's ok to protect their vote with anonymity. it's a choice of the forum admin. however i think of admins as the government: the government has unlimited power and should be open when using it. i oppose the sealed warrants granted by the PATRIOT act on these grounds. what other than unchecked secret power has been the greatest threat to democratic societies? slashdot's "rtbl", the mass banning of moderators who disagreed with the slashdot administrators - this was the star chamber, the secret fbi wiretaps, the things we wrote the freedom of information act to prevent. when admins moderate, masquerading as users, this is like allowing the government unlimited votes for its own candidates. as far as multiple accounts go: jamie mccarthy has multiple slashdot accounts, so that should settle it for you. as for your request to make moderation non-anonymous, don't push it too far on slashdot or pretty soon you'll be in the hole with me.

romulus: i have NO PROBLEM with closed web communities. i am a member of a few. my problem specifically is with communities that claim to be one thing (user moderated) but are secretly another (secret admin retaliation against political opposition). in short, as you have pointed out, the problem is one of hypocrisy.

Mike: "I have difficulty with the concept that human beings have any inherent rights at all". wow. i'm gonna take jefferson over you on that one.

jamie: i realize it must be "frusterating" not to be able to moderate me down on this forum (and then pretend the users did it!), but there's no need to hurl insults. as you well know, moderating the comment you linked to would cause our system to be discovered and banned. it's a nice catch-22 for you: if i moderate the comment, you catch me, and if i don't, you can call me a liar.

i'm not going into detail about what i've got because (1) it's boring and (2) i've no desire to explain to the admins how we're defeating their system.

honestly i wish you'd stop being so defensive and combative about your work and just let people see editor moderation, but it's been three years now that you've spent secretly raping the moderation system so admittedly you've got your work to protect, and i can respect that.

i am not tom coates, and tom coates is not dull! in my opinion anyone who proposes trying to secretly poison the browsers of users the admins don't like is a VERY interesting person.

Yo Vinny said:

I think this is fascinating. It's been going on for decades now. I remember people using double handles on BBS's for both creative and trouble-making reasons.

Sometimes it's done to stimulate messages and reactions. Other times it's just somebody having their own brand of fun. We used to call a BBS that had its own "Law of the Land." If you violated either the LOTL or a sub-board's rules, you could be "arrested" and taken to "court." If you were in jail, it was the only sub-board you were allowed to see during your sentence.

The Sheriff's job was to police the boards (but not delete/alter messages unless they contained something illegal..)

It was a fun system; even the SysOp wound up in "court" for one reason or another. He also believed that people would call back if they got email after the first time they logged on. He formed the "New User Welcoming Committee" with both real and double handles. This stimulated posts and kept the place busy 24 hours a day. One of the pieces of advice he sent out to new callers was "Don't respond to anybody unless you want to get involved with them." "Nobody here likes you very much, nobody here likes each other very much." Pure gold.

I think the only real rule was not to physically mess with the BBS (crash, etc.) because if it wasn't up, well, then there'd be no BBS.

As much as I think it's interesting, I don't have a solution. Everybody's got an opinion and I enjoy reading them all. Like they say, it's when they stop noticing that you have to worry...

Waldo Jaquith said:

All of this was so much easier back in the CID-enabled days of BBSing. If a user was a problem, their phone number could be blocked at the telco level or by the BBS. Modem-equipped computers were sufficiently rare that a problem user would very rarely be able to find an alternate method of connecting, and thus the problem was solved.

Jamie McCarthy said:

Tom - you're on my NetNewsWire, I'll keep reading :)

Paul Smith said:

Interesting site and lots of overwhelming information. This all seems to be from an administrative level however.

What can a community board user do when the board is attacked by trolls and the [terms of service] for the boards do not seem to be enforced and there is no visible path to even lodge a complaint?

How can one find an administrator when they are not posted?

Thanks

romulus said:

jamie,

i have to admire your definition of the term "stupid" as in the phrase "stupid moderation". As I understand it, you have people moderate, and then you have the court of public opinion decide on that moderation.

And you punish those moderators that get voted against. And apparently punish those voters that vote unpopularly. Which is the reason I eventually refused moderation even though my karma is sufficient to get mods fairly often.

The problem is, not even the majority of Slashdot posters -- especially the majority of Slashdot posters with an anti-mod axe to grind -- can be relied upon to make sensible judgements. At best, you would have to expect that the metamodding population are going to be equally as wrong as the modding population.

Time and time again while metamodding I saw well thought out posts get modded down, not for content or even validity or truthfulness, but for unpopularity. That's bullshit, and not what the mod system is for (at least I haven't been able to find the "Lynch" modding choice).

And to Waldo: I don't know about you, but in the BBS circles I ran in, *68 was your friend, and encouraged by most (including sysops). (Yes Virginia, we were privacy nuts even then.)

Zbigniew Lukasiak said:

The slashdot example shows pretty well how the average opinion of a large number of people is pretty mediocre. But on the other hand, we need some kind of voting and moderation, and voting is always a popularity contest. I believe the solution is to let everyone choose the group of people whose votes count for their individual message filter. My experience with slashdot has improved very much since I started using the Zoo system of friend/foe indicators. The problem is that it is a bit too constrained to work well.

Joyce said:

Maybe they could block trolls using some Bayesian method, the way some email filters block spam.

Darryl Troll said:

Sorry Jamie, but your anti-troll methods are weak and pitiful. Have you checked the BSD section of slashdot?

Two out of every three posts are troll posts or crapfloods- effectively obliterating that section.

Try again.

Kakjs said:

The way I solved this whole problem was to set up an alternative discussion forum for that particular subject (i.e. person), and subsequently move any postings they continued with on the main forum, over to that specialty forum. These people hate being ignored, so you have to cultivate them a little by replying to their postings over in the new forum.

That solution satisfied their demands for free speech (yeah, i know, I know, they don't have these rights, but try telling this to them!), while uncluttering the original forum from their drivel.

ALLIN said:

>1) credit cards.
>2) captcha. (i hate good captcha).
>3) SSL. trivial? no. you can't access an SSL site through almost all transparent proxies. it makes it very hard to switch ip addresses.

For CAPTCHA implementation, I suggest reading:

How to protect online forms
