Everything in Moderation

"To go beyond the bounds of moderation is to outrage humanity." Blaise Pascal

Moderation systems for kids?

October 15, 2003

A few weeks ago the Washington Post put up an article called Cliques, Clicks, Bullies And Blogs, all about how children and teenagers are using the affordances and limitations of social software and community spaces to help them assert their dominance (often through bullying) in schools' social shark tanks:

"The Internet has transformed the landscape of children's social lives, moving cliques from lunchrooms and lockers to live chats and online bulletin boards, and intensifying their reach and power. When conflicts arise today, children use their expertise with interactive technologies to humiliate and bully their peers, and avoid reprimand from adults or foes. As parents plead technological ignorance with a my-Danny-hooks-everything-up sort of pride and many schools decline to discipline "off-campus" behavior, the Internet has become a free-for-all where bullying and cruelty are rampant."

These unpleasant but intriguing situations almost directly illustrate some of Clay Shirky's points in his article A Group is its Own Worst Enemy, in which he talks about the importance of structuring the space for social interactions online. He gives an example of a community that became overrun with new members who had no respect for the established patterns of behaviour that had evolved:

"The place that was founded on open access had too much open access, too much openness. They couldn't defend themselves against their own users. The place that was founded on free speech had too much freedom. They had no way of saying "No, that's not the kind of free speech we meant." But that was a requirement. In order to defend themselves against being overrun, that was something that they needed to have that they didn't have, and as a result, they simply shut the site down. Now you could ask whether or not the founders' inability to defend themselves from this onslaught, from being overrun, was a technical or a social problem. Did the software not allow the problem to be solved? Or was it the social configuration of the group that founded it, where they simply couldn't stomach the idea of adding censorship to protect their system. But in a way, it doesn't matter, because technical and social issues are deeply intertwined. There's no way to completely separate them."

That space - where technical and social issues are deeply intertwined - is what I consider to be the heart of the issue of moderation, which is almost as much about creating spaces where people have a purpose and an aim as it is about finding ways that those groups can effectively be managed or manage themselves. Moderation systems are designed precisely to cool off people's worst excesses and to try - through continual pressure and effort, whether manual or technological - to find processes and systems that let a community survive significant pressures and still achieve useful things.

The most interesting analogue between the two articles is in the specific kinds of behaviour that the groups are undertaking. Clay details some interesting chunks of Bion:

"The first is sex talk, what he called, in his mid-century prose, "A group met for pairing off." And what that means is, the group conceives of its purpose as the hosting of flirtatious or salacious talk or emotions passing between pairs of members.

"The second basic pattern that Bion detailed: The identification and vilification of external enemies. This is a very common pattern. Anyone who was around the Open Source movement in the mid-Nineties could see this all the time. If you cared about Linux on the desktop, there was a big list of jobs to do. But you could always instead get a conversation going about Microsoft and Bill Gates. And people would start bleeding from their ears, they would get so mad.

"The third pattern Bion identified: Religious veneration. The nomination and worship of a religious icon or a set of religious tenets. The religious pattern is, essentially, we have nominated something that's beyond critique. You can see this pattern on the Internet any day you like. Go onto a Tolkien newsgroup or discussion forum, and try saying "You know, The Two Towers is a little dull. I mean loooong. We didn't need that much description about the forest, because it's pretty much the same forest all the way.""

These three patterns - groups ostensibly about something that are instead actually obsessed with sex, cliques and "things that an individual associates their identity with" (you could probably make some analogy with brands here without much trouble) - are familiar to all of us. But we're also more than aware that when we were teenagers we were more susceptible to that kind of behaviour than we are as adults. Which brings me to an intriguing question. Given children's uncanny ability to manipulate authority figures and to feel out the rules of any given situation and attempt to exploit them - what kind of moderation process might both reduce the incidence of this kind of cliquey, pack-like bullying and implicitly educate the children in question about why that behaviour is counterproductive? Would a distributed moderation system that put power in the hands of all the children be a useful or counter-productive approach? And if so, how should their power scale? Should a clump of twenty people together have radically more power in their community than five or one? Answers on a postcard please...
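Purely as a hypothetical sketch of that last question: suppose a clump's combined voting weight grew with the square root of its size rather than linearly. Twenty children ganging up would then carry only about twice the clout of five. None of this describes a real system - the function and its scoring rule are invented for illustration.

```python
import math
from collections import defaultdict

def moderation_score(votes):
    """Score a post from (voter_id, value) votes, where value is +1 or -1.

    Each side's votes are pooled, and a pool's combined weight grows
    with the square root of its size: twenty downvoters carry roughly
    twice the weight of five (sqrt(20) ~ 4.5 vs sqrt(5) ~ 2.2), which
    blunts pile-ons without silencing genuine consensus.
    """
    pools = defaultdict(set)
    for voter_id, value in votes:
        pools[value].add(voter_id)   # one vote per child per side
    return math.sqrt(len(pools[+1])) - math.sqrt(len(pools[-1]))

# Twenty children piling on, five defending:
votes = [("bully%d" % i, -1) for i in range(20)] + \
        [("friend%d" % i, +1) for i in range(5)]
print(round(moderation_score(votes), 2))   # -2.24, not the -15 of a raw tally
```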

Comments

Brian said:

A possible (if ostensibly complex) tack is not just moderation but meta-moderation. I'm taking Slashdot as the obvious guide here.

One nice thing about meta-moderation is that if a message is presented anonymously and -- to at least a certain degree -- out of context, it's harder to game the system. It's tough to penalize the author or the moderator, since

1. Both of them are anonymous. Of course, a trivial amount of research (just find the message in context) yields the identity of at least the author. But even so,

2. What's being rated is the quality and accuracy of the rating, not the message itself.

Of course it's possible to game this system, like any other. But it may prove a way to put the brakes on, and to collect information on moderation that is useful in aggregate, while forcing users -- in this case children -- to think about what they're doing.

Moderation can be used to "merely" crimp abuse, or also to highlight quality. Meta-moderation is useful to (a) highlight the process in a potentially educational way and (b) crimp abuse of moderation.
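A minimal sketch of the meta-moderation loop Brian describes, loosely modelled on Slashdot's (every class and field name here is invented for illustration):

```python
import random
from dataclasses import dataclass

@dataclass
class Moderator:
    name: str
    karma: int = 0   # fair meta-verdicts raise it, unfair ones lower it

@dataclass
class Moderation:
    moderator: Moderator
    comment_text: str   # shown to the meta-moderator, out of context...
    rating: str         # ...along with the rating it received
    # Author and moderator identities are deliberately not shown.

def metamoderate(moderation, verdict_fair):
    """Record one verdict on whether a *rating* was fair.

    The meta-moderator sees the comment anonymously, so it is hard
    to use the verdict to punish a particular author or moderator.
    """
    moderation.moderator.karma += 1 if verdict_fair else -1

def pick_moderators(users, slots):
    """Hand out moderation duty, excluding anyone whose past ratings
    were judged unfair in aggregate."""
    eligible = [u for u in users if u.karma >= 0]
    return random.sample(eligible, min(slots, len(eligible)))
```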

It's my thought that (a) above might be a way to publicize the Golden Rule -- which, in my experience, people of all ages will follow if they think that others are going to follow as well. If (as your first post notes) high-profile moderation schemes are open to gaming, they might have other side effects, and possibly not all for the worse, either.

Adrian Chan said:

Tom,
I think the answer depends on whether you want to change kids' behaviors, or build an effective moderation system. If it's the former, I think you have to take away the audience - which means ending the game - in order to show that it's counterproductive to make fun of kids in front of their peers... So kill the system when it gets too unfriendly. Build a social "doomsday machine" into it -- when it gets nasty, it stops running for a while. Would that work?
If you want to build an effective moderation system, the challenge is, as Brian mentioned, the competence of those who will game it. Between public exposure and anonymity (taking the public away), is there a middle ground? There might be, technically speaking. But by and large it seems that self-reinforcing and normative constraints are what you want, and "learning" where that line is comes only if you can overstep it.
Just my thoughts,
cheers
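Adrian's "doomsday machine" is simple enough to sketch, assuming some upstream source of a per-message nastiness score (flag counts, a word filter, human reports - the scoring itself is entirely hypothetical here, as are the threshold numbers):

```python
import time
from collections import deque

class DoomsdayRoom:
    """A chat room that stops running for a while when it turns nasty."""

    def __init__(self, threshold=0.6, window=50, cooldown_secs=3600):
        self.threshold = threshold           # mean nastiness that trips the switch
        self.recent = deque(maxlen=window)   # rolling window of recent scores
        self.cooldown_secs = cooldown_secs
        self.locked_until = 0.0

    def post(self, message, nastiness):
        now = time.time()
        if now < self.locked_until:
            return "Room closed. The game stops when it turns mean."
        self.recent.append(nastiness)
        if (len(self.recent) == self.recent.maxlen
                and sum(self.recent) / len(self.recent) > self.threshold):
            self.locked_until = now + self.cooldown_secs
            self.recent.clear()
            return "Room closed. The game stops when it turns mean."
        return "posted: " + message
```

Taking the audience away is exactly what the lock does: the bully's next message never lands, and everyone waits out the cooldown together.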

Nui T. said:

Interesting information on what goes on in children's online worlds. I worry about my daughter's use of the internet, and whether it will help her socially or not. Out of respect for her privacy, I can't monitor what she does on the internet. But I do get tense, because she tenses up every time I pass by while she is using it. I can't help wondering if she is doing something she shouldn't be doing - but then again, it could be something about my relationship with her. Thanks for some new insights.

Yesterday, I read this interesting post on place and blogging. I think it provides another explanation of why kids extend their behavior into their online communities.

Mike Chrysler said:

We had similar travails at The Palace.

When Palace was created by TPI, it followed the cognitive dissonance technique of requiring a credit card and setting up a registration key system. If a Palace member caused enough distress, a "God" (now admin) of a forum could ban their reg key (Palace had/has its own browser). If enough Gods (site owners/admins) complained, the Palace body could ban the user from the entire community.

As Palace developed, so did the levels of moderating control:
http://www.rider.edu/~suler/psycyber/jbum.html
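For what it's worth, the escalating ban Mike describes boils down to something like this sketch (the class and parameter names are my own invention, and the complaint threshold is a made-up number; the real Palace server certainly didn't look like this):

```python
class PalaceRegistry:
    """A sketch of The Palace's escalating reg-key bans: any God can
    ban a key from their own palace, and once enough Gods have
    complained, the key is banned from the entire community."""

    def __init__(self, complaint_threshold=3):
        self.complaint_threshold = complaint_threshold
        self.local_bans = {}     # reg_key -> set of palaces that banned it
        self.global_bans = set()

    def god_bans(self, reg_key, palace):
        self.local_bans.setdefault(reg_key, set()).add(palace)
        if len(self.local_bans[reg_key]) >= self.complaint_threshold:
            self.global_bans.add(reg_key)   # community-wide ban

    def may_enter(self, reg_key, palace):
        if reg_key in self.global_bans:
            return False
        return palace not in self.local_bans.get(reg_key, set())
```

The weakness Mike goes on to describe falls straight out of that data structure: the ban attaches to the reg key rather than the person, so once key generators appeared, a banned user could simply return with a fresh key.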

We also had parental controls for the browser, so that parents could set up separate passwords.

As time moved on and the software was sold to Electric Communities, the revenue source changed from subscription to advertising. I hated those pop-ups. I figured I'd bought and paid for my licence to run my Palace server on server space that I pay rent for. If people come to my palace, I should be in control of the advertising dollar.

When this occurred, anyone could re-register; IP bans were slow at best, and could do no more than stop or slow down a deviant.

To complicate the matter, new hacks of the software were being forged. And even though the parent company successfully sued the authors of such hacks, the software was still on the market.

When the parent company folded there was no legal way to get reg codes for Palace users. An uneasy consciousness set in. In order to get new numbers, key generators were built.

Many of the members of Palace fell away from the community because of the lack of controls.

There are still Palace communities around, and some of the original company's software designers have since started up a new version of the same thing in Manor. But they have distanced themselves from the original Palace owners, who bought not only memberships but server licences as well. Manor doesn't come close to the early versions of Palace in security or features.

Palace and Manor will always be shadows of their former selves. There are very few Gods left, simply because they cannot exercise control over their own domain. The privacy of Palace has all but been removed.

Some good readings from the Palace years can be gleaned here.
