08.19, Wednesday 28 Apr 2004 Link to this post
On social software consultancy | With Jack Schulze, I've been doing a little consultancy at TimeBank recently on how thinking in social software, adaptive design and associated areas can improve a couple of their projects, and tie into their strategic thinking.
I'm not going to say anything about the projects or the outcome of the seminar day at the moment, just describe what social software looks like, deployed in anger, a year in. This is what consulting and teaching non-experts in this area looks like.
Defining sosowa was the source of many posts this time last year. Having to compress that definition into something teachable was a useful process. We're using something like this:
Social software's purpose is dealing with groups, or interactions between people. This is as opposed to conventional software like Microsoft Word, which although it may have collaborative features ("track changes") isn't primarily social. (Those features could learn a lot from social software however.) The primary constraint of social software is in the design process: Human factors and group dynamics introduce design difficulties that aren't obvious without considering psychology and human nature.
This ties nicely with adaptive design, in that social software encourages you to fulfil latent needs first, then embark on not a development cycle but a dialogue with user concerns in which you listen to their emerging needs and implement them in code -- but you have to give users the ability to stretch the system, otherwise you'll never even notice those new needs.
Areas of interest
To design effective social software, you should have some awareness of a number of areas.
- How groups work at different sizes, and at different points in their lifecycle. Articles such as Ross Mayfield's Ecosystem of Networks and Clay Shirky's Communities, Audience and Scale are useful. The Dunbar Number as a Limit to Group Sizes is an excellent critique of indiscriminate use of the Magic Number 150, and says more about different group sizes. I've also picked up a little from Forsyth's book Group Dynamics, which also has an excellent resources page.
- There are different classes of individual, relationship and discussion. On individuals, there's Hearts, Clubs, Diamonds, Spades which says that MUDs have four types of players, and an empirical study leading to a different typology. This is well-known though, with Belbin team roles common in business. Then for relationships we've got friends, coworkers, enemies and so on; and for discussion: broadcast, few-to-few, many-to-one, goal-oriented, undirected. I'm still looking for links that discuss different dynamics here.
- Human psychology and drives, such as politeness. People respond to incentives! And people want to be polite, and want to share, and want to socialise, and act differently if they feel they're in a group (even if they're in a group with a computer, say Reeves and Nass in The Media Equation).
So one of the things Jack and I are doing is producing a Primer in these areas to provide an overview but also a bibliography so the TimeBank team can dig deeper themselves. We've spent quite a lot of time running through their projects and these ideas to learn what's appropriate, where they need extra knowledge and so on.
Clay Shirky's essays (these in particular) over 2003 figure pretty big when the areas of concern to social software are summarised. That's not a surprise, they're great essays. But also, looking back, they're the only standalone, well-written essays there are. Outside the context of the early 2003 discussion, most of the weblog posts just don't make any sense.
So already we've got a way to put the sosowa ideas into practice. I use this example:
We might consider how, in groups of three or more, there's always the possibility of two people being disloyal to the gathering, and how to moderate that behaviour. In the physical world, disloyalty is visible because all interactions within the same context are visible to all local group members, and the disloyalty is moderated through politeness and social pressure. There is a different mechanism to exert social pressure in a peer-to-peer small group (of 6 people, say) than in a broadcast larger group (a classroom type situation). Contrast this with online small groups and we see misunderstandings by email, people being left off cc lists and so on.
Now that's become a solvable scenario. We need mechanisms in the online software to bring in an incentive structure similar to the offline world's.
The single most useful piece of thinking I've been using is Stewart Butterfield's March 2003 post on the devices in social software, mechanisms successful pieces of social software tend to have.
I'll describe each of these, as I see them, critiquing AOL Instant Messenger (just as an example), and then describe how we put them into use.
- Identity | Your identity is shown by a screenname, which remains persistent through time. There are incentives not to change it, like having your list of friends stored on the server and only accessible through your screenname. Having a persistent identity is more important than having one brought in from the physical world.
- Presence | Presence is awareness of sharing the same space, and this is implemented as seeing when your friends are online, or busy. AIM isn't particularly good at group presence and visibility of communication, although other chat systems (such as IRC and early Talkers) use the concept of "rooms" and whispers.
- Relationships | AIM lets you add people as buddies. From that moment, their presence is visible on your screen. This is a relationship: you've allowed them to have an effect on your environment. Not terribly nuanced, however.
- Conversations | Conversations are implemented as synchronous messaging. There's a difference between messaging and conversations. Messaging is just an exchange of text with no obligation, but conversations have their own presence and want to be continued. AIM does this by having a window for a conversation. It's difficult to drift out of it: it hangs there, requesting you continue. Contrast this with email, which is often just messaging, where conversations die easily.
- Groups | AIM isn't great at groups. Although you can have group chats, the group is transient. People have more loyalty to a group when there's some kind of joining step, when they've made some investment in it. Entering a window just doesn't do that, and there's no property of the group that exists outside individual users' accounts.
- Reputation | Reputation is used more in systems which allow meeting new individuals. AIM's simple version of this is "warning". Any user may "warn" any other user. A user's total "warn" level (a figure up to 100) is shown to everyone they communicate with. Unfortunately, it's not a trustworthy reputation system, and reputation is notoriously difficult -- but humans are great at dealing with it themselves, given certain affordances: persistent identities, and being able to discuss those identities with other people. AIM's simplistic relationship system makes reputation not so important though.
- Sharing | People like to share. With AIM, sharing is often as simple as giving a friend a link to follow. Other systems, such as Flickr, are about sharing photographs. These act as small transactions that build genuine group feeling.
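As a toy illustration of the reputation device, the warn mechanic can be sketched in a few lines of code. This is a minimal sketch only: the class name, the per-warning increment and the decay behaviour are my assumptions for illustration, not AIM's actual numbers.

```python
# Hypothetical sketch of an AIM-style "warn" level: a public figure,
# capped at 100, shown to everyone the user communicates with.
# The increment and decay values are assumptions, not AIM's real ones.

class WarnLevel:
    CAP = 100        # warn level never exceeds this
    INCREMENT = 10   # assumed bump per warning received

    def __init__(self):
        self.level = 0

    def warn(self):
        # Any user may warn any other; the level rises but is capped.
        self.level = min(self.level + self.INCREMENT, self.CAP)

    def decay(self, amount=1):
        # Assume warn levels fade over time, so reputation isn't permanent.
        self.level = max(self.level - amount, 0)


alice = WarnLevel()
for _ in range(12):
    alice.warn()
print(alice.level)  # 100 (capped)
alice.decay(5)
print(alice.level)  # 95
```

Even a sketch this small makes the design problem visible: nothing stops users ganging up to warn someone, which is exactly why it's not a trustworthy reputation system.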
Clay's essay A Group Is Its Own Worst Enemy provides a great overview of many important concepts here, especially on having a cost to join (which provides the feeling of membership). Incidentally, knowing this is a fantastic rule of thumb: Forums which require a joining step are better than forums that feel like window-shopping. It's a more social design decision on the same level as not putting the reply box at the top (so the user has to read the whole conversation).
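The cost-to-join idea can also be sketched: a forum that gates posting behind an explicit joining step, so membership is a deliberate act rather than window-shopping. All the names here are hypothetical, purely for illustration.

```python
# Hypothetical sketch: posting requires an explicit joining step,
# giving members a small investment in the group.

class Forum:
    def __init__(self, name):
        self.name = name
        self.members = set()
        self.posts = []

    def join(self, user):
        # The joining step: a deliberate action that creates membership.
        self.members.add(user)

    def post(self, user, message):
        if user not in self.members:
            raise PermissionError("join the forum before posting")
        self.posts.append((user, message))


forum = Forum("volunteers")
forum.join("alice")
forum.post("alice", "hello")   # fine: alice has joined
# forum.post("bob", "hi")      # would raise PermissionError: bob hasn't joined
```

The interesting part is the friction itself: the `join` step does almost nothing technically, but it's the small cost that produces the feeling of membership.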
Putting this into practice
These seven items act as a tremendously useful framework to critique and comment on social software. We combined them with a further process:
- Define the goals of your software (the strategic goals, and the user goals)
- Consider incentives: For goal-oriented systems, why do people take part? Why do they come back?
- Consider moderation: If you have a disloyal user, how do you stop them starting fights and poisoning the community? If you block them, you might enter an arms race.
We took a dynamic approach to this. Simple-minded design only gets a person to use the site once, because it's easy to operate or whatever. A more dynamic approach is to consider the consequences of use, and employ mechanisms such as the fact that people like sharing to design in the appropriate incentive and moderation feedback loops.
And dynamic's the key. We've attempted to understand user flows, and apply the ideas above to different types of user within the system, using one or two of Matt Jones' UCD tricks (they're handy to use and easy to teach). We've been teaching the technical and other project team members to use these tools and ideas to develop a common vocabulary, and to use social software ideas to make simple social tools more likely to succeed.
(So, for example, we can see that some users aren't going to be comfortable approaching people online - they're educated, highly intelligent folk who aren't good with computers - then we can use mechanisms such as slowly building relationships (making use of presence) and reputation to build an approach with a gradual gradient.)
And then of course it all gets into adaptive design and development processes, and also onto the specific small touches we recommended for the individual projects, but I'll get onto those another time.
I've put many of the social software links we'll be using in the Primer in my links directory.
Of course half-way through gathering these, the Many-to-Many Social Software Reader (and Timeline) launched. Handy, but unfortunately not completely appropriate -- I think they're aimed at providing a history and resource for practitioners, and not a quick way in to pick the most appropriate ideas for people in other fields. The latter is what Jack and I are providing.
Comments always welcome.