First, then, let’s summarise. I hope I’ve managed to convince you, by way of rambling and a few silly ideas, that it’s fruitful to apply the sensory model when we’re building interactive systems.
Reception isn’t enough. We can’t deal with the raw data stream. We have to have perception. Okay, how have we seen reception become perception?
Expectations create structure: 2d becomes 3d
We don’t look at the raw stream, we look at established patterns in the flow—just as we see in 3d even though only 2d information comes into our eyes.
As designers, our job is to find and present common patterns. Give me a navigable sitemap, not a bunch of links. Give me a list of ongoing conversations on my phone, not a text message inbox and a dumb addressbook. Um. That was idea #9.
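To make idea #9 a little more concrete: the underlying pattern is a fold from a flat inbox plus a separate addressbook into per-person threads. A minimal sketch in Python; the Message shape and the ADDRESSBOOK mapping are assumptions for illustration, not any real phone API:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Message:
    sender: str      # phone number
    text: str
    timestamp: float

# Hypothetical addressbook lookup: phone number -> display name.
ADDRESSBOOK = {"+4479460001": "Alice", "+4479460002": "Bob"}

def conversations(inbox: list[Message]) -> list[tuple[str, list[Message]]]:
    """Fold a flat message inbox into per-person conversations,
    most recently active first."""
    threads: dict[str, list[Message]] = defaultdict(list)
    for msg in sorted(inbox, key=lambda m: m.timestamp):
        name = ADDRESSBOOK.get(msg.sender, msg.sender)
        threads[name].append(msg)
    # Order conversations by the time of their latest message.
    return sorted(threads.items(),
                  key=lambda kv: kv[1][-1].timestamp, reverse=True)
```

The point of the sketch is that the phone already has both halves of the data; the conversation view is just a different established pattern laid over the same stream.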
Focus, and peripheral vision: allocating attention
When I’m paying attention, that’s the feeling of allocating my brain’s processing power to something. You might be paying attention to this presentation, or to IRC. It’s voluntary, so let me decide what’s important in the information, and let me look closer at it.
We look closer either by paying attention, inside the brain, or by orienting or moving closer, with the body in the world. You select something at the expense of other things.
The flipside of focus is peripheral vision. Let me ignore what I want to ignore in the information, but have it leap to my awareness if something tremendous happens. Let me place the information flow off to the side.
If I’m writing loads in my text editor, my email client should go into peripheral vision mode, and only alert me to new emails if I happen to bring it to the front, or if I get an email to my client contact address. Why not? That’s idea #10.
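Read as a rule, idea #10 is just notification gating keyed on focus state plus an exception list. A sketch, with hypothetical names (should_alert, vip_addresses); a real client would wire this up to window focus events and its filtering rules:

```python
def should_alert(email_sender: str, client_in_focus: bool,
                 vip_addresses: set[str]) -> bool:
    """Peripheral-vision mode for an email client: stay quiet while
    the user is focused elsewhere, unless the sender is on the
    'leap to awareness' list (e.g. a client contact address)."""
    if client_in_focus:
        return True           # foreground: normal behaviour
    return email_sender in vip_addresses  # background: only the tremendous
```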
Treat different levels of meaning differently
We have multiple distances from our body.
There’s the surface of the body itself. That’s the current webpage, using the earlier model.
There’s the environment at large, which is the furthest away. That’s the rest of the web.
But there’s also the proprioceptive distance: everything that’s in arm’s reach. This is everything that you could touch next… or could touch you. We treat it specially.
If somebody stands within that range of your body, it’s disconcerting, sexy… maybe.
But we rarely have this level, for some reason, with technology. So give me an “I must call them back” button on my mobile phone, to keep people in this zone, and subsequently highlight their names in my addressbook until I do get in contact.
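A sketch of idea #11, assuming nothing about any real phone OS: the whole feature is a flag per contact that persists until cleared, and the function names here are mine:

```python
import time

# Hypothetical store of people the user has promised to call back:
# contact name -> when the "I must call them back" button was pressed.
must_call_back: dict[str, float] = {}

def flag_for_callback(contact: str) -> None:
    """Pressed after a missed or cut-short call: keep this person
    'in arm's reach'."""
    must_call_back[contact] = time.time()

def addressbook_highlight(contact: str) -> bool:
    """The addressbook renders flagged names prominently until
    contact is actually made."""
    return contact in must_call_back

def contact_made(contact: str) -> None:
    """Clear the flag once the user gets back in touch."""
    must_call_back.pop(contact, None)
```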
Another idea: preload the blogs I go to every day. Let me associate blog URLs with the names in my addressbook, and if I visit the blog of someone I know, give me an easy way to respond to one of their posts by email, in the browser. Let me keep the conversation going easily with anyone standing nearby.
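Idea #12 is, mechanically, a lookup from blog URL to addressbook entry plus a generated mailto: link. A hedged sketch; the CONTACTS table and reply_link helper are invented for illustration:

```python
from urllib.parse import quote

# Hypothetical addressbook entries linking a person to their blog.
CONTACTS = {
    "https://example.com/blog/": ("Alice", "alice@example.com"),
}

def reply_link(blog_url: str, post_title: str) -> str | None:
    """If the blog belongs to someone in the addressbook, return a
    mailto: URL the browser can offer as a one-click reply."""
    entry = CONTACTS.get(blog_url)
    if entry is None:
        return None
    name, email = entry
    subject = quote(f"Re: {post_title}")
    return f"mailto:{email}?subject={subject}"
```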
Those were ideas #11 and #12, by the way. I’m going to stop this now. No more ideas.
There are more tools we have as part of perception… coincidence detectors, for when events happen across senses. Probability measures, to tell whether something’s unusual or breaking the physics of this world. A rhythm extractor. A device to tell the rate of change. An abstractor, so we can give simple names to complex things, and tell other people about them. And we generally also have the ability to tell when other people are using their senses on us.
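To give just one of these a concrete shape before moving on: a “probability measure” can be nothing more than an online estimate of what’s normal, flagging anything that lands far outside it. A toy sketch using Welford’s online mean/variance; the class name and the three-sigma threshold are my own choices:

```python
import math

class UnusualnessDetector:
    """Track a running mean and variance of a numeric stream
    (Welford's algorithm) and flag values that land far outside
    what's been seen so far."""
    def __init__(self, threshold_sigmas: float = 3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold_sigmas

    def observe(self, x: float) -> bool:
        unusual = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            unusual = std > 0 and abs(x - self.mean) > self.threshold * std
        # Welford's online update for mean and variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return unusual
```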
I don’t want to go into any more of these.
All I want to say is: these are the things to pay attention to when you’re letting people interact with information. These are the things you need to build into your technology so users are not just bombarded with a raw stream of data that they have to consciously interpret.
Okay, next point: Why now?