Attenuation is something I'm pleased to see is a theme of O'Reilly's ETech 2006. It's trite to talk about information overload simply because attenuation is more important than that: Any time you have to make a choice about anything is a time when you need to attenuate, and maybe you could externalise that method of choice into the system itself; any time there's too much complexity to be understood immediately is a time when time-based attenuation can help (sometimes we call this "teaching"). Maps are a wonderful form of attenuation, for pre-existing information. Another is the taking of a position in a landscape of information flow: You place yourself where peer- or authority-selected information will come by--we do this by choosing to read such-and-such a newspaper instead of a different one. Being concerned with attenuation is being concerned with the algorithms, the co-production of the algorithms with the people who sit in the information flows, the design factors (so that some information flows automatically hit your brain at a higher interrupt level)... It's a big topic.
There's a ton of information coming in via our senses. Not just perceptions of light and sound, but patterning, memory, associations, possibilities, more. The mechanisms to whittle that down to the very few packets that reach conscious perception (and just below that) are impressive indeed, and solve a real problem: Given limited processing capacity, and even more limited capacity for action, what should be processed? The feeling of the brain allocating processing time to something is what we call attention. There are automatic routines to push information up to be attended to, to pre-allocate attention--and to de-allocate it. There are ways to deliberately ignore colours, shape and movement, and your brain will help out by ignoring things it guesses you want ignored, too. It's a job-share the whole way, between consciousness and automaticity, with attention being parcelled out in a few large and many tiny chunks. The quirks of these heuristics make up much of the material in Mind Hacks, and also comprise my talks (and work) on user interfaces (in short: if it's important, use tricks to bump the information up to conscious attention faster; if it's not, don't). Before I started the book, the brain felt like a device that pulled information from the environment. At the end, I saw it was something that was flooded with information, and would change its position in the environment to get flooded with the right kind of information, and continuously slim down the information load, keeping as much information outside itself as was reliably possible. These methods were so impressive - they seemed to sum up the job of the brain so well - that my co-author and I asked for a quote to open the book (Rael said yes), and here it is, from Theodore Zeldin's An Intimate History of Humanity: "What to do with too much information is the great riddle of our time."
Where else is attenuation exhibited? In discussing the market, Herbert Simon (in The Sciences of the Artificial [my notes]) says that market processes "commend themselves primarily because they avoid placing on a central planning mechanism a burden of calculation that such a mechanism, however well buttressed by the largest computers, could not sustain."
How? This is crucial: "Markets appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally--that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing." [p34; my emphasis.]
The concept of locality is a key one for information filtering. The www used to be spatial - you could only navigate using deliberately placed links connecting pages together - but then along came search engines, which collapsed the whole information space into a universe only a few keywords in diameter. Different kinds of distance, however, can be invented: Distance is the measure over which meaning (or signal) attenuates, and this measure can be geographic distance, or cultural distance, or software abstraction layers if you're - say - talking about the meaning of a symbol. Reintroducing some form of distance dimension into an information space brings back locality: locality is the place in which all signal is shared--the signal gets weaker and eventually disappears the further from it you go. If people can position themselves in a space as they choose, they simultaneously gain access to their chosen signal and - by their position - provide information to other people who are making similar choices.
In Web 2.0, these distances are found in social networks, tag networks, relatedness, and regular linking. You establish a position in these networks just by having a presence (by acting), and you encounter relevant information. This is the good side of the echo chamber. You don't need to read 100 blogs if the people near you do. Or, in old media, it's what tv channels are for.
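To make that concrete, here's a minimal sketch (entirely my own illustration, nothing from ETech): it treats "distance" as hops in a made-up follow graph and attenuates each item's signal exponentially with distance, dropping whatever falls below a threshold. The decay rate and the cut-off are invented numbers; the same shape works with tag overlap, or any other distance measure you care to define.

```python
from collections import deque

def hop_distances(graph, start):
    """Breadth-first search: number of hops from `start` to each reachable node."""
    distances = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour not in distances:
                distances[neighbour] = distances[node] + 1
                queue.append(neighbour)
    return distances

def attenuated_feed(graph, me, items, decay=0.5, threshold=0.2):
    """Weight each item by its author's distance from `me`; drop what attenuates below the threshold."""
    distances = hop_distances(graph, me)
    scored = []
    for author, item in items:
        d = distances.get(author)
        if d is None:
            continue  # no path at all: the signal never reaches us
        signal = decay ** d  # signal halves with every hop when decay=0.5
        if signal >= threshold:
            scored.append((signal, author, item))
    return sorted(scored, reverse=True)

# A toy follow graph: I follow alice and bob; bob follows carol; carol follows dave.
graph = {
    "me": ["alice", "bob"],
    "bob": ["carol"],
    "carol": ["dave"],
}
items = [("alice", "post A"), ("carol", "post B"), ("dave", "post C")]
print(attenuated_feed(graph, "me", items))
# [(0.5, 'alice', 'post A'), (0.25, 'carol', 'post B')] -- dave has attenuated away
```

Run it and dave's post never reaches you: by the third hop it has attenuated below the cut-off, which is the echo chamber doing its useful work.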
Anyhow. Locality, arising from an established measure of distance (of whatever kind), is a reasonably well-understood way to allow fairly passive attenuation, and we can use fairly traditional design tricks to help out: landmarks, instant feedback, reinforcement, hiding the overview (standing on the platform of a Tube station, you see only the current line - that part of the network which is local - and not the whole map). The novel part is establishing the distance measures.
Another mechanism for attenuation is what Harold Morowitz calls selection algorithms or pruning rules (The Emergence of Everything). Out of colossal possibilities at every layer of emergence, somehow only a few things happen. The Pauli Exclusion Principle forces the universe to have structure by preventing two things from being in the same place at the same time. And natural selection is the pruning rule at the level of living species: Evolution isn't computed; it's the mechanism of selection itself (gosh, sounds a bit like attention).
Says Morowitz (p131): "We have from time to time mentioned the concept of 'niche.' This ecological construct is a part of the principle of competitive exclusion."
And (p134): "Competitive exclusion is a major pruning rule in the emergence of biological taxa."
--which you can see everywhere. The evolution of open source code takes just this route: There's often no long-distance planning, just a hundred growth points directed by the heuristics of a large number of individuals, who thereby impede the growth of similar features. And it creates platforms: If the Arabic numbering system didn't act as a pruning rule (via competition) on other numbering systems, we wouldn't be able to rest so much on it as a platform. Competitive exclusion is platformisation, or, if you like, your usual network effect. A good way to attenuate information is to move to a just-in-time approach for anything that can be reliably close at hand (anything which is a platform). For example, if you always have a watch on your wrist, you as-good-as know the time, because you've expanded your memory into the environment. This is what's termed extelligence.
On the www, why bother reading all the news sites when you can depend on blogs, as a platform, to filter the best for you--or indeed, a site to pick out the best from those blogs? The social attention in a particular news area will then point you to the best news story. But competitive exclusion is only one of the pruning rules that Morowitz mentions. How about the others?
In my talk on The 3 Steps: The Future of Computing, I suggested using reverse unit testing in programming: a function call would state what answer it expected to get, and methods that didn't match that pattern would be pruned away. It would be a way of growing a large code-base without having to deliberately build the network of calls. How about in databases, to follow the way proteins find each other? Look for answers that bind to a particular shape, instead of issuing a specific query. How would that look, on the www? I'd like to use a real-time monitoring service like PubSub, only not to catch keywords or URLs. Instead I'd like to catch particular patterns of inter-website conversation, like clusters of three or more posts above a certain connectedness level... and then find only the most popular links in those. That'd be a list not of the most linked-to sites (what we call the most popular), but of the most provoking ones.
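For what it's worth, here's a minimal sketch of how that reverse unit testing might look (the registry, the decorator and the pruning policy are my own inventions for illustration, not anything from the talk): the call states the shape of the answer it wants, every registered candidate is tried, and candidates whose answers don't bind to that shape are pruned away for future calls.

```python
# A sketch of "reverse unit testing": the caller states the answer it expects,
# every registered candidate is tried, and candidates that don't produce a
# matching result are pruned from the registry.

candidates = []

def candidate(fn):
    """Register a function as a possible implementation."""
    candidates.append(fn)
    return fn

@candidate
def add(a, b):
    return a + b

@candidate
def concatenate(a, b):
    return str(a) + str(b)

def call_expecting(expected, *args):
    """Call every surviving candidate; prune the ones whose answer doesn't match."""
    survivors, result = [], None
    for fn in candidates:
        try:
            value = fn(*args)
        except Exception:
            continue  # a candidate that fails outright is pruned too
        if expected(value):
            survivors.append(fn)
            result = value
    candidates[:] = survivors or candidates  # keep at least something alive
    return result

# The call states the shape of the answer it wants: a number, not a string.
print(call_expecting(lambda v: isinstance(v, int), 2, 3))  # 5; concatenate is pruned
print(candidates)  # only `add` survives for future calls
```

It's the same move as the protein analogy and the PubSub filter above: address by the shape of the answer, not by the name of the thing that produces it.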
So, attenuation. We use it in: filtering information; providing distance and locality for user control over what they receive, and feeding that back into the system; providing semi-automatic mechanisms to bubble up information; distributed processes to give integral results over large data-sets (the stock-market), or no results at all (other markets); emergent selection algorithms of exclusion, or of form. We have many places to look for inspiration, and we can design to operate with familiar patterns, since there is already human use of attenuation.
I'll add one last one, because it's something that is especially human: Implicature. Conversational implicature is when you prune (and adapt) what you say, according to what you know your conversational partner already understands about you. They'll assume you're following certain maxims, and because of that platform of understanding, you can be much more meaningful. For example, if I say I have a dog, that's essentially meaningless unless you assume I'm following the maxim of relevance--that is, I'm saying it for a reason. Only by presuming I'm being meaningful - that the statement passed a certain threshold before I uttered it - can you understand it as something important, or surprising, or silly. Only by presuming I'm being meaningful does my giving you an mp3 mean it's a gift, not a so-called viral plant from a marketing drone. Mutual implicature allows ever greater flow of meaning, and it's why apparently genuine comments left as marketing, not as gifts, are so poisonous.
Implicature (more links) isn't possible in a system of one-off exchanges. It requires conversation, which itself requires repeated, fluid interaction, identity, and all the attributes of social systems we take for granted: visibility, shared values (at least locally shared, if that isn't a tautology since we redefined locality), and ways of ensuring conversations don't break down, like plausible deniability through noise. Social software ideas, really.
In a www where information is published and consumed in separate steps, the implicature that made the early constellation of blogs so compelling (because conversations weren't formalised into these two parts) has ebbed away. Conversations don't have to be literal words between two people, but they do have to include some kind of directedness of the publishing (or utterance), and some acknowledgement and visibility to the consumption (the listening). As good as the social networks and tags in sites like del.icio.us and Flickr are, we need more vias and more back-at-yas to shape the flow and promote the conversation, and therefore the implicature, that is necessary for the personal, meaningful, helpful attenuation we rely on in everyday life.