The Extended Phenotype is full of good concepts. Here's a favourite, from where Dawkins discusses differing opinions of what should be considered an "individual" for plants:
Janzen (1977) faces up to the same difficulty, suggesting that a clone of dandelions should be regarded as one 'evolutionary individual' (Harper's genet), equivalent to a single tree although spread out along the ground rather than raised up in the air on a trunk, and although divided up into separate physical 'plants' (Harper's ramets). According to this view, there may be as few as four individual dandelions competing with each other for the territory of the whole of North America. [p254]
Korbo, Lorbo, Jeetbo.
Blue is the colour of an active but unfed projector, before the presentation starts, and of dead or absent TV channels. It's a common colour, but I never look at it. When it's not invisible, it's rich, almost purple; the blue of dusk, or of a hot, deep, dusty, early morning. It's not the blue of water or the ocean, and somehow, given the Battle for Blue [thanks Phil!; text], not corporate either. Another oddity: it's bright and highly saturated, yet we never notice it. It fills. Like the blue of the hyperlink (which is appropriate despite being arbitrary [1, 2]), it speaks to me:
This blue is the colour of the virtual. This blue is the colour of possibility, becoming, and potential. This blue is the sky that rocket-ships the weight of cities will lift into, and of the abyss. When the cable is unplugged, this is the colour when the universe floods in. This blue is Cerenkov radiation, the energy that high-velocity sub-atomic particles shed as light, when they brake through dense material.
As the virtual becomes actual, the whiplash it experiences as it attains mass and physicality shoots blue light ahead of it, into its future destination, onto our screens, onto us.
1. What is truth?
Truth is a matter of plausibility.
2. Why is the sky blue?
The sky is blue because so many other things are not blue.
It is fantastic that something so large and so dominant in our lives has such a bright colour. Shouldn't our visual perception centre itself around it, so that we see only a neutral shade? This would be the case if plants and animals didn't contain such luminous greens and saturated reds.
All of these colours have come to be perceived as roughly equal (although with a slight bias, because the Sun is green), and so when they are mixed together we see the colourless colour we expected for the sky.
3. Why do women from the USA have square jaws?
During the colonisation of the North American subcontinent, Europeans at home were living in comparative ease, while the Pilgrims and pioneers of the New World faced great difficulty.
During the first hard winters, the more delicate settlers had a smaller chance of staying alive. This evolutionary pressure produced the white North American's characteristic sturdiness over the first few generations, and it has remained since.
4. Why do people blush when embarrassed?
Humans are animals which evolved in a social and hierarchic system.
In a hierarchic society, members will necessarily change position, and some mechanism must exist for this to occur: some kind of confrontation, or fighting. As an aside: fighting seems to be a good way of establishing levels; it uses physical strength as well as tactics and quickness of wits, all of which were highly important when humans were hunter-gatherers on the savannah. The emphasis has changed somewhat now.
However, a creature which fights to the death to find its position in society is not going to survive to take that position, nor to pass on its genes. A kind of formalised confrontation evolves, and can be seen in many species. Although injuries still occur, some kind of surrender signal is in place. For example, dogs roll onto their backs to show they will stop fighting.
When people are ready for confrontation, the 'fight-or-flight' reflex activates. The body moves blood to the muscles to make them faster and more effective (this is why your stomach contracts when you are scared; the blood has been redirected).
In a ritualised fight between two people, the only true sign of submission would be the cancellation of fight-or-flight. An arms race of false submissions would probably occur before the true sign was settled on, since evolution would favour the person who claimed to submit only to make a new attack when the other's guard was down.
The only sign taken as genuine would be a direct indication that the blood has been moved away from the muscles and the body is not prepared for action - that is, blushing. When blood is at the face, it cannot be powering the muscles.
When we are embarrassed we have been involved in some kind of confrontation and do not want it to continue. The blush is the remnant of the formalised submission of our ancestors.
5. Why do some people wear clothes?
To keep warm. Why do some societies uniformly wear clothes even when it's warm?
The fact that individuals don't feel 'bad' but instead feel 'ashamed' (shame or embarrassment is a group phenomenon) when naked implies that the wearing of clothes is a social more. (Aside: That small children don't feel that shame implies that either the ability to feel shame doesn't develop until later in life, or that the pressure against being naked is memetic rather than genetic and it takes time to be socialised.) Now the question is: How did clothes move from being an environmental necessity to a social construct?
Clothes have been the peacock's tail of human society (that is, a person may demonstrate their status by showing that they are able to spend resources on unnecessary rather than essential items). Society needs this kind of competition to function and self-organise. A fit (in the Darwinian sense) society will allow no alternatives, so the social pressure is not against being naked, rather it is against non-participation.
People wear clothes because to not is to deny their social nature.
(Originally posted at interconnected.org/truth in 1999. The final two are too explained, too plausible, and not playful enough. The two before that are how I should have continued the series. I keep meaning to write a children's questions & answers book, and should get back to it.)
Pricing. When I started freelancing, the hardest thing to do was figuring out how much to charge. After a little advice, I decided how much I wanted to earn in a year (based on previous salaries), and assumed I would be paid to work only 50% of the working days available (the remainder being holiday, time spent on overhead and looking for new work, and to compensate for benefits). I set that as my day rate.
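That first back-of-the-envelope model is simple enough to write down. Here's a minimal sketch in Python, with hypothetical figures (the salary and rates are invented, not my actual numbers):

```python
# Day rate from a target salary, assuming only ~50% of working days
# are billable (the rest go to holiday, overhead, and looking for
# new work). All figures are illustrative placeholders.

target_annual_income = 40_000   # hypothetical target, in pounds
working_days_per_year = 260     # 52 weeks x 5 days
billable_fraction = 0.5         # only half the days actually earn

billable_days = working_days_per_year * billable_fraction
day_rate = target_annual_income / billable_days

print(f"Charge about {day_rate:.0f} per billable day")
```

The 50% billable assumption is the load-bearing part of the model: halving your available days doubles the rate you need to charge to hit the same annual figure.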
That pricing model, however, puts the control of what work I do with whoever is paying. The first thing that happened was I gave myself two different rates. Work that would progress my career (writing, work for charities, and work I wasn't already established in) was one, lower rate, and everything else was the regular day rate or a little higher. Regarding how I spent my time, I tried to keep this to half and half.
By the time I finished freelancing, I had five pricing bands, based on a game Es and I play while we drive through the New Forest. On that drive, we usually see horses (it's a beautiful part of the country, and there are feral horses allowed to roam everywhere), and the game is to guess how many. You can see None, obviously, and the three next bands, increasing in number, are: Few, Some, and Many. There's a fifth one, very rarely used, which is Legion.
The thing is, it isn't appropriate to think about numbers of horses. When we drive through the Forest (which is heath; there are very few actual trees in the New Forest. Also it's 900 years old), we may see members of three or four herds of ponies. If we see only two or three actual ponies, that means there's likely a herd not far away. So let's say we see three ponies, and they're not together but still close. That's only one herd, and not much of a glimpse, therefore Few. But if the three are scattered (some early in our drive, some much later), that could be the edges of two or even three herds, so that's more like Some (or Few-becoming-Some).
The difference between Many and Legion is similarly nuanced. It's not enough to see ponies at every moment of the drive. You need to see cows and preferably some deer too, because it needs to be supernormal horseness, like a cubist portrait, to push up to Legion, and generic animals will do that.
This banding system is really hard to break down, and it doesn't even slightly map to numbers--but it's really natural to use. We're not measuring the number of horses we see: We're measuring the cross-section of the idea of horseness stimulated in our minds. We're reverse engineering the way we think.
As I was saying, by the time I finished freelancing, I used those bands. I did small bits of work for None for fun and favours. Regular work was charged Some, and that reduced to Few for especially interesting work, worthy work, and establishing work. I used Many to charge for work I have expertise in, when it was for people who could afford it (and they were getting a lot of value out of me) and when it was not the kind of work I wanted to continue doing forever. I never used Legion, but that's for when there's also a lot of money being made out of my involvement. This worked because, although I couldn't really describe how I made decisions, I could choose very quickly, when I met a client, at what rate I would charge for the work.
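If you squint, the system behaves like a lookup table from a gut-feel band to a rate, with the judgement happening before any number is mentioned. A sketch of that shape, with invented placeholder rates (none of these are my real prices):

```python
# Map qualitative pricing bands to day rates. The band is chosen by
# gut feel on meeting a client; only then does it become a number.
# The rates here are invented placeholders.

BAND_RATES = {
    "None": 0,       # fun and favours
    "Few": 300,      # interesting, worthy, or establishing work
    "Some": 450,     # regular work
    "Many": 700,     # high-value work for clients who can afford it
    "Legion": 1200,  # reserved: a lot of money is being made off my involvement
}

def quote(band: str, days: int) -> int:
    """Price a piece of work from its band and estimated days."""
    return BAND_RATES[band] * days

print(quote("Some", 10))
```

The point of the dict is that the hard part (choosing the band) never touches arithmetic; the numbers only appear at the end.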
Then I started, with Jack, Schulze & Webb, which is a whole other kettle of fish. There's office overhead, benefits, and client relationships to consider, and projects have a different nature to days. Suddenly we have to think about risk, scope creep, time estimates, and opportunity cost. It's not something we have to think about too much at the moment, but it's still interesting to look at different models.
For example, there's the Project Triangle, which is a resource allocation perspective on the problem: There's a trade-off between fast, cheap, and features, and money (to an extent) buys you leeway.
And there's a civil engineering company I've been told about, which internally audits all its projects for profitability, interestingness, and how easy the client is to deal with. If it's not marked well on two of those, future projects are turned down.
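That audit is a two-out-of-three rule. The criteria are the firm's, but the shape of the decision is easy to sketch (the function name and boolean encoding are my own guess at how it might look):

```python
def take_future_projects(profitable: bool, interesting: bool,
                         easy_client: bool) -> bool:
    """Accept more work from a client only if a past project scored
    well on at least two of the three audit criteria."""
    return sum([profitable, interesting, easy_client]) >= 2

# A dull but profitable project with a pleasant client passes;
# a fascinating loss-maker with a difficult client does not.
print(take_future_projects(True, False, True))
print(take_future_projects(False, True, False))
```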
It's this second one that interests me. Not using money as a way of explaining how much a project costs, but using money as a way of influencing what kind of work you get, and as fair compensation.
Work must be fun. If I didn't believe that, I'd have a highly lucrative job in the City. And work must be fair. I don't want to charge people too much for what I do for them, or too little. There must be a lot of different work, because our expertise comes from using a large variety of ideas and skills. Also, work creates more work: You get what you do. I'm on a trajectory away from programming, for example. I still do it, but I don't discount my day rate for programming work. The last thing: Money buys freedom. It's good to take well paid jobs, because that gives you the time to pursue the less obvious ideas, and room to develop your own products.
This means that my freelancing banding system still works: Charge more if there's a lot of value being extracted, if the work is dull, and if there's risk involved; charge less if the work takes the company in good directions, if it's really interesting, and if it's for people we like.
But what I'm realising is that there's something else involved, and it's the reason consultancies usually charge so much: We don't have to be doing this. We could be making our own products.
When you work on an artefact, be it a product or a company, its value increases faster the longer you work on it. Two months of effort is more than twice as valuable as one month of effort. When you bring something to the table that you've worked on for a year, it's not only a year's worth of accumulated value, it's also scarce, because people very rarely work for that long on a single thing.
Every day, when I work on projects directly for my company, I'm building something that'll get scarcer and more valuable the longer I do it. And when I do that for a client, I'm helping to build a pearl that they will keep. The argument is that they can compensate me only for the work I do, and not for this future value, because they're the ones carrying the risk. (Although if it's a long project, I'm actually doing it at the expense of my own company, which thrives on multiple, short projects.)
Okay, this is fair, and it explains one large factor of my fees: I'm being paid to not be an entrepreneur.
But realising this, it lets me tweak the fee structure, and I wonder whether we should be trying out more experimental pricing models. Let's say we provide early interaction design work and strategy (which we do, it's one of the strands). For certain clients, and certain interesting projects, we should offer to be paid in risk instead of cash. If the client does well, we share in the future value. If not... well, we have an investment in making sure that doesn't happen.
That is - and I'm still thinking this one through - Schulze & Webb could work with clients who don't have much cash (startups, in other words), and provide discounted services in return for a small amount of equity.
Call it "venture interaction design."
New Puritans are the subject of Just Say 'No' [via philgyford] in the Observer magazine. So:
a New Puritan does not binge drink, smoke, buy big brands, take cheap flights, eat junk food, have multiple sexual partners, waste money on designer clothes, grow beyond their optimum weight, subscribe to celebrity magazines, drive a flash car, or live to watch television. There's another aspect to this trend:
Arguably, these personal codes of conduct would be an arresting enough story on their own, but the New Puritan's curbs must also be extended to other people's behaviour, and wherever possible enshrined by legislation - for New Puritans do not fear the nanny state.
Which sounds like it's the end times. It could be the first appearance of what Strauss and Howe term the Hero generation. Generations, they say, occur in cycles of Prophet, Nomad, Hero, Artist. The current Hero generation consists of those folks born from 1982 to the present, and they are
conventional, uber-powerful, homogeneous and devoted to serving the state, having a deep trust in authority and being the perfect soldiers for a major war.
Contrast that to the cynical Nomads preceding them (Generation X), or the 1960s Prophets before that. And if they are indeed a civic Hero generation, that means our society is currently experiencing an Unravelling (alienation, wild times), and will shortly - pretty much around now - be entering a Crisis. New value systems will replace the old order with a new one. Previous Crisis periods include the American Civil War, and World War II--in a Crisis, everything is up for grabs, and the Hero generation are the ground troops.
In The Collapse of Complex Societies, Tainter lists some characteristics of collapse, which are reproduced in this essay, What Is "Collapse". There are two themes in that list I want to pick out. One is a move to less overall coordination and control, and another is
less investment in the epiphenomena of complexity such as monumental architecture.
On the first theme: One point Tainter makes in his book is that collapse isn't an accident; collapse isn't some failure mode like taking a wrong turn off a cliff. Collapse is a reduction in complexity that makes economic sense. As much as I admire the current view that there is no centre, no authority, no archetype for communities, I wonder whether this pursuit of heterogeneity is exactly the reduction of control hierarchy that has characterised other social collapses.

To put it another way: It currently makes economic sense for us to work toward a less Fordist, more networked world. We have large corporations working towards cellular organisation, each cell following rules instead of top-down management. We have to do this because, even with computers, we don't believe we can handle the complexity of such huge machines. It makes economic sense. Oh dear.

By their nature, societies are near-decomposable. They are multiple units that have hooked together and become inter-dependent for good reasons. What if we've gone through a reconfiguration, and our units are now - because of telecommunication - above geography? What if there's been a shake-out, and these units can decompose into self-sufficient groups? The libertarian tendency is just another aspect of this trend.

I argue: If our society was at the peak of its powers, we would be solving the problems of complexity by delegating the choreography of the social and industrial machines to vast computers. But we tried that, and failed, and poststructuralism won, and that will be our downfall.
Second theme. The creation of monumental architecture, as Tainter describes it, acts as a store of energy and effort. Monuments are capacitors of industrial capacity, built to keep a society's strength up, so that excess strength can be diverted to essentials on rainy days. It feels to me like the last monument in the West was the space programme in the 1960s (come on, the Saturn V was hardly pragmatic). Since then, wages have gone down (two members of a household need to work), and market forces have squeezed every ounce of value out of what we produce. Environmentalism (as recycling) and the fight against patents (or intellectual property) are, again, things we want, and more than that, things we describe as good--but really they're just more ways of making sure the produce/consume cycle gets tighter, every possible good is manufactured, and every side-effect is monetised. The United States now looks like it's wearing the baggy clothes of a much more prosperous society that has slimmed down these past 30 years. It has peaked, and can no longer touch the zenith surface.
I know I'm now arguing in favour of movements that generally concern me: Ownership, inequality in social roles, top-down control, integration, and waste. But these are signifiers of complex societies that can attain more glorious heights. So long as they can occur in good ways, why not pursue them? (More ownership, but more spaces and more fronts; more inequality, but more different social roles; more control, but also more degrees of freedom; more energy use, but from huge microwave beams coming from the Sun.) If we can all do more - whether it's moving faster with cars, or building our own houses using exoskeletons - doesn't that mean we can submit to more control too? This isn't a zero-sum game.
It's odd to see environmentalism and industry on the same side of the argument, of heading towards efficiency and low-impact, and being against monument and control. I really am scared that we see these points as morally good, that they make sense. It's this world view the Heroes will adopt, mindlessly, and take us into Crisis.
Fortunately, institutions will save us. Driven by the Cold War, our complex society produced institutions that have calcified and persist. As the pressure drops on the outside, and the complexity threatens to reduce, these large institutions (government, the media, the military-industrial-entertainment complex) will produce new pressures to shore it up. They have every incentive to fight. For us, it's only a minor reduction in control. For these institutions, it would be death. And look, we have new enemies already.
Or perhaps that's part of the pattern too. You can't escape Crisis. We can only hope to be a bad influence on the coming Hero generation. Do everything once. Twice if you like it.
From afar, during the Week of Acquisitions, we were joking that Yahoo would buy Ford. Because, if you think about it, Yahoo is a media company, and cars, well, have car stereos in them, which means Ford is a manufacturer of media delivery vehicles.
It starts making sense, really. Instead of selling cars, Yahoo could sell the service of listening to music while you commute, and you'd get the car for free. You'd fill up with music at the same time as filling up with gas. There could be discounts for promoted bands. (Although, instead of a key, you'd have to give a password and your mother's maiden name.) You'd be renting a huge, ride-on mp3 player, with wheels.
Don Norman, The truth about Google's so-called "simplicity":
If you want to do one of the many other things Google is able to do, oops, first you have to figure out how to find it, then you have to figure out which of the many offerings to use, then you have to figure out how to use it. And because all those other things are not on the home page but, instead, are hidden away in various mysterious places, extra clicks and operations are required for even simple tasks -- if you can remember how to get to them. Why are Yahoo! and MSN such complex-looking places? Because their systems are easier to use. Not because they are complex, but because they simplify the life of their users by letting them see their choices on the home page: news, alternative searches, other items of interest.
...which would all be well-and-good, if we (as users) weren't able to remember where to find services we cared about once we'd found them the first time, if we weren't brilliant at remembering paths and landmarks, if we weren't actually first-time users only once, and after that experienced users who are able to discover and learn. The great thing about not putting everything on the front page is that the surface of possibilities increases in line with the user's growing experience.
Perception is memory. If I see a page full of possibilities, it's like I'm remembering them all at once. A page that hints at possibilities lets me remember only what I care to remember. Google's IA challenges are not how to present everything, but how to make everything discoverable initially, and keeping found things found thereafter. That's why Google's emphasis on clear subdomains is so good: You bookmark, or remember to type, the perspectives on Google's index you wish to take.
You can both discover by browsing, and use bookmarked shortcuts plus memory to jump back in, and it's the lack of this pair that makes the desktop point-and-click interface so limiting. Having everything in the GUI kind of space is like having to describe a location with mime. No chance for "aisle 3, halfway along, 2nd shelf." You have to gesture to your computer, "follow me, follow me" and then mutely point "mmmph."
When Gulliver visits the great academy of Lagado, in Chapter V of Part III (A Voyage to Laputa) of Gulliver's Travels [full text], he learns about their many projects:
We next went to the school of languages, where three professors sat in consultation upon improving that of their own country. [...] The other project was, a scheme for entirely abolishing all words whatsoever; and this was urged as a great advantage in point of health, as well as brevity. For it is plain, that every word we speak is, in some degree, a diminution of our lungs by corrosion, and, consequently, contributes to the shortening of our lives. An expedient was therefore offered, "that since words are only names for things, it would be more convenient for all men to carry about them such things as were necessary to express a particular business they are to discourse on." [...] many of the most learned and wise adhere to the new scheme of expressing themselves by things; which has only this inconvenience attending it, that if a man's business be very great, and of various kinds, he must be obliged, in proportion, to carry a greater bundle of things upon his back, unless he can afford one or two strong servants to attend him. I have often beheld two of those sages almost sinking under the weight of their packs, like pedlars among us, who, when they met in the street, would lay down their loads, open their sacks, and hold conversation for an hour together; then put up their implements, help each other to resume their burdens, and take their leave. But for short conversations, a man may carry implements in his pockets, and under his arms, enough to supply him; and in his house, he cannot be at a loss. Therefore the room where company meet who practise this art, is full of all things, ready at hand, requisite to furnish matter for this kind of artificial converse.
Iceland's Blue Lagoon is a shallow lake, geothermally heated, using the run-off from the nearby power station. It varies between a couple of feet and body height in depth, and has a complex, volcanic rock shore full of turns and bends. The water is an eerie, cloudy blue, and as hot as bath water.
I'd love to show photos, only we couldn't take any because we visited at night, and spent a contented hour in the dark, hanging out in the lagoon, sometimes wading through steam that meant we couldn't see more than an arm's length away, sometimes - in the more open areas - able to see a full half of the lagoon, with areas in shade, areas lit up, columns of steam over the hotter water. If you wanted to cool down, the air was only a few degrees so you could just stand up, but it was best to keep down. Gorgeous.
Something I got a kick out of was how they prevented people from mucking around. It's supposed to be a tranquil place--I can't imagine they want people standing on each other's shoulders, or, uh, making use of the dark, secluded corners.
At the front of the lagoon is a powerful spotlight, perhaps a metre across, mounted right in the middle, high up, on top of the main building. Usually it was kept pointing ahead and up, away from the water. But if people were seen playing around, the spotlight would be directed right at them. Lighting someone up in the dark means one thing: Everyone else can see them. It was a wonderful way of providing people with freedom and privacy, but selectively activating social visibility as behaviour moderation, when required. It was a gentle panopticon, using the tut-tut of disapproval in place of the inspector's gun.
Because it has no foresight, unalloyed natural selection is in a sense an anti-perfection mechanism, hugging, as it will, the tops of the low foot-hills of Wright's landscape. A mixture of strong selection interspersed with periods of relaxation of selection and drift may be the formula for crossing the valleys to the high uplands. -- The Extended Phenotype, Richard Dawkins, p40.
For a species, exploring the fitness landscape isn't a matter of a defined-position cursor moving with a hill-climbing algorithm. The cursor can vary between wide and specific. It isn't a single thing: It's a population. It occupies the base of two peaks simultaneously, moves up both, creating competing species as it does. One is selected, the other is not, and so the cursor has jumped. Or, if you like, each member of the population has a different fitness landscape, because it has a different situation in the population as a whole. Maybe the mechanism is that the population moves to a local peak of the fitness landscape, and then grows as the selection relaxes. The population growth distorts the fitness landscape for each member, and for some members a fitness bridge is created to an alternative maximum, which they can move to. The new maximum will become a different species, possibly even in a different niche. In that way, populations may bud species in the same space+time without being replaced themselves (removing "chains" of species moving towards some as-yet-unknown global maximum fitness), and valleys can be traversed while each member of the population still has a hill-climbing algorithm without look-ahead.
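That mechanism - strong selection, then relaxation and drift, then strong selection again - can be played with in a toy model. This is my own sketch, not anything from Dawkins: a population of hill-climbers on a two-peaked landscape, where relaxing selection lets drift spread the population across the valley.

```python
import random

def fitness(x):
    # A two-peaked landscape: a foothill at x=2 (height 3) and a
    # higher peak at x=8 (height 5), with a shallow valley between.
    return max(0.0, 3 - abs(x - 2)) + max(0.0, 5 - abs(x - 8))

def generation(pop, strength, mutation=0.5):
    # Each individual mutates slightly, then the next generation is
    # resampled in proportion to fitness^strength. High strength means
    # strong selection; near zero means almost pure drift.
    mutated = [x + random.gauss(0, mutation) for x in pop]
    weights = [fitness(x) ** strength + 1e-9 for x in mutated]
    return random.choices(mutated, weights=weights, k=len(pop))

random.seed(0)
pop = [2.0] * 200          # the whole population starts on the foothill

for _ in range(50):        # strong selection: hug the local peak
    pop = generation(pop, strength=4)
for _ in range(200):       # relaxed selection: drift spreads the cloud
    pop = generation(pop, strength=0.1)
for _ in range(100):       # strong selection again: a peak recaptures it
    pop = generation(pop, strength=4)

mean_position = sum(pop) / len(pop)
print(round(mean_position, 2))
```

With selection relaxed, resampling is nearly uniform and the population random-walks across the valley; turning selection back on lets whichever peak then lies under the cloud recapture it, so no individual ever needed look-ahead.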
It's funny. The way the population moves from peak to peak feels like quantum tunnelling. Instead of modelling a particle as a probability wave, could it instead be modelled as a population of related (virtual) particles, that are themselves selected?
The little people crop up a lot in myths. I've talked before, in passing, about the possible origins of elves, goblins and fairies (although I wish I could find a reference for the fairy being a warning about getting drunk on cider), but this almost feels too literal: New types of person to explain rare neurological conditions, or encode cultural knowledge? Unlikely. Nor is it likely that these are stories of different homo species--not on this timescale, anyhow.
Yet... yet... what if these myths are of some other kind of human, and one of the ways the myths have survived has been by becoming vehicles for other bits of knowledge that aren't large enough, independently, to all form their own myths? (Like, don't get drunk; sometimes people are born with different mental setups.)
What if the little people really were little people: Children. I've been reading Collapse of Complex Societies (Joseph Tainter). Great book, and full of information and reviews of many more societies than I realised existed. It's a compelling argument that complexity growth is a cost-reducing exercise in certain circumstances, and collapse is what happens when those circumstances change in certain contexts--although I thought the hypothesis itself falls victim to a number of the criticisms Tainter makes of other theories. One society mentioned is that of the Ik, in northern Uganda, who have experienced one of the more severe collapses known. They display almost no integration, with sharing nonexistent even among kin. This stood out:
Children are minimally cared for by their mothers until age three, and then are put out to fend for themselves. This separation is absolute. By age three they are expected to find their own food and shelter, and those that survive do provide for themselves. Children band into age-sets for protection, since adults will steal a child's food when possible.
The human brain is still measurably evolving, in the last 10,000 years or so. What if the myths are carrying information about that? What if the family unit evolved in the human brain comparatively recently, more recently than the rudiments of language? (Families are now a human universal.) What if, when the people who become the Celts moved to northern Europe as the glaciers pulled back, travelling with their families, there were already people on that land, people who hadn't developed families yet?
What if the goblins were bands of teenage boys, and fairies and elves were groups of young children? They would scatter and vanish when adults came along, and perhaps speak their own, temporary language--a group-size idioglossia, a standing wave of language that was acquired as children moved into the group, and forgotten as they moved out.
These people would have been displaced (well, replaced) by the family-using immigrants, but remembered in myth, the powerful stories passed down through the Holocene.
What struck me about the architecture of Helsinki, and then of Reykjavik, was the use of layers and grid textures. Modern buildings in both cities seem to decorate their surfaces more than those in London, and maybe it's because they tend to be shorter, but they seem less like someone has designed the base and the top, then merely extruded in-between. Reykjavik buildings, especially, seem to be total shapes: pyramids, prisms, intersecting glass and copper cuboids. Materials overlap, making use of transparency and sudden turns in the surface to let you see through, then obstruct, then see through again. The idea of the double door, a necessity in the cold climate, has extended to the whole.
As well as these sturdy buildings with permeable edges, the textures are grids: repeating copper rectangles; studs like rivets from the walls; cages over concrete. The grids are too dense and too vast to let you see their individual elements, and instead they're carpets of regularity. Matt Jones told me that the architecture was drawing on Japanese ideas of artificiality and nature, side by side, playing off each other. Beautiful--and all over both cities.
Iceland; Finland; Japan. I remember reading on Matt's weblog, ages ago (the post is gone now), about the Arctic ice-cap melting and the Northern Passage opening. There would be a new northern culture, connected by open ocean, global warming, and a different kind of aesthetic. Ben Hammersley, preparing for his 2007 North Pole attempt, tells me that the ice is getting thinner year on year. And people are buying up land in Canada, in preparation for the ports they'll be able to build there.
From Sergei Medvedev's 2000 essay, The _Blank_ Space: Glenn Gould, Russia, Finland and the North, the section on the idea of a Northern Europe:
A shared periphery, a cooperative psychological setup, and an experience of local networking exempt the North from the traditional territorial discourses based on power, history and identity, placing it in a deterritorialized post-national paradigm in which spaces are increasingly imagined and communicated. The North emerges as one of the so-called "meso-regions", i.e. less determined by geography than by ideas, symbols, visions or strategic instruments, all aimed at mobilizing resources to solve common problems.
The architecture I saw reflects this. The openness of the government in Estonia, the mobility of Japan and Finland, cheap energy in Iceland, the competing narratives that have filtered up to the north from lower latitudes, the reality of the melting ice: These elements combine to produce an approach which is pragmatic, uncentred, combinatory.
I've finally come to a rest, after Helsinki (as S&W, presenting project deliverables to Nokia), New York (for State of Play [notes] and meeting more great people than I can mention, both new and old), and Reykjavik. I'm stationary for a couple of weeks, then off to Design Engaged in Berlin. I must say, I'm totally over-excited. The people, what it was like last year, the ideas... I'm grinning just thinking about it.
Anyway. It's good to be back in London. My friends tell me that they have months when all their friends get married, or get pregnant. While I've been offline, Tom has found a new home with Yahoo and Adam's book is on Amazon, conferences have happened, and a couple of dozen services have launched or been acquired. I feel behind the curve. Hey, and Ning launched. That was my summer job. Bloody good. And lots to say about it, too, when I get a chance.
One day, somebody will invent a system for copying an MMO game by sending character probes into it, and raytracing the narrative. What does a low-res state space look like?
Attenuation is something I'm pleased to see is a theme of O'Reilly's ETech 2006. It's trite to talk simply about information overload, because attenuation is bigger than that: Any time you have to make a choice about anything is a time when you need to attenuate, and maybe you could externalise that method of choice into the system itself; any time there's too much complexity to be understood immediately is a time when time-based attenuation can help (sometimes we call this "teaching"). Maps are a wonderful form of attenuation, for pre-existing information. Another is the taking of a position in a landscape of information flow: You place yourself where peer- or authority-selected information will come by--we do this by choosing to read such-and-such a newspaper instead of a different one. Being concerned with attenuation is being concerned with the algorithms, the co-production of the algorithms with the people who sit in the information flows, the design factors (so that some information flows automatically hit your brain at a higher interrupt level)... It's a big topic.
There's a ton of information coming in via our senses. Not just perceptions of light and sound, but patterning, memory, associations, possibilities, more. The mechanisms to whittle that down to the very few packets that reach conscious perception (and just below that) are impressive indeed, and solve a real problem: Given limited processing capacity, and even more limited capacity for action, what should be processed? The feeling of the brain allocating processing time to something is what we call attention. There are automatic routines to push information up to be attended to, to pre-allocate attention--and to de-allocate it. There are ways to deliberately ignore colours, shape and movement, and your brain will help out by ignoring things it guesses you want ignored, too. It's a job-share the whole way, between consciousness and automaticity, with attention being parcelled out in a few large and many tiny chunks. The quirks of these heuristics make up much of the material in Mind Hacks, and also comprise my talks (and work) on user interfaces (in short: if it's important, use tricks to bump the information up to conscious attention faster; if it's not, don't). Before I started the book, the brain felt like a device that pulled information from the environment. At the end, I saw it was something that was flooded with information, and would change its position in the environment to get flooded with the right kind of information, and continuously slim down the information load, keeping as much information outside itself as was reliably possible. These methods were so impressive - they seemed to sum up the job of the brain so well - that my co-author and I asked for a quote to open the book (Rael said yes), and here it is, from Theodore Zeldin's An Intimate History of Humanity:
What to do with too much information is the great riddle of our time.
Where else is attenuation exhibited? In discussing the market, Herbert Simon (in The Sciences of the Artificial [my notes]) says that
market processes commend themselves primarily because they avoid placing on a central planning mechanism a burden of calculation that such a mechanism, however well buttressed by the largest computers, could not sustain. How? This is crucial:
Markets appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally--that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing. [p34; my emphasis.]
The concept of locality is a key one for information filtering. The www used to be spatial - you could only navigate using deliberately placed links connecting pages together - but then along came search engines, which collapsed the whole information space into a universe only a few keywords in diameter. Different kinds of distance, however, can be invented: Distance is the measure over which meaning (or signal) attenuates, and this measure can be geographic distance, or cultural distance, or software abstraction layers if you're - say - talking about the meaning of a symbol. By reintroducing some form of distance dimension into an information space, the concept of locality is introduced, and locality is the place in which all signal is shared--the signal gets weaker and eventually disappears at any non-local places. By allowing people to position themselves in a space as they choose, they can simultaneously gain access to their chosen signal, and - by their position - provide information to other people who are making similar choices.
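The idea above can be sketched in a few lines of code: each item carries a signal that decays with its distance from your position, and anything below a threshold simply disappears from view. All the names here (`attenuate`, the feeds, the falloff constant) are my own illustration, not anyone's real system, and the "distance" is just a number standing in for whatever measure you invent.

```python
# A toy model of attenuation over an invented distance: signal decays
# exponentially with distance from the reader's position, and anything
# below a threshold is filtered out entirely (it becomes non-local).
import math

def attenuate(items, my_position, threshold=0.1, falloff=1.0):
    """Return (item, strength) pairs still audible from my_position.

    `items` is a list of (item, position) pairs; positions are plain
    numbers here, but any metric (geographic, cultural, social-graph
    hops) would do just as well.
    """
    audible = []
    for item, position in items:
        distance = abs(position - my_position)
        strength = math.exp(-falloff * distance)  # signal fades with distance
        if strength >= threshold:                 # non-local signal vanishes
            audible.append((item, strength))
    return sorted(audible, key=lambda pair: -pair[1])

feeds = [("next door", 0.5), ("same city", 2.0), ("other continent", 8.0)]
nearby = attenuate(feeds, my_position=0.0)
```

Moving `my_position` is the interesting part: you don't change the information, you change what's local to you, which is exactly the positioning-in-a-landscape point from earlier.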
In Web 2.0, these distances are found in social networks, tag networks, relatedness, and regular linking. You establish a position in these networks just by having a presence (by acting), and you encounter relevant information. This is the good side of the echo chamber. You don't need to read 100 blogs if the people near you do. Or, in old media, it's what tv channels are for.
Anyhow. Locality, arising from an established measure of distance (of whatever kind), is a reasonably well-understood way to allow fairly passive attenuation, and we can use fairly traditional design tricks to help out: landmarks, instant feedback, reinforcement, hiding the overview (on the platform, when you're right up close to a Tube station, you only see the current line - that part of the network which is local - and not the whole map). The novel part is establishing the distance measures.
Another mechanism for attenuation is what Harold Morowitz calls selection algorithms or pruning rules (The Emergence of Everything). Out of colossal possibilities at every layer of emergence, somehow only a few things happen. The Pauli Exclusion Principle forces the universe to have structure by preventing two things from being in the same place at the same time. And natural selection is the pruning rule at the level of living species: Evolution isn't computed, it's the mechanism of selection itself (gosh, sounds a bit like attention).
Says Morowitz (p131):
We have from time to time mentioned the concept of "niche." This ecological construct is a part of the Principle of Competition Exclusion. And (p134):
Competitive exclusion is a major pruning rule in the emergence of biological taxa. --which you can see everywhere. The evolution of open source code takes just this route: There's often no long-distance planning, just a hundred growth points directed by the heuristics of a large number of individuals, who thereby impede the growth of similar features. And it creates platforms: If the Arabic numbering system didn't act as a pruning rule (via competition) on other numbering systems, we wouldn't be able to rest so much on it as a platform. Competitive exclusion is platformisation, or, if you like, your usual network effect. A good way to attenuate information is to move to a just-in-time approach for anything that can be reliably close at hand (anything which is a platform). For example, if you always have a watch on your wrist, you as-good-as know the time as you've expanded your memory into the environment. This is what's termed extelligence.
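Competitive exclusion is easy to see in a toy model, loosely in the spirit of Lotka-Volterra competition (the function and parameter names are mine, and the numbers are arbitrary): two species share one niche, a common carrying capacity, and even a small fitness edge compounds until the weaker species is pruned away entirely.

```python
# Two species share one niche (a common carrying capacity). A small
# growth-rate edge is enough: the winner crowds the niche to a level
# at which the loser can no longer break even, and the loser dwindles
# to nothing -- competitive exclusion as a pruning rule.
def compete(growth_a, growth_b, capacity=1000.0, steps=2000):
    a = b = 10.0
    for _ in range(steps):
        crowding = max(0.0, 1.0 - (a + b) / capacity)  # shared, finite niche
        a += growth_a * a * crowding - 0.05 * a  # growth minus a death rate
        b += growth_b * b * crowding - 0.05 * b
    return a, b

# Species a grows 11% per step when the niche is empty; species b, 10%.
a, b = compete(growth_a=0.11, growth_b=0.10)
```

Run it and species b, despite starting level and growing happily at first, ends up effectively extinct: a holds the niche at a crowding level where only a can break even. That's the platformisation move in miniature.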
On the www, why bother reading all the news sites when you can depend on blogs, as a platform, to filter the best for you--or indeed, a site to pick out the best from those blogs? The social attention in a particular news area will then point you to the best news story. But competitive exclusion is only one of the pruning rules that Morowitz mentions. How about the others?
In my talk on The 3 Steps: The Future of Computing, I suggested using reverse unit testing in programming: a function call would state what answer it expected to get, and methods that didn't match that pattern would be pruned away. It would be a way of growing a large code-base without having to deliberately build the network of calls. How about in databases, to follow the way proteins find each other? Look for answers that bind to a particular shape, instead of issuing a specific query. How would that look, on the www? I'd like to use a real-time monitoring service like PubSub, only not to catch keywords or URLs. Instead I'd like to catch particular patterns of inter-website conversation, like clusters of three or more posts above a certain connectedness level... and then find only the most popular links in those. That'd be a list not of the most linked to (the most popular, we call it), but the most provoking sites.
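A minimal sketch of that reverse-unit-testing idea (the registry, decorator and function names are my own invention, not anything from the talk): the call site states the answer it expects for a probe input, and the dispatcher prunes away every registered function that doesn't reproduce it.

```python
# Reverse unit testing as dispatch: the caller doesn't name a function,
# it names a behaviour (a probe input and the answer it expects), and
# candidates that don't match that pattern are pruned away.
REGISTRY = []

def candidate(fn):
    """Register a function as a candidate for pattern-matched dispatch."""
    REGISTRY.append(fn)
    return fn

def call_matching(probe_input, expected_output):
    """Find a function whose behaviour 'binds' to the expected shape."""
    for fn in REGISTRY:
        try:
            if fn(probe_input) == expected_output:
                return fn  # first function matching the pattern survives
        except Exception:
            continue       # functions that can't even run are pruned too
    raise LookupError("no registered function matches the expected pattern")

@candidate
def double(x):
    return x * 2

@candidate
def square(x):
    return x * x

fn = call_matching(probe_input=3, expected_output=9)  # selects square
```

One obvious wrinkle: a probe of 2 matches both functions (2*2 == 2**2), so in a real system you'd want several probes, or the protein analogy's richer notion of shape, before trusting the binding.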
So, attenuation. We use it in: filtering information; providing distance and locality for user control over what they receive, and feeding that back into the system; providing semi-automatic mechanisms to bubble up information; distributed processes to give integral results over large data-sets (the stock-market), or no results at all (other markets); emergent selection algorithms of exclusion, or of form. We have many places to look for inspiration, and we can design to operate with familiar patterns, since there is already human use of attenuation.
I'll add one last one, because it's something that is especially human: Implicature. Conversational implicature is when you prune (and adapt) what you say, according to what you know your conversational partner already understands about you. They'll assume you're following certain maxims, and because of that platform of understanding, you can be much more meaningful. For example, if I say I have a dog, that's essentially meaningless unless you assume I'm following the maxim of relevance--that is, I'm saying it for a reason. Only by presuming I'm being meaningful - that the statement passed a certain threshold before I uttered it - can you understand it as something important, or surprising, or silly. Only by presuming I'm being meaningful does me giving you an mp3 mean it's a gift, not a so-called viral plant from a marketing drone. Mutual implicature allows ever greater flow of meaning, and it's why apparently genuine comments left as marketing, not as gifts, are so poisonous.
Implicature (more links) isn't possible in a system of one-off exchanges. It requires conversation, which itself requires repeated, fluid interaction, identity, and all the attributes of social systems we take for granted: visibility, shared values (at least locally shared, if that isn't a tautology since we redefined locality), and ways of ensuring conversations don't break down, like plausible deniability through noise. Social software ideas, really.
In a www where information is published and consumed in separate steps, the implicature that made the early constellation of blogs so compelling (because conversations weren't formalised into these two parts) has ebbed away. Conversations don't have to be literal words between two people, but they do have to include some kind of directedness of the publishing (or utterance), and some acknowledgement and visibility to the consumption (the listening). As good as the social networks and tags in sites like del.icio.us and Flickr are, we need more vias and more back-at-yas to shape the flow and promote the conversation, and therefore the implicature, that is necessary for the personal, meaningful, helpful attenuation we rely on in everyday life.
Interconnected is copyright 2000—2013 Matt Webb.