Wrapping up 2007: As Borges wrote reviews of non-existent books, I have notes for essays I'll never write. Here I've collected what's been on my mind the last couple of months.
1 #
The common theme of Web 2.0 Expo Berlin was surfaces, which I picked up primarily from a talk on microformats as nanotech by Jeremy Keith and a conversation with Terry Jones.
In short: the surface of the Web is currently pages - these are the atoms - which are interlinked (Tom Coates talks about how to be native to the Web of data). Search engines index the text, and our browsers make local deductions from the raw source to show rendered pages.
What microformats and other forms of structure do is increase the resolution of the Web: each page becomes a complex surface of many kinds of wrinkles, and by looking at many pages next to each other it becomes apparent that certain of these wrinkles are repeated patterns. These are microformats, lists, blog archives, and any other repeating elements. Now this reminds me of proteins, which have surfaces, part of which have characteristics shared between proteins. And that in turn takes me back to Jaron Lanier and phenotropics, which is his approach to programming based on pattern recognition (I've looked into this before).
So what does phenotropics mean for the Web? Firstly it means that our browsers should become pattern recognition machines. They should look at the structure of every page they render, and develop artificial proteins to bind to common features. Once features are found (say, an hCalendar microformat), scripting can occur. And other features will be deduced: plain text dates 'upgraded' to microformats on the fly. By giving the browser better senses - say, a copy of WordNet and the capability of term extraction - other structures can be detected and bound to (I've talked about what kind of structures before).
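As a rough sketch of what that binding might look like, assuming nothing cleverer than regular expressions over the rendered HTML (the class names, the date format and the hCalendar check below are placeholders, not a real microformats parser):

```python
# A toy 'artificial protein': notice an existing hCalendar event on the page,
# and upgrade plain-text dates to machine-readable markup on the fly.
import re

HCAL_CLASS = re.compile(r'class="[^"]*\bvevent\b[^"]*"')
PLAIN_DATE = re.compile(
    r'\b(\d{1,2}) (January|February|March|April|May|June|'
    r'July|August|September|October|November|December) (\d{4})\b')
MONTHS = {m: i + 1 for i, m in enumerate(
    'January February March April May June July August '
    'September October November December'.split())}

def bind_patterns(html: str) -> tuple[bool, str]:
    """Return (page already has an hCalendar?, html with plain dates marked up)."""
    found_hcal = bool(HCAL_CLASS.search(html))   # a feature scripting could bind to

    def upgrade(match: re.Match) -> str:
        iso = f"{match.group(3)}-{MONTHS[match.group(2)]:02d}-{int(match.group(1)):02d}"
        return f'<abbr class="dtstart" title="{iso}">{match.group(0)}</abbr>'

    return found_hcal, PLAIN_DATE.sub(upgrade, html)

print(bind_patterns("<p>The conference starts on 5 November 2007.</p>"))
```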
While browsers look for patterns inside pages, search engines would look for inter-page, structural features: the research search engine companies should be doing now is into what kind of linking goes on. Just as there is a zoology of traffic, and the exploration into the world of cellular automata resembles a biologist's hacking into a rainforest rather than the scientific method, I want a typology of the fine structure of the Web: dense pockets that persist over time; starbursts; ping-pong conversations with a slowly changing list of references. What animals are these and how do I search them? Here's an example of the kind of search I want to do: 'conversations that have arisen in a small, pre-existing group of friends that converge on referencing projects identified by the wider Web as physical computing.' Or the same search but for conferences, and then have my browser scoot over the pages, and deduce event microformats from them.
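To make that typology concrete, here's a toy classifier over a hand-made link graph; the three 'animals' it looks for, and the thresholds it uses, are illustrative guesses rather than a real taxonomy:

```python
# Label pages in a link graph ({page: pages it links to}) with a few of the
# 'animals': starburst centres, ping-pong conversations, dense pockets.
from itertools import combinations

def classify(graph: dict[str, set[str]]) -> dict[str, str]:
    inlinks: dict[str, set[str]] = {p: set() for p in graph}
    for src, outs in graph.items():
        for dst in outs:
            inlinks.setdefault(dst, set()).add(src)

    labels: dict[str, str] = {}
    for page, outs in graph.items():
        if len(inlinks.get(page, set())) >= 5 and len(outs) <= 1:
            labels[page] = "starburst centre"        # many link in, few link out
        elif any(page in graph.get(other, set()) for other in outs):
            labels[page] = "ping-pong conversation"  # reciprocal linking
    # dense pockets: triples where every pair is linked in at least one direction
    for a, b, c in combinations(graph, 3):
        if all(y in graph[x] or x in graph[y] for x, y in [(a, b), (b, c), (a, c)]):
            for p in (a, b, c):
                labels.setdefault(p, "dense pocket")
    return labels

web = {"alice": {"bob"}, "bob": {"alice", "carol"}, "carol": {"alice"}}
print(classify(web))
```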
The technological future of the Web is in micro and macro structure. The approach to the micro is akin to proteins and surface binding--or, to put it another way, phenotropics and pattern matching. Massively parallel agents need to be evolved to discover how to bind onto something that looks like a blog post; a crumb-trail; a right-hand nav; a top 10 list; a review; an event description; search boxes. Functionality can be bound to the pattern matchers: Technorati becomes a text search over everything that has a blog post matcher bound to it; a site with a search matcher bound to it can have extra functionality offered in the browser, for site-wide search offered in a consistent way for every site on the Web (ditto crumb-trails and site maps).
The macro investigation is like chemistry. If pages are atoms, what are the molecules to which they belong? What kinds of molecules are there? How do they interact over time? We need a recombinant chemistry of web pages, where we can see multiple conversation molecules, with chemical bonds via their blog post pattern matchers, stringing together into larger-scale filaments. What are the long-chain hydrocarbons of the Web? I want Google, Yahoo and Microsoft to be mining the Web for these molecules, discovering and naming them.
The most important thing I think we've learned about the Web in the last 10 years is this: there is not infinite variety. That means the problem of finding the patterns in pages and inter-page structure is a tractable one. It might be large and difficult, but it can be done: you make a first approximation that finds patterns in most of the Web, and then another to find patterns in most of the rest, then most of the rest, then most of the rest, and then special case it. Then you'll be done. It'll take years. Whatever.
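Sketched as code, with stand-in matchers (nothing below is a real classifier; it only shows the 'then most of the rest' loop):

```python
# Run a cascade of matchers, each one seeing only the pages the earlier,
# more general matchers failed on; whatever is left over is a special case.
from typing import Callable, Optional

Matcher = Callable[[str], Optional[str]]

def generic_blog_post(page: str) -> Optional[str]:
    return "blog post" if 'class="post"' in page else None

def dated_entry(page: str) -> Optional[str]:
    return "dated entry" if "2007" in page else None

def cascade(pages: list[str], matchers: list[Matcher]) -> dict[str, str]:
    found, remaining = {}, list(pages)
    for match in matchers:
        still_unmatched = []
        for page in remaining:
            label = match(page)
            if label:
                found[page] = label
            else:
                still_unmatched.append(page)
        remaining = still_unmatched      # the next approximation handles these
    for page in remaining:
        found[page] = "special case"     # handled by hand at the end
    return found

print(cascade(['<div class="post">hi</div>', "diary for 2007", "???"],
              [generic_blog_post, dated_entry]))
```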
Micro/macro structure is the first of the challenges that face the Web.
2 #
Incidentally, I'm flicking through my notebook looking for more notes on this topic and I've run across a reminder to look into the three ways ultrastable institutions adapt to changing circumstances in order to continue being "machines for surviving," as identified by Stafford Beer in his book Platform for Change. Beer talks about social institutions such as 'schooling,' 'the general practice of medicine' and 'penal establishments' (his examples). These are self-organising and self-regulating systems. As their environment changes, how do they not collapse? How are they not sensitive to shock?
Beer says that an ultrastable social institution will do one of three things in response to change:
- It will barely change internally and still survive (I guess this is like scouting or soccer, both institutions that have changed minimally).
- The institution's internal form will change, but its relationships to other institutions will remain. Perhaps this is like prisons, which have the same relationship to the population, police, courts and government... but operate internally very differently.
- Dramatic change occurs. This makes me think of the Church: it has changed enormously internally and in its external relations over the last millennium, yet it's still the Church.
As simple as this typology is, I like it because it's a springboard to start discussing how things change. The metaphor that's all too often used for how things change is the Uncertainty Principle from quantum mechanics: when you look at (make a measurement of) one parameter of a system, another parameter flips so you know less. If you know where it is, you can't know how fast it's moving. And yes, it's true, measuring something changes it. But using this metaphor denies us more knowledge: it stops us asking how it changes, and how much, and why.
The metaphor Beer uses, and the one I remember my friend Andrew using, is Le Chatelier's principle. This is from chemistry. It says that a system in equilibrium which is given a prod will change internally to counteract that prod and move into a new equilibrium. So let's say we're attempting to measure domestic violence by putting cameras in houses. Of course this method will change the nature and frequency of the violence, but rather than stopping at "it changes," Le Chatelier leaves open the possibility of figuring out what sort of change has occurred. What happens in other situations where we've used cameras? we would ask. How have reports changed over time, to reach this new equilibrium? By acting on the system and looking at the responses, we can learn about its internal mechanics, which is a form of science I'm much happier with than the very Zen but not particularly helpful Uncertainty Principle.
3 #
This in turn makes me think of Tainter's Collapse of Complex Societies (which I've talked about before, in the context of New Puritans). States form for several reasons, one of which is external conflict or pressure; another is internal management. Fundamentally, though, Tainter casts the continued existence (or not) of a complex society as an economic decision by the population (or a privileged elite who can exert the necessary power): it is cheaper to have centralised complexity than to have multiple simpler villages. The whole system essentially 'rolls downhill' into increasing order. (This is similar to how I rationalise takeoff of aeroplanes to myself: the wings are structured such that, at a certain speed, the system of plane and wings and air finds it easier and more stable to fall upwards, carried in a basket of potential energy.)
Collapse, in Tainter's view, only looks like collapse because we privilege the institutions of complexity: collapse is a sensible economic decision on the part of the people in the society, to refactor complexity into simple units, and do away with the cost of the managerial organisation. If building of monuments - capacitors of work capacity, essentially - stops, everyone is a little bit richer (though: no more pyramids).
But marry this with Beer and ultrastable social institutions. The ultrastable in this case is the state itself, which contains the whole population. It will change internally, and change its relationship with ultrastables around it, in order to survive. Tainter is being reductionist in not regarding institutions as actors in their own right. From this perspective, we could see the complexity of the USA as the outcome of a sensible economic decision (paying for safety at the cost of simplicity) during the Cold War period of the 1950s to 1980s. As the threat of nuclear war receded, the complexity of the US (the West in general, really) could also have reduced. It is insupportable without external pressure, like a sandcastle moat on the beach where the walls are thinning. So, being ultrastable, the US state survival machine generates new threats to keep its own continued existence a sensible economic decision. It adjusts internally and externally to create ground force to loft itself upwards, to keep itself in the air. That's how I read the War on Terror, anyhow.
4 #
I mentioned refactoring earlier, so an additional point on that: I have a feeling that refactoring code is not a good thing. I am not in favour of deleting code. If there are problems with code the way it is written, there should be mechanisms to code over it gradually, and leave the old code there. If it becomes too complicated, we need more convenience functions, not fewer. A codebase should be its own source repository: seeing what the code was like a year ago shouldn't be a check-out from source control, but archaeology.
I'm not entirely sure why I believe this, but here's a tangential reason. The reason code is refactored is that when code gets complicated, it's hard to understand and maintain. But keeping code simple limits the type of problems we can tackle to ones which have solutions comprehensible to the human brain. Yes, breaking problems down into human-brain-sized nuggets is one strategy that works. But I think we'll find it covers as small a part of the problem space as linear equations do of dynamic systems. This may mean we need new ways of approaching programming. Instead of simplicity, we need better tools and languages. Maybe a string class will have to be a gigabyte of raw code. Who cares. Maybe HTML parsing libraries evolve inside themselves and compete. Maybe what my code does, instead of calling a method on an object, is give a list of examples of the kind of responses I know I'd like, and use that to dynamically select from a huge number of pattern matching code libraries that live in the cloud. And maybe if it doesn't work quite right, we don't call that a bug but we call it an inadequate model and write yet more code to make it work better.
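For instance, here's a minimal sketch of selecting code by example rather than by name; the two candidate functions stand in for that cloud of pattern-matching libraries, and the scoring is the simplest thing that could work:

```python
# Pick whichever candidate implementation best reproduces the example
# responses; a poor score isn't a bug, just an inadequate pool of candidates.
from typing import Callable

def shouty(s: str) -> str:
    return s.upper()

def polite(s: str) -> str:
    return s.capitalize() + ", please"

def select_by_example(candidates: list[Callable[[str], str]],
                      examples: list[tuple[str, str]]) -> Callable[[str], str]:
    def score(fn: Callable[[str], str]) -> int:
        return sum(fn(inp) == out for inp, out in examples)
    return max(candidates, key=score)

fn = select_by_example([shouty, polite],
                       [("pass the salt", "Pass the salt, please")])
print(fn("open the window"))   # -> 'Open the window, please'
```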
Refactoring code means we say there are certain behaviours that are important, which are to be kept, and others that aren't. I say, who are we to say what's important? The days we can ignore side-effects are gone.
5 #
If we were a rich world, we'd innovate our way out of this environmental crisis. We'd breed algae that piss petrol, and albatross that inhale hydrocarbons and then soar upwards to breach into the stratosphere, exhaling ozone. But we aren't a rich world, so we're tightening our belts and hoping something will turn up. Nothing will turn up though, because we're all there is to do the turning up. Don't be a Consie.
Refactoring our code to accommodate the width of brain passed by a human female's pelvis is similarly stupid. We need to wallow in code, and have the wings to fly in it, to navigate it and garden it to make it self organise.
Beer's ultrastables, Tainter's societies that loft themselves into order (falling upwards), and my recent reading about complexity (including Waldrop's Complexity) make me think of the Second Second Law of Thermodynamics. The (first) Second Law of Thermodynamics is the one that says that a room tends towards untidiness, and scrambled eggs will never spontaneously un-. The Second Second Law of Thermodynamics knows that this view is reductionist, because - with scale - small ultrastable units might emerge in the entropic soup: call them autocatalytic loops or machines for surviving. And these loops might grow, and do whatever they need to turn into ultrastables which can persist over time - a dynamic system that never converges or diverges - perhaps even developing ways of predicting the future or seeing over a distance: people, in other words. Given enough untidy rooms, life will inevitably occur, and a person will evolve who will tidy that room.
You know, seeing is just like predicting the future. Seeing is predicting the far away, because we can never know for sure.
All intelligence must have models of what it needs to predict: aspects of the external world refactored into mini replicas that preserve the behaviour we care about. If you cut open my brain, there is a model of my house in it. Evolution refactored reality into the brain at the beginning of the Holocene, and the favoured behaviours to preserve then aren't the same ones we need now.
David Deutsch, in The Fabric of Reality, brings up the Second Second Law of Thermodynamics in the context of astrophysics. The Sun will be dying in some billion years. If humans are still around, we'll have to fix it then or before then. Therefore the presence of life cannot be discounted in stellar evolution. Said stellar evolution can be modelled, in physics, and compared to the observed universe using the Hertzsprung-Russell diagram. If there is a divergence, either there's something wrong with the astrophysics or life - as predicted by the Second Second Law of Thermodynamics - has played a part and must be included in the equations. Or all life dies or moves beyond the physical universe before it gets to a point where it wants or needs to manipulate stars. Side effects are important. They mount up. Then survival machines emerge.
6 #
I was wondering how to model traffic, once we have flocking cars.
Flocking cars will come, first for motorways. Cat's eyes are being replaced by solar-powered LEDs--it won't be long before they have RFIDs in them, initially for inventory management, but then as a handy augmentation to GPS. So that'll be lane position sorted.
And cars already have systems to make sure the brake is being applied enough, and I don't think it'll be long until the steering wheel starts providing resistance if the car senses another car behind and to the side. Just a little helping hand, you know. There's already automatic parking. I heard about a VR system which only has a screen ahead and to the sides of the user, but simulates being able to look all the way around: the scene it projects moves twice as fast as the head. So when you look 90 degrees to your right, the scene shifts 180 degrees. You get used to it really quickly, apparently. I think we'll have something like that for cars, too, to deal with the information overload.
Then cars will begin to help out with driving a little more: a toggle switch to flick left and right between lanes, just as easily as changing gear. Think of it as an automatic clutch for changing lanes on freeways. Then cruise control will use the cars ahead and behind as cues, and by that time we'll have mesh networks everywhere so the cars will share information too. And then the insurance companies will lower the premiums if you opt not to drive manually on main roads, and that'll be that.
So what are the emergent properties of flocking cars? I think we'll need a certain kind of maths to model it, and that is what I was thinking about. It'll be a bit like signal processing: we'll look at the distances between successive cars, and monitor the pressure waves that move along that pipe. There will be standing waves caused by junctions, and the police will introduce special cars to act as signal dampers or oscillation providers, used to break up clots of cars. Having all the cars on the same rule system worries me, because the failure modes of homogeneous populations are nasty. We'll need a way of parameterising the rules so that different cars all behave slightly differently, so that traffic itself becomes an ultrastable system, responding to an accident by simply shifting the internal population of rules to change the equilibrium and flock around the grit instead of locking up entirely. That'll mean we'll have a statistical mechanics of traffic instead, and next to the weather reports in the newspaper we'll have pressure- and temperature-equivalent forecasts being reported for the road system. Heat is just the aggregate of the vibrations of a mass of particles, after all; the analogy works.
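As a toy version of that statistical mechanics: cars on a ring road, each with a slightly different follow parameter, with the spread of headways reported as a crude pressure-and-temperature analogue. The follower rule is made up for illustration, not a serious traffic model:

```python
import random
import statistics

ROAD = 1000.0   # circumference of the ring road, metres
N = 20          # number of cars

random.seed(1)
positions = sorted(random.uniform(0, ROAD) for _ in range(N))
speeds = [random.uniform(25, 30) for _ in range(N)]            # m/s
sensitivity = [random.uniform(0.05, 0.15) for _ in range(N)]   # per-car rule parameter

def step(dt: float = 0.5) -> None:
    """One tick: each car eases its speed towards a gap-dependent target."""
    global positions, speeds
    new_speeds = []
    for i in range(N):
        gap = (positions[(i + 1) % N] - positions[i]) % ROAD
        target = min(30.0, gap / 2.0)    # slow down as the gap closes
        new_speeds.append(speeds[i] + sensitivity[i] * (target - speeds[i]))
    speeds = new_speeds
    positions = [(p + v * dt) % ROAD for p, v in zip(positions, speeds)]

for _ in range(200):
    step()

gaps = [(positions[(i + 1) % N] - positions[i]) % ROAD for i in range(N)]
print("mean headway:", round(statistics.mean(gaps), 1), "m")
print("headway 'temperature' (variance):", round(statistics.variance(gaps), 1))
```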
We'll use the same reports to buy and sell futures in car insurance, hedging the additional commute risk by promising to take a safe trip to the grocery store in the evening.
7 #
I look forward to the day of flocking cars, because change is good. The more change there is, the more likely it is that some of the changes will be positive and form survival machines that persist. One way to bring forward that day is to deliberately drive badly. The more effort we put into driving well manually, the less need there is for robots. So we don't get flocking cars and we have to work harder. That's the story of the 20th century, if you ask me.
The communists were dangerous for two reasons: firstly they were internationalists, valuing actors whose behaviour they admired highly even if the actors were not in the local nation state. At any moment their motivations were potentially treasonous. Secondly they knew class revolution was inevitable, but attempted to bring it forward by promoting conflict, making riots out of protests, attacking the police and so on.
Driving badly, burning tyres on the roof and refusing to delete code are all aspects of bringing forward the revolution by promoting conflict. I am an internationalist of progress.
8 #
I was going to talk about groups and motivations on the Web, but first I want to briefly mention the Magna Carta.
Monarchy used to be a force of nature. It wasn't something you'd notice. Yes there were limits of the power of monarchy, but they were like gravity: since the king couldn't fly, he wouldn't try. He can't vaporise the barons with his mind, so why should he want to? The monarch has the power that a queen bee has in the hive: in dynamic equilibrium with the population. But equally the monarch isn't totally defined by the relationships to the population around it: if the monarchy was so defined, the person wouldn't be required because the shape of the hole would do.
Necessarily, the monarch is sometimes unpredictable (if it's known what the king would say in any given situation, the monarch wouldn't be required). So in the situations where the king is predictable, the situation unfolds without the monarch needing to be present as an actor: the king is like gravity. And when the opinion isn't known, the monarch is the final answer--the answer of the king can't be deduced without asking the king. There are no shortcuts.
Note that this is like the present day legal system. In most situations between two companies, a contract will exist but never be used: because each company can see what a contract says about a decision, the contract will inflect that decision without being run. In some situations a contract is ambiguous, and the legal system will compile and run it. In that case it cannot be known ahead of time what the result will be, otherwise there would be no point in running the legal system over the problem.
It is not possible to know what the king will say without asking the king; the king is only asked for a decision in cases where the answer cannot be known ahead of time. The king is the only source of unknowns in the society. In our world, it is the free market which is the only source of unknowns.
What the Magna Carta did - or rather, what the process that the Magna Carta was part of did - was turn the king into a thing. The thing-king is the king revealed. The important feature of the document isn't the constraints put on the king, but rather the fact that it is possible to bind to the king at all. It's like suddenly leaping above the ocean and realising that the ocean is there at all... and therefore tides! and therefore currents! and therefore the possibility of other bodies of water that aren't this ocean! Or it's like the way that air pollution revealed the atmosphere as something that could have varying parameters over its volume. Once the king is a thing, it is possible to bring the king into society, and constrain and rule him. That was the big deal.
So when I look at the economic imperatives of the free market, and the way the invisible hand is swiping us into situations that are frankly fucked up, I wonder: is it possible to make the free market a thing? Rather than complain about it and make laws about it (which of course won't work, because they're inside a system in which the only source of 'truth' is the free market (and by 'truth' I mean those facts which can't be known without computing them)), is it possible to describe the market to such a degree that the next steps will naturally be to internalise and constrain it?
Because whenever we describe something, we consume it. Look at the universe and physics. Or people, and psychology and mental hygiene. Or Africa.
To promote the end of the free market, if that's what you hate, discuss it--that's my advice. Discuss it and find the whorls and jetstreams of it. Find the laws of thermodynamics of it. Categorise the behaviours, without making the models that economists do. Taxonomy first, models next. Turn the market into a surface ripe for binding. As thermodynamics described steam and brought about the Industrial Revolution, describe the market.
This means we'll have metamarkets, in the end. Mini free markets captured and tuned to perform particular tasks, inside a society we can't currently grasp, just as China held Hong Kong in a bubble to propel it into orbit, and the Large Hadron Collider intends to create new zones of particular kinds of physics in order to perform scientific experiments.
9 #
Another thing the Web needs to do better is groups. A lot of social software takes non-social, individual activities and puts them into a social context: bookmarking, photos, even accountancy. Then what we've done is ported in existing social concepts (used by people to model and interact with other people in social settings) like identity, reputation and sharing, and created a substrate in which social patterns that emerge in the meat world can come about online too: groups, factions, and power relationships (I would say these exist more independently from people than the earlier concepts).
I want to think about social software in reverse. Can we take activities that are already group-based and irreducibly social in the real world, and make software that is good for them? I suspect that software for a running group or book club would not succeed by merely introducing ideas like friends lists and social sharing: these "features" were never absent. Instead new kinds of social functionality will need to be created. To be honest, I've no idea what this social functionality will be.
But to begin with, I do know that making social software for a pre-existing group activity is so different from the "individuals socialising" model we have at the moment that our usual references (Goffman's presentation of self, and human motivations from psychology) will fall down, and our normal feature set (signing in, and messaging with inboxes) won't apply.
The challenge I was thinking about was this: how would you design a sign-in system for a book club? Having them share a username and password doesn't seem elegant somehow: although they want the information they keep online to be held in common, in the meat world telling one person a username and password doesn't guarantee that knowledge passes to others in the group. So is there knowledge they do hold in common?
Perhaps the login system could be based around questions: 'what is a name of a blonde person in your group?' And let's say, to sign in, you answer three questions: two which are known by the system and one which isn't. The one which isn't known is asked several times and the answers correlated. This becomes another known fact the system can use in the official part of the sign-in process. The problem I see here is that people from outside the group could also sign in, and this is also the problem with traditional passwords: with my weblog, it's not the random stranger I want to prevent logging in--it's the potentially malicious people I might meet, who are the people most likely to guess my password (except that I use a strong password, but you get my drift). Somehow the group sign-in system has got to make use of the fact that a solid group exists out there in the world, and make use of trivia that only an official group member would know: 'did you have red or white wine at the last birthday dinner you went out for?' I don't know. I do know that it's a challenge to develop a group sign-in pattern, and once developed that pattern will encourage a thousand websites to bloom, just as a known login/registration pattern helped along both developers and users of the current batch of websites.
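To make the mechanism concrete, here's a sketch: two questions checked against facts the system already knows, plus one pending question whose answers are correlated across sign-ins and eventually promoted to a known fact. The questions, answers and agreement threshold are all invented for illustration:

```python
from collections import Counter

known = {
    "What is a name of a blonde person in your group?": {"kim"},
    "Which pub do you usually meet in?": {"the crown"},
}
pending = {"Did you have red or white wine at the last birthday dinner?": Counter()}
PROMOTE_AFTER = 3   # matching answers needed before a pending fact becomes known

def sign_in(answers: dict[str, str]) -> bool:
    ok = 0
    for question, answer in answers.items():
        answer = answer.strip().lower()
        if question in known:
            ok += answer in known[question]
        elif question in pending:
            pending[question][answer] += 1
            common, count = pending[question].most_common(1)[0]
            if count >= PROMOTE_AFTER:
                known[question] = {common}   # correlated answers become a known fact
                del pending[question]
    return ok >= 2   # both known questions must match

print(sign_in({
    "What is a name of a blonde person in your group?": "Kim",
    "Which pub do you usually meet in?": "The Crown",
    "Did you have red or white wine at the last birthday dinner?": "red",
}))
```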
10 #
It also makes me wonder what analogues groups have to individuals. There is still the concept of forgetting: multiple members of a group might leave, and some of the group memory will be lost. And there will still be a need to change the identification method on demand.
In terms of what functionality we'd offer to groups, I was thinking about what an analogue to a blogging system would be to a small group of people. I don't think it's just a blog with multiple authors. Wikis are better, but they don't leave a contrail through time like blogs do. There's no publication.
Maybe a wiki for making a zine would be better: a wiki you'd be able to edit by answering the group login questions. There would be a certain amount of structure: page numbers, sidebars, links and a contents page. There would be different page formats: long form, big pictures, and intro material (there would be templates for these). There would also be a big clock at the top right: 'going to press in 30 days!' it'd say. And it'd gradually count down. On the final day, the wiki would freeze its content, and allow only small changes for a further day or two, at which point the entire thing would be automatically compiled into a zine, and a PDF generated and published. Old-schoolers will recognise this as what Organizine could have become but never did.
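A tiny sketch of that countdown-and-freeze cycle; the press date and the two-day grace window are assumptions, and the final state is just a placeholder for where the PDF compilation would happen:

```python
from datetime import date, timedelta

PRESS_DATE = date(2008, 1, 31)   # 'going to press in 30 days!'
GRACE = timedelta(days=2)        # only small changes allowed after the freeze

def wiki_state(today: date) -> str:
    if today < PRESS_DATE:
        return f"open: going to press in {(PRESS_DATE - today).days} days!"
    if today <= PRESS_DATE + GRACE:
        return "frozen: small changes only"
    return "published: compile the pages into the zine and generate the PDF"

for offset in (-30, 0, 1, 3):
    print(wiki_state(PRESS_DATE + timedelta(days=offset)))
```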
11 #
Since "Incidentally, I'm flicking through my notebook," I've been writing whatever seemed to follow on. That was page 6 of my notebook, and this notebook started in October 2007. I've just thought about looking down at my notebook again and come to page 7, which - happily - is another comment about the Web and interactive systems more generally.
Flickr is playful, and it is structured to bend the trajectory of users back into it. In a way, it's a true massively multiplayer game: with most games and ARGs the player is basically playing against the game developers and designers, who are the ones generating the rulespace. With Flickr you get the feeling that you are playing in partnership with the developers, both of you playing together in the foam of the nature of the Web and photos and social systems at large. You have different abilities, that's all.
To generalise Flickr's attributes, successful interactive systems will bend users back towards them, whether by play or not. A tool like a screwdriver doesn't need to do anything to bend people back towards it because it's driven by necessity: every time you need to deal with a screw, you'll pick up the screwdriver. But websites are abundant. The Web is crusted with functionality. In an ocean of passive particles, the ones that dominate are the ones that learn how to autocatalyse and reproduce (another expression of the Second Second Law of Thermodynamics, or just natural selection).
In order to keep going, the path of a user through a website must be designed to never end. In order for the website to grow, the path of the user must be designed to bring in more users, as in a nuclear chain reaction.
12 #
Computer programmes are something else that must not halt unintentionally. The way this is done is to model the application as a collection of finite-state machines. Each machine can exist in a number of different states, and for each state there are a number of conditions to pass to one or another of the other states. Each time the clock ticks, the machine sees which conditions are matched, and updates the state accordingly. It is the job of the programmer to make sure the machine never gets into a state out of which there is no exit, or faces a condition for which there is no handling state. There are also more complex failure modes.
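Something like this minimal sketch, where the states and conditions are placeholders rather than any real application, and the dead-end check is the programmer's job mentioned above:

```python
from typing import Callable

class StateMachine:
    def __init__(self, start: str,
                 transitions: dict[str, list[tuple[Callable[[dict], bool], str]]]):
        self.state = start
        self.transitions = transitions

    def dead_ends(self) -> set[str]:
        """States that can be reached but have no exit: the machine stalls there."""
        reachable = {dst for rules in self.transitions.values() for _, dst in rules}
        return {s for s in reachable if not self.transitions.get(s)}

    def tick(self, inputs: dict) -> str:
        """One clock tick: take the first transition whose condition is matched."""
        for condition, next_state in self.transitions.get(self.state, []):
            if condition(inputs):
                self.state = next_state
                break
        return self.state

machine = StateMachine("idle", {
    "idle":    [(lambda i: i.get("job"), "working")],
    "working": [(lambda i: i.get("done"), "idle"),
                (lambda i: i.get("error"), "failed")],   # 'failed' has no exit
})
print(machine.dead_ends())            # -> {'failed'}
print(machine.tick({"job": True}))    # -> 'working'
print(machine.tick({"error": True}))  # -> 'failed'
```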
Getting Things Done, by David Allen, describes a finite-state machine for dealing with tasks. Each task has a state ('in,' 'do it,' 'delegate it,' 'defer it,' 'trash' and more) and actions to perform and conditions to be met to move between the states. The human operator is the clock in this case, providing the ticks. This machine does have exit points, where tasks stop circulating and fall off.
The cleverness of Getting Things Done is to wrap this finite-state machine in another finite-state machine which, instead of running on the tasks, runs on the human operator itself, the same operator who provides the ticks. The book is set up to define and launch the state machine which will keep the human in the mode of running the task machine. If they run out of tasks, the GTD machine has a way of looping them back in with tickle files and starting again the next day. If they get into an overwhelmed state, the GTD machine has a way of pruning the tasks. If they get demotivated and stop running the task machine, the GTD machine has ways of dealing with that. Alcoholics Anonymous has to deal with this state too, and it's called getting back on the wagon. The GTD machine even has a machine wrapped around it, one comprising a community to provide external pressure. Getting Things Done is a finite-state machine that runs on people; a network of states connected by motivations, rationale and excuses, comprising a programme whose job it is to run the task machine.
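Written out as data (a loose paraphrase of the GTD flowchart rather than Allen's exact diagram), the task machine might look like this, with the human answering one question per tick:

```python
# Each task holds a state; the human operator is the clock, moving tasks along
# by answering the workflow questions. 'trash' and 'done' are the exit points.
TASK_MACHINE = {
    "in":          [("is it actionable?", "next action"),
                    ("is it rubbish?", "trash")],
    "next action": [("takes under two minutes?", "do it"),
                    ("someone else's job?", "delegate it"),
                    ("not right now?", "defer it")],
    "do it":       [("finished?", "done")],
    "delegate it": [("waiting on a reply?", "defer it")],
    "defer it":    [("tickle file comes round?", "in")],   # loops the task back in
    "trash":       [],
    "done":        [],
}

def tick(state: str, answered_yes_to: str) -> str:
    """One tick from the human operator: answer one question, move the task on."""
    for question, next_state in TASK_MACHINE.get(state, []):
        if question == answered_yes_to:
            return next_state
    return state

print(tick("in", "is it actionable?"))                   # -> 'next action'
print(tick("next action", "takes under two minutes?"))   # -> 'do it'
```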
13 #
Websites can also be seen as finite-state machines that run on people. Successful websites must be well-designed machines that run on people, that don't crash, don't halt, and have the side-effect of bringing more people in. Websites that don't do this will disappear.
Instead of a finite-state machine, think of a website as a flowchart of motivations. For every state the user is in, there are motivations: it's fun; it's the next action; it saves money; it's intriguing; I'm in flow; I need to crop the photo and I remember there's a tool to do it on that other page; it's pretty.
If you think about iPhoto as its flowchart of motivations, the diagram has to include cameras, sharing, printers, Flickr, using pictures in documents, pictures online and so on. Apple are pretty good at including iPhoto all over Mac OS X, to fill out the flowchart. But it'd make more sense if I could also see Flickr as a mounted drive on my computer, or in iPhoto as a source library just as I can listen to other people's music on the same LAN in iTunes. This is an experience approach to service design.
Users should always know their next state, how they can reach it, and why they should want to.
If I were to build a radio in this way, it would not have an 'off' button. It would have only a 'mute for X hours' button because it always has to be in a state that will eventually provoke more interaction.
Designing like this means we need new metrics drawn from ecology. Measurements like closure ratio become important. We'll talk about growth patterns, and how much fertiliser should be applied. We'll look at entropy and population dynamics.
Maybe we'll look at marketing too. Alex Jacobson told me about someone from old-school marketing he met who told him there are four reasons people buy your product: hope, fear, despair and greed. Hope is when your meal out at the restaurant is because it's going to be awesome. Fear is because you'll get flu and lose your job unless you take the pills every day. Despair is needs not wants: buying a doormat, or toilet paper, or a ready-meal for one. Greed gets you more options to do any of the above, like investing. Yeah, perhaps. Typologies aren't true, but they're as true as words, which also aren't true but give us handholds on the world and can springboard us to greater understanding. We can kick the words away from underneath ourselves once we reach enlightenment.
14 #
The lack of suitable motivations is also why we don't have drugs that make us superheroes, and this is also a failure of the free market because obviously these drugs would be cool. It's not like there hasn't been a search for interesting drugs (drugs for something other than pure pleasure or utility). The second half of Phenethylamines I Have Known And Loved documents the Shulgins' subjective experiences with 179 compounds that act like the neurotransmitter phenethylamine.
"There was a vague awareness of something all afternoon, something that might be called a thinness." (#125)
"I easily crushed a rose, although it had been a thing of beauty." (#157)
PiHKAL was written in 1991. Viagra wasn't patented till 1996. Why haven't the last 16 years been spent on substituted phenethylamines, doing for consciousness what Viagra does for penises?
In The Player of Games, Iain M. Banks talks about Sharp Blue, a drug which is "good for games. What seemed complicated became simple; what appeared insoluble became soluble; what had been unknowable became obvious. A utility drug; an abstraction-modifier; not a sensory enhancer or a sexual stimulant or a physiological booster."
I was talking about this with Tom and Alex and some Matts in the pub a few months ago. Why don't we have abstraction modifier drugs now? Why are there no drugs to help me think in hierarchies, or with models, or to make cross connections? Or rather, since drugs exist that do this in a coarse and illegal way now, why haven't these been tuned? I had been busy all day coding highly interlocking finite-state machines.
If I wanted to make such drugs, it would be a waste of my time to develop them myself. My best way forward would be to manipulate the market and society to want the drugs, and leave the rest up to pharma industry. Products are often invented (or revealed) to mediate a risk. The particular risk is chosen (from many) to make a particular product: there is no cause and effect here; it is choreographed between the risk and mediation. For example, the AIDS risk has been revealed as HIV, as there is money to be made in designing drugs to combat this particular molecule. However the AIDS risk is part of the risk of epidemic death and the cause of this is more properly revealed as poverty, not a virus. The product to mitigate and manage AIDS-as-poverty is not as palatable to the world or as tractable for the market: it is expensive and hard and has rewards which are hard to quantify (I'm badly misrepresenting Risk and Technological Culture, Joost van Loon, I'm sure).
Risks are deliberately revealed in small ways too. Dove invented, in advertising, the concept of the un-pretty underarm (no 'armpit' here). Fortunately they have launched products to mitigate this problem.
So first the problem of abstraction difficulty must be revealed as a risk. This can be done in a traditional marketing way, with press releases and spurious research. Then various ways must be tested to mitigate this risk: these need to be simultaneously difficult and expensive, and vastly popular. The intention here is to demonstrate that there is a market for any company who creates a better method, which is profitable enough that some expensive research can be done up-front. For this, imagine popularising a method like Getting Things Done crossed with the creation and value of the diamond industry and the accessibility and mind-share of omega-3 in fish oil. I'm sure the alternative therapy industry would be unwitting, happy partners in this.
Once the risk is revealed and the market visible, it's a matter of providing to the pharma industry the material they need to persuade governments to make this legal. That is, the facts that the ability to abstract is essential to business, and that it's a matter which is outside the scope of government.
From there the invisible hand will take care of everything. A technician will be reading some lab reports and realise that some of the test subjects exhibited the qualities discussed as necessary to the country in a newspaper article they read last week. A research programme will be suggested. The business people will assess the market size, based on published research and the number of people buying games like Brain Training. The government will turn a blind eye because business needs to be competitive. We could have an abstraction drug inside a decade.
One day people will look back on the above paragraphs and see that what was suggested was what happened. It will be my L. Ron Hubbard moment.
15 #
Earlier this year I met a biochemist who is part of a research group that has created an artificial store of energy which is as small and energy-dense as fats, and as quick release as sugars--those being the two ways cells in the body store and retrieve energy. Because every cell uses energy, rats eating food with this new compound are tested to be smarter and have better endurance. Holy shit. The same fellow was wearing a hearing aid with a switch that made him hear better at parties than I do.
I would like to meet more people like him. I am currently working on an idea which may turn into a small product which I am prepared to underwrite, and it requires I meet with people who are decent-ish writers (who would like to write more and are probably teachers) who are interested in the world and typologies, and understand one of these areas at a 16 year-old to first-year undergraduate level: geography and city models; meteorology; the Austro-Hungarian Empire; soil; tectonics; the metabolic cycle; proteins and enzymes; U.S. highway design; dendritic patterns; closed-system ecology like Biosphere 2; farm management.
The kind of person I have in mind is Dr Vaughan Bell, who was introduced to me by a friend to write a hack or two for Mind Hacks. He wrote a whole bunch (he was already a pretty serious Wikipedia contributor, among other public understanding of science activities), communicates incredibly, and has turned the Mind Hacks blog into one of the top 5,000 weblogs globally, increasing general knowledge and interest in psychology and cognitive science immeasurably. Please let me know of anyone of whom you're reminded by these two paragraphs.
16 #
Vending machines on the street sell mixed smoothies. Each machine is populated with a selection of 8 from dozens of base fruit smoothies. There are 10 options on the machine, representing different mixes of the 8 fruit flavours. Genetic algorithms are used to evolve the smoothies towards the optimum flavours for that neighbourhood, based on what sells. Variety is introduced by having wild-card base flavours in that 8th slot. Sometimes you take a detour on the way to work to help out training a machine to produce your favourite cocktail.
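A toy version of that learning loop, with the neighbourhood's taste simulated by a hidden preference vector; the selection-and-mutation scheme is the simplest genetic algorithm that could work, not a claim about how a real machine would do it:

```python
import random

random.seed(7)
BASES = 8      # base smoothies loaded into the machine
BUTTONS = 10   # mixes on offer
taste = [random.random() for _ in range(BASES)]   # the neighbourhood's hidden preference

def normalise(mix):
    total = sum(mix) or 1.0
    return [x / total for x in mix]

def sales(mix):   # stand-in for a week of till data
    return sum(m * t for m, t in zip(mix, taste))

def mutate(mix, amount=0.1):
    return normalise([max(0.0, x + random.uniform(-amount, amount)) for x in mix])

menu = [normalise([random.random() for _ in range(BASES)]) for _ in range(BUTTONS)]

for week in range(50):
    menu.sort(key=sales, reverse=True)
    survivors = menu[:BUTTONS // 2]   # keep the best sellers
    menu = survivors + [mutate(random.choice(survivors))
                        for _ in range(BUTTONS - len(survivors))]

best = max(menu, key=sales)
print("best-selling mix leans on flavour", best.index(max(best)),
      "and the neighbourhood's favourite is flavour", taste.index(max(taste)))
```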
Additionally: 'Coke Continuous Change' is a variety of Coca Cola that is different every time you buy it. The company manufactures batches of varying recipes and ships out crates of unique mixes. Each can has a code on it that also represents the recipe. Using feedback from drinkers, Coke can optimise the level of variety and serendipity on a hyperlocal level. The only constant is there is no constant. If you hate the one you're drinking, buy another.
That'll do.
Wrapping up 2007: As Borges wrote reviews of non-existent books, I have notes for essays I'll never write. Here I've collected what's been on my mind the last couple of months.
1 #
The common theme of Web 2.0 Expo Berlin was surfaces, which I picked up primarily from a talk on microformats as nanotech by Jeremy Keith and a conversation with Terry Jones.
In short: the surface of the Web is currently pages - these are the atoms - which are interlinked (Tom Coates talks about how to be native to the Web of data). Search engines index the text, and our browsers make local deductions from the raw source to show rendered pages.
What microformats and other forms of structure do is increase the resolution of the Web: each page becomes a complex surface of many kinds of wrinkles, and by looking at many pages next to each other it becomes apparent that certain of these wrinkles are repeated patterns. These are microformats, lists, blog archives, and any other repeating elements. Now this reminds me of proteins, which have surfaces, part of which have characteristics shared between proteins. And that in turn takes me back to Jaron Lanier and phenotropics, which is his approach to programming based on pattern recognition (I've looked into this before).
So what does phenotropics mean for the Web? Firstly it means that our browsers should become pattern recognition machines. They should look at the structure of every page they render, and develop artificial proteins to bind to common features. Once features are found (say, an hCalendar microformat), scripting can occur. And other features will be deduced: plain text dates 'upgraded' to microformats on the fly. By giving the browser better senses - say, a copy of WordNet and the capability of term extraction - other structures can be detected and bound to (I've talked about what kind of structures before).
While browsers look for patterns inside pages, search engines would look for inter-page, structural features: the research search engine companies should be doing now is into what kind of linking goes on. Just as there is a zoology of traffic, and the exploration into the world of cellular automata resembles a biologist's hacking into a rainforest rather than the scientific method, I want a typology of the fine structure of the Web: dense pockets that persist over time; starbursts; ping-pong conversations with a slowly changing list of references. What animals are these and how do I search them? Here's an example of the kind of search I want to do: 'conversations that have arisen in a small, pre-existing group of friends that converge on referencing projects identified by the wider Web as physical computing.' Or the same search but for conferences, and then have my browser scoot over the pages, and deduce event microformats from them.
The technological future of the Web is in micro and macro structure. The approach to the micro is akin to proteins and surface binding--or, to put it another way, phenotropics and pattern matching. Massively parallel agents need to be evolved to discover how to bind onto something that looks like a blog post; a crumb-trail; a right-hand nav; a top 10 list; a review; an event description; search boxes. Functionality can be bound to the pattern matchers: Technorati becomes a text search over everything that has a blog post matcher bound to it; a site with a search matcher bound to it can have extra functionality offered in the browser, for site-wide search offered in a consistent way for every site on the Web (ditto crumb-trails and site maps).
The macro investigation is like chemistry. If pages are atoms, what are the molecules to which they belong? What kind of molecules are there? How do they interact over time? We need a recombinant chemistry of web pages, where we can see multiple conversation molecules, with chemical bonds via their blog post pattern matchers, stringing together into larger scale filaments. What are the long-chain hydrocarbons of the Web? I want Google, Yahoo and Microsoft to be mining the Web for these molecules, discovering and name them.
The most important thing I think we've learned about the Web in the last 10 years is this: there is not infinite variety. That means the problem of finding the patterns in pages and inter-page structure is a tractable one. It might be large and difficult, but it can be done: you make a first approximation that finds patterns in most of the Web, and then another to find patterns in most of the rest, then most of the rest, then most of the rest, and then special case it. Then you'll be done. It'll take years. Whatever.
Micro/macro structure is the first of the challenges that faces the Web.
2 #
Incidentally, I'm flicking through my notebook looking for more notes on this topic and I've run across a reminder to look into the three ways ultrastable institutions adapt to changing circumstances in order to continue being "machines for surviving," as identified by Stafford Beer in his book Platform for Change. Beer talks about social institutions such as 'schooling,' 'the general practice of medicine' and 'penal establishments' (his examples). These are self-organising and self-regulating systems. As their environment changes, how do they not collapse? How are they not sensitive to shock?
Beer says that an ultrastable social institution will do one of three things in response to change:
As simple as this typology is, I like it because it's a springboard to start discussing how things change. The metaphor that's all to often used for how things change is the Uncertainty Principle from quantum mechanics: when you look at (make a measurement of) one parameter of a system, another parameter flips so you know less. If you know where it is, you can't know how fast it's moving. And yes, it's true, measuring something changes it. But using this metaphor denies us more knowledge: it stops us asking how it changes, and how much, and why.
The metaphor Beer uses, and the one I remember my friend Andrew using, is Le Chatelier's principle. This is from chemistry. It says that a system in equilibrium which is given a prod will change internally to counteract that prod and move into a new equilibrium. So let's say we're attempting to measure domestic violence by putting cameras in houses. Of course this method will change the nature and frequency of the violence, but rather than stopping at "it changes," Le Chatelier leaves open the possibility of figuring out what sort of change has occurred. What happens in other situations where we've used cameras?, we would ask. How have reports changed over time, to reach this new equilibrium? By acting on the system and looking at the responses, we can learn about its internal mechanics, which is a form of science I'm much happier with than the very Zen but not particularly helpful Uncertainty Principle.
3 #
This in turn makes me think of Tainter's Collapse of Complex Societies (which I've talked about before, in the context of New Puritans). States form for several reasons, one of which is external conflict or pressure; another is internal management. Fundamentally, though, Tainter casts the continued existence (or not) of a complex society as a economic decision by the population (or a privileged elite who can exert the necessary power): it is cheaper to have centralised complexity than have multiple simpler villages. The whole system essentially 'rolls downhill' into increasing order. (This is similar to how I rationalise takeoff of aeroplanes to myself: the wings are structured such that, at a certain speed, the system of plane and wings and air finds it easier and more stable to fall upwards, carried in a basket of potential energy.)
Collapse, in Tainter's view, only looks like collapse because we privilege the institutions of complexity: collapse is a sensible economic decision on the part of the people in the society, to refactor complexity into simple units, and do away with the cost of the managerial organisation. If building of monuments - capacitors of work capacity, essentially - stops, everyone is a little bit richer (though: no more pyramids).
But marry this with Beer and ultrastable social institutions. The ultrastable in this case is the state itself, which contains the whole population. It will change internally, and change its relationship with ultrastables around it, in order to survive. Tainter is reductionist to not regard institutions as actors in their own right. From this perspective, we could see the complexity of the USA as the outcome of a sensible economic decision (paying for safety at the cost of simplicity) during the Cold War period of the 1950s to 1980s. As the threat of nuclear war receded, the complexity of the US (the West in general, really) could also have reduced. It is insupportable without external pressure, like a sandcastle moat on the beach where the walls are thinning. So, being ultrastable, the US state survival machine generates new threats to keep it's own continued existence being a sensible economic decision. It adjusts internally and externally to create ground force to loft itself upwards, to keep itself in the air. That's how I read the War on Terror, anyhow.
4 #
I mentioned refactoring earlier, so an additional point on that: I have a feeling that refactoring code is not a good thing. I am not in favour of deleting code. If there are problems with code the way it is written, there should be mechanisms to code over it gradually, and leave the old code there. If it becomes too complicated, we need more convenience functions and not less. A codebase should be its own source repository: seeing what the code was like a year ago shouldn't be a check-out from source control, but archeology.
I'm not entirely sure why I believe this, but here's a tangential reason. The reason code is refactored is because when code gets complicated, it's hard to understand and maintain. But keeping code simple limits the type of problems we can tackle to ones which have solutions comprehensible to the human brain. Yes, breaking problems down into nuggets that are human brain sizes is one strategy that works. But I think we'll find it's as small a part of the problem space as linear equations model only a small part of all dynamic systems. This may mean we need new ways of approaching programming. Instead of simplicity, we need better tools and languages. Maybe a string class will have to be a gigabyte of raw code. Who cares. Maybe HTML parsing libraries evolve inside themselves and compete. Maybe what my code does, instead of calling a method on an object, is give a list of examples of the kind of responses I know I'd like, and use that to dynamically select from a huge number of pattern matching code libraries that live in the cloud. And maybe if it doesn't work quite right, we don't call that a bug but we call it an inadequate model and write yet more code to make it work better.
Refactoring code means we say there are certain behaviours that are important, which are those to be kept, and other there aren't. I say, who are we to say what's important. The days we can ignore side-effects are gone.
5 #
If we were a rich world, we'd innovate our way out of this environmental crisis. We'd breed algae that piss petrol, and albatross that inhale hydrocarbons and then soar upwards to breach into the stratosphere, exhaling ozone. But we aren't a rich world, so we're tightening our belts and hoping something will turn up. Nothing will turn up though, because we're all there is to do the turning up. Don't be a Consie.
Refactoring our code to accommodate the width of brain passed by a human female's pelvis is similarly stupid. We need to wallow in code, and have the wings to fly in it, to navigate it and garden it to make it self organise.
Beer's ultrastables, Tainter's societies that loft themselves into order (falling upwards), and my recent reading about complexity (including Waldrop's Complexity) makes me think of the Second Second Law of Thermodynamics. The (first) Second Law of Thermodynamics is the one that says that a room tends towards untidyness, and scrambled eggs will never spontaneously un-. The Second Second Law of Thermodynamics knows that this view is reductionist, because - with scale - small ultrastable units might emerge in the entropic soup: call them autocatalytic loops or machines for surviving. And these loops might grow, and do whatever they need to turn into ultrastables which can persist over time - a dynamic system that never converges or diverges - perhaps even developing ways of predicting the future or seeing over a distance: people, in other words. Given enough untidy rooms, life will inevitably occur, and a person will evolve who will tidy that room.
You know, seeing is just like predicting the future. Seeing is predicting the far away, because we can never know for sure.
All intelligence must have models of what it needs to predict: aspects of the external world refactored into mini replicas that preserve the behaviour we care about. If you cut open my brain, there is a model of my house in it. Evolution refactored reality into the brain at the beginning of the Holocene, and the favoured behaviours to preserve then aren't the same ones we need now.
David Deutsch, in The Fabric of Reality, brings up the Second Second Law of Thermodynamics in the context of astrophysics. The Sun will be dying in some billion years. If humans are still around, we'll have to fix it then or before then. Therefore the presence of life cannot be discounted in stellar evolution. Said stellar evolution can be modelled, in physics, and compared to the observed universe using the Hertzsprung-Russell diagram. If there is a divergence, either there's something wrong with the astrophysics or life - as predicted by the Second Second Law of Thermodynamics - has played a part and must be included in the equations. Or all life dies or moves beyond the physical universe before it gets to a point where it wants or needs to manipulate stars. Side effects are important. They mount up. Then survival machines emerge.
6 #
I was wondering how to model traffic, once we have flocking cars.
Flocking cars will come, first for motorways. Cats eyes are being replaced by solar powered LEDs--it won't be long before they have RFIDs in them, initially for inventory management, but then as a handy augmentation to GPS. So that'll be lane position sorted.
And cars already have systems to make sure the brake is being applied enough, and I don't think it'll be long until the steering wheel starts providing resistance is the cars senses another car behind and to the side. Just a little helping hand, you know. There's already automatic parking. I heard about a VR system which only has a screen ahead and to the sides of the user, but simulates being about to look all the way around: the scene it projects moves twice as fast as the head. So when you look 90 degrees to your right, the scene shifts 180 degrees. You get used to it really quickly, apparently. I think we'll have something like that for cars, too, to deal with the information overload.
Then cars will begin to help out with driving a little more: a toggle switch to flick left and right between lanes, just as easily as changing gear. Think of it as an automatic clutch for changing lanes on freeways. Then cruise control will use the cars ahead and behind as cues, and by that time we'll have mesh networks everywhere so the cars will share information too. And then the insurance companies will lower the premiums if you opt not to drive manually on main roads, and that'll be that.
So what are the emergent properties of flocking cars? I think we'll need a certain kind of maths to model it, and that is what I was thinking about. It'll be a bit like signal processing: we'll look at the distances between successive cars, and monitor the pressure waves that move along that pipe. There will be standing waves causes by junctions, and the police will introduce special cars to act as signal dampers or oscillation providers, used to break up clots of cars. Having all the cars on the same rule system worries me, because the failure modes of homogeneous populations are nasty. We'll need a way of parameterising the rules so that different cars all behave slightly differently, so that traffic itself becomes an ultrastable system, responding to an accident by simply shifting the internal population of rules to change the equilibrium and flock around the grit instead of locking up entirely. That'll mean we'll have a statistical mechanics of traffic instead, and next to the weather reports in the newspaper we'll have pressure- and temperature-equivalent forecasts being reported for the road system. Heat is just the aggregate of the vibrations of a mass of particles, after all; the analogy works.
We'll use the same reports to buy and sell futures in car insurance, hedging the additional commute risk by promising to take a safe trip to the grocery store in the evening.
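To make the parameterised-rules idea concrete, here's a minimal sketch (a toy, not a traffic model; every number is invented): a single-lane ring road where each car runs the same following rule but with its own parameters drawn from a distribution, and where the variance of the headways stands in for the temperature of the traffic.

```python
import random

# Toy single-lane ring road. Each car follows the one ahead with its own
# slightly different parameters, so the population of rules is heterogeneous.
ROAD_LENGTH = 1000.0    # metres, wraps around
N_CARS = 50
DT = 0.1                # seconds per tick

class Car:
    def __init__(self, position):
        self.position = position
        self.speed = 10.0
        # Per-car parameters drawn from a distribution: the 'slightly
        # different rules' that stop the whole flock failing identically.
        self.target_gap = random.gauss(15.0, 2.0)   # preferred headway, metres
        self.gain = random.gauss(0.5, 0.1)          # how hard it corrects

    def step(self, gap):
        # Speed up or brake in proportion to the headway error.
        self.speed = max(0.0, self.speed + self.gain * (gap - self.target_gap) * DT)
        self.position = (self.position + self.speed * DT) % ROAD_LENGTH

cars = [Car(i * ROAD_LENGTH / N_CARS) for i in range(N_CARS)]

for tick in range(2000):
    ordered = sorted(cars, key=lambda c: c.position)
    gaps = [(ordered[(i + 1) % N_CARS].position - car.position) % ROAD_LENGTH
            for i, car in enumerate(ordered)]
    for car, gap in zip(ordered, gaps):
        car.step(gap)
    if tick % 500 == 0:
        # 'Temperature' of the traffic: the variance of the headways.
        mean = sum(gaps) / N_CARS
        print(f"tick {tick}: headway variance {sum((g - mean) ** 2 for g in gaps) / N_CARS:.1f}")
```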
7 #
I look forward to the day of flocking cars, because change is good. The more change there is, the more likely it is that some of the changes will be positive and form survival machines that persist. One way to bring forward that day is to deliberately drive badly. The more effort we put into driving well manually, the less need there is for robots. So we don't get flocking cars and we have to work harder. That's the story of the 20th century, if you ask me.
The communists were dangerous for two reasons: firstly they were internationalists, valuing actors whose behaviour they admired highly even if the actors were not in the local nation state. At any moment their motivations were potentially treasonous. Secondly they knew class revolution was inevitable, but attempted to bring it forward by promoting conflict, making riots out of protests, attacking the police and so on.
Driving badly, burning tyres on the roof and refusing to delete code are all aspects of bringing forward the revolution by promoting conflict. I am an internationalist of progress.
8 #
I was going to talk about groups and motivations on the Web, but first I want to briefly mention the Magna Carta.
Monarchy used to be a force of nature. It wasn't something you'd notice. Yes, there were limits to the power of monarchy, but they were like gravity: since the king couldn't fly, he wouldn't try. He couldn't vaporise the barons with his mind, so why should he want to? The monarch has the power that a queen bee has in the hive: in dynamic equilibrium with the population. But equally the monarch isn't totally defined by the relationships to the population around it: if the monarchy were so defined, the person wouldn't be required because the shape of the hole would do.
Necessarily, the monarch is sometimes unpredictable (if it's known what the king would say in any given situation, the monarch wouldn't be required). So in the situations where the king is predictable, the situation unfolds without the monarch needing to be present as an actor: the king is like gravity. And when the opinion isn't known, the monarch is the final answer--the answer of the king can't be deduced without asking the king. There are no shortcuts.
Note that this is like the present day legal system. In most situations between two companies, a contract will exist but never be used: because each company can see what a contract says about a decision, the contract will inflect that decision without being run. In some situations a contract is ambiguous, and the legal system will compile and run it. In that case it cannot be known ahead of time what the result will be, otherwise there would be no point in running the legal system over the problem.
It is not possible to know what the king will say without asking the king; the king is only asked for a decision in cases where the answer cannot be known ahead of time. The king is the only source of unknowns in the society. In our world, it is the free market which is the only source of unknowns.
What the Magna Carta did - or rather, what the process that the Magna Carta was part of did - was turn the king into a thing. The thing-king is the king revealed. The important feature of the document isn't the constraints put on the king, but rather the fact that it is possible to bind to the king at all. It's like suddenly leaping above the ocean and realising that the ocean is there at all... and therefore tides! and therefore currents! and therefore the possibility of other bodies of water that aren't this ocean! Or it's like the way that air pollution revealed the atmosphere as something that could have varying parameters over its volume. Once the king is a thing, it is possible to bring the king into society, and constrain and rule him. That was the big deal.
So when I look at the economic imperatives of the free market, and the way the invisible hand is swiping us into situations that are frankly fucked up, I wonder: is it possible to make the free market a thing? Rather than complain about it and make laws about it (which of course won't work, because they're inside a system in which the only source of 'truth' is the free market (and by 'truth' I mean those facts which can't be known without computing them)), is it possible to describe the market to such a degree that the next steps will naturally be to internalise and constrain it?
Because whenever we describe something, we consume it. Look at the universe and physics. Or people, and psychology and mental hygiene. Or Africa.
To promote the end of the free market, if that's what you hate, discuss it--that's my advice. Discuss it and find the whorls and jetstreams of it. Find the laws of thermodynamics of it. Categorise the behaviours, without making the models that economists do. Taxonomy first, models next. Turn the market into a surface ripe for binding. As thermodynamics described steam and brought about the Industrial Revolution, describe the market.
This means we'll have metamarkets, in the end. Mini free markets captured and tuned to perform particular tasks, inside a society we can't currently grasp, just as China held Hong Kong in a bubble to propel it into orbit, and the Large Hadron Collider intends to create new zones of particular kinds of physics in order to perform scientific experiments.
9 #
Another thing the Web needs to do better is groups. A lot of social software takes non-social, individual activities and puts them into a social context: bookmarking, photos, even accountancy. Then what we've done is ported in existing social concepts (used by people to model and interact with other people in social settings) like identity, reputation and sharing, and created a substrate in which social patterns that emerge in the meat world can come about online too: groups, factions, and power relationships (I would say these exist more independently from people than the earlier concepts).
I want to think about social software in reverse. Can we take activities that are already group-based and irreducibly social in the real world, and make software that is good for them? I suspect that software for a running group or book club would not succeed by merely introducing ideas like friends lists and social sharing: these "features" were never absent. Instead new kinds of social functionality will need to be created. To be honest, I've no idea what this social functionality will be.
But to begin with, I do know that making social software for a pre-existing group activity is so different from the "individuals socialising" model we have at the moment that our usual references (Goffman's presentation of self, and human motivations from psychology) will fall down, and our normal feature set (signing in, and messaging with inboxes) won't apply.
The challenge I was thinking about was this: how would you design a sign-in system for a book club? Having them share a username and password doesn't seem elegant somehow: although they want to hold their online information in common, in the meat world telling one person a username and password doesn't guarantee that knowledge passes to the others in the group. So is there knowledge they do hold in common?
Perhaps the login system could be based around questions: 'what is a name of a blonde person in your group?' And let's say, to sign in, you answer three questions: two which are known by the system and one which isn't. The one which isn't known is asked several times and the answers correlated. This becomes another known fact the system can use in the official part of the sign-in process. The problem I see here is that people from outside the group could also sign in, and this is also the problem with traditional passwords: with my weblog, it's not the random stranger I want to prevent from logging in--it's the potentially malicious people I might meet, who are the people most likely to guess my password (except that I use a strong password, but you get my drift). Somehow the group sign-in system has got to make use of the fact that a solid group exists out there in the world, and make use of a piece of trivia that only an official group member would know: 'did you have red or white wine at the last birthday dinner you went out for?' I don't know. I do know that it's a challenge to develop a group sign-in pattern, and once developed that pattern will encourage a thousand websites to bloom, just as a known login/registration pattern helped along both developers and users of the current batch of websites.
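Here's one hedged sketch of how the correlation might work (all the questions, names and thresholds are invented): two of the three challenges are checkable, the third is harvested across successful sign-ins, and once enough members agree on an answer it gets promoted into the pool of real challenges.

```python
import random
from collections import Counter

class GroupSignIn:
    PROMOTE_AFTER = 3   # matching answers needed before a question is trusted

    def __init__(self, known_questions, candidate_questions):
        self.known = dict(known_questions)     # question -> accepted answer
        self.candidates = list(candidate_questions)
        self.votes = {}                        # question -> Counter of answers seen

    def challenges(self):
        # Two questions the system can check, plus one it is still learning.
        extra = [random.choice(self.candidates)] if self.candidates else []
        return random.sample(list(self.known), 2) + extra

    def attempt(self, answers):
        """answers maps question -> answer; returns True if sign-in succeeds."""
        ok = all(self.known[q].lower() == a.strip().lower()
                 for q, a in answers.items() if q in self.known)
        if ok:
            # Correlate answers to the unknown question across successful sign-ins.
            for q, a in answers.items():
                if q not in self.known and q in self.candidates:
                    counter = self.votes.setdefault(q, Counter())
                    counter[a.strip().lower()] += 1
                    answer, count = counter.most_common(1)[0]
                    if count >= self.PROMOTE_AFTER:
                        self.known[q] = answer        # now a checkable challenge
                        self.candidates.remove(q)
        return ok

club = GroupSignIn(
    {"who hosts the meetings?": "maria", "what was last month's book?": "dune"},
    ["red or white wine at the last birthday dinner?"],
)
```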
10 #
It also makes me wonder what analogues groups have to individuals. There is still the concept of forgetting: multiple members of a group might leave, and some of the group memory will be lost. And there will still be a need to change the identification method on demand.
In terms of what functionality we'd offer to groups, I was thinking about what an analogue to a blogging system would be for a small group of people. I don't think it's just a blog with multiple authors. Wikis are better, but they don't leave a contrail through time like blogs do. There's no publication.
Maybe a wiki for making a zine would be better, one you'd be able to edit by answering the group login questions. There would be a certain amount of structure: page numbers, sidebars, links and a contents page. There would be different page formats: long form, big pictures, and intro material (there would be templates for these). There would also be a big clock at the top right: 'going to press in 30 days!' it'd say. And it'd gradually count down. On the final day, the wiki would freeze its content, and allow only small changes for a further day or two, at which point the entire thing would be automatically compiled into a zine, and a PDF generated and published. Old-schoolers will recognise this as what Organizine could have become but never did.
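The publication clock is simple enough to sketch (the dates and windows here are invented): open editing until press day, a short corrections window after it, then freeze and compile.

```python
from datetime import date, timedelta

# Hypothetical publication clock for the zine wiki.
PRESS_DAY = date(2008, 2, 1)
CORRECTIONS_WINDOW = timedelta(days=2)

def wiki_mode(today):
    if today < PRESS_DAY:
        return f"open for editing -- going to press in {(PRESS_DAY - today).days} days!"
    if today <= PRESS_DAY + CORRECTIONS_WINDOW:
        return "frozen -- small corrections only"
    return "published -- compile the pages into the zine and generate the PDF"

print(wiki_mode(date(2008, 1, 15)))
```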
11 #
Since "Incidentally, I'm flicking through my notebook," I've been writing whatever seemed to follow on. That was page 6 of my notebook, and this notebook started in October 2007. I've just thought about looking down at my notebook again and come to page 7, which - happily - is another comment about the Web and interactive systems more generally.
Flickr is playful, and it is structured to bend the trajectory of users back into it. In a way, it's a true massively multiplayer game: with most games and ARGs the player is basically playing against the game developers and designers, who are the ones generating the rulespace. With Flickr you get the feeling that you are playing in partnership with the developers, both of you playing together in the foam of the nature of the Web and photos and social systems at large. You have different abilities, that's all.
To generalise Flickr's attributes, successful interactive systems will bend users back towards them, whether by play or not. A tool like a screwdriver doesn't need to do anything to bend people back towards it because it's driven by necessity: every time you need to deal with a screw, you'll pick up the screwdriver. But websites are abundant. The Web is crusted with functionality. In an ocean of passive particles, the ones that dominate are the ones that learn how to autocatalyse and reproduce (another expression of the Second Second Law of Thermodynamics, or just natural selection).
In order to keep going, the path of a user through a website must be designed to never end. In order for the website to grow, the path of the user must be designed to bring in more users, as in a nuclear chain reaction.
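The chain-reaction arithmetic, with invented numbers: if each new visitor recruits k further visitors per cycle, k above 1 compounds and k below 1 fizzles out.

```python
# Invented numbers throughout: seed users and per-cycle recruitment rate k.
def total_users(cycles, seed=100, k=1.2):
    total, newcomers = seed, seed
    for _ in range(cycles):
        newcomers *= k       # each newcomer recruits k more next cycle
        total += newcomers
    return round(total)

print(total_users(10, k=1.2), total_users(10, k=0.8))   # compounds vs. fizzles
```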
12 #
Computer programmes are something else that must not halt unintentionally. The way this is done is to model the application as a collection of finite-state machines. Each machine can exist in a number of different states, and for each state there are a number of conditions to pass to one or another of the other states. Each time the clock ticks, each machine sees which conditions are matched, and updates its state accordingly. It is the job of the programmer to make sure the machine never gets into a state out of which there is no exit, or faces a condition for which there is no handling state. There are also more complex failure modes.
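As a sketch of what that looks like in code (not any particular library, just the shape of the thing): states, condition functions, a tick, and the check that there are no dead ends or unknown target states.

```python
# A minimal finite-state machine, with the check described above: every state
# must have a way out, and every transition must land on a known state.

class StateMachine:
    def __init__(self, transitions, start):
        # transitions: {state: [(condition_fn, next_state), ...]}
        self.transitions = transitions
        self.state = start
        self._check()

    def _check(self):
        for state, rules in self.transitions.items():
            if not rules:
                raise ValueError(f"dead end: no way out of {state!r}")
            for _condition, target in rules:
                if target not in self.transitions:
                    raise ValueError(f"unknown target state {target!r} from {state!r}")

    def tick(self, world):
        # On each clock tick, take the first transition whose condition holds;
        # if none match, stay where we are (a failure mode worth watching for).
        for condition, target in self.transitions[self.state]:
            if condition(world):
                self.state = target
                break
        return self.state
```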
Getting Things Done, David Allen, describes a finite-state machine for dealing with tasks. Each task has a state ('in,' 'do it,' 'delegate it,' 'defer it,' 'trash' and more) and actions to perform and conditions to be met to move between the states. The human operator is the clock in this case, providing the ticks. This machine does have exit points, where tasks stop circulating and fall off.
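The same task machine, roughly, expressed with the StateMachine sketch above; the condition functions are stand-ins for the questions the human operator answers on each tick, and the exit points are modelled as states that simply loop on themselves.

```python
task = StateMachine({
    "in":          [(lambda t: t.get("actionable") and t.get("two_minutes"), "do it"),
                    (lambda t: t.get("actionable") and t.get("someone_else"), "delegate it"),
                    (lambda t: t.get("actionable"), "defer it"),
                    (lambda t: True, "trash")],
    "do it":       [(lambda t: True, "done")],
    "delegate it": [(lambda t: t.get("replied"), "done"),
                    (lambda t: True, "delegate it")],
    "defer it":    [(lambda t: t.get("due_now"), "do it"),
                    (lambda t: True, "defer it")],
    "done":        [(lambda t: True, "done")],      # exit points: tasks stop circulating
    "trash":       [(lambda t: True, "trash")],
}, start="in")

task.tick({"actionable": True, "two_minutes": True})   # the human provides the tick
```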
The cleverness of Getting Things Done is to wrap this finite-state machine in another finite-state machine which instead of running on the tasks, runs on the human operator itself, the same operator who provides the ticks. The book is set up to define and launch the state machine which will keep the human in the mode of running the task machine. If they run out of tasks, the GTD machine has a way of looping them back in with tickle files and starting again the next day. If they get into an overwhelmed state, the GTD machine has a way of pruning the tasks. If they get demotivated and stop running the task machine, the GTD machine has ways of dealing with that. Alcoholics Anonymous has to deal with this state too, and it's called getting back on the wagon. The GTD machine even has a machine wrapped around it, one comprising a community to provide external pressure. Getting Things Done is a finite-state machine that runs on people; a network of states connected by motivations, rationale and excuses, comprising a programme whose job it is to run the task machine.
13 #
Websites can also be seen as finite-state machines that run on people. Successful websites must be well-designed machines that run on people, that don't crash, don't halt, and have the side-effect of bringing more people in. Websites that don't do this will disappear.
Instead of a finite-state machine, think of a website as a flowchart of motivations. For every state the user is in, there are motivations: it's fun; it's the next action; it saves money; it's intriguing; I'm in flow; I need to crop the photo and I remember there's a tool to do it on that other page; it's pretty.
If you think about iPhoto as its flowchart of motivations, the diagram has to include cameras, sharing, printers, Flickr, using pictures in documents, pictures online and so on. Apple are pretty good at including iPhoto all over Mac OS X, to fill out the flowchart. But it'd make more sense if I could also see Flickr as a mounted drive on my computer, or in iPhoto as a source library just as I can listen to other people's music on the same LAN in iTunes. This is an experience approach to service design.
Users should always know their next state, how they can reach it, and why they should want to.
If I were to build a radio in this way, it would not have an 'off' button. It would have only a 'mute for X hours' button because it always has to be in a state that will eventually provoke more interaction.
Designing like this means we need new metrics drawn from ecology design. Measurements like closure ratio become important. We'll talk about growth patterns, and how much fertiliser should be applied. We'll look at entropy and population dynamics.
Maybe we'll look at marketing too. Alex Jacobson told me about someone from old-school marketing he met who told him there are four reasons people buy your product: hope, fear, despair and greed. Hope is when you book the meal out at the restaurant because it's going to be awesome. Fear is when you take the pills every day because otherwise you'll get flu and lose your job. Despair is needs not wants: buying a doormat, or toilet paper, or a ready-meal for one. Greed gets you more options to do any of the above, like investing. Yeah, perhaps. Typologies aren't true, but they're as true as words, which also aren't true but give us handholds on the world and can springboard us to greater understanding. We can kick the words away from underneath ourselves once we reach enlightenment.
14 #
The lack of suitable motivations is also why we don't have drugs that make us superheroes, and this is also a failure of the free market because obviously these drugs would be cool. It's not like there hasn't been a search for interesting drugs (drugs for something other than pure pleasure or utility). The second half of Phenethylamines I Have Known And Loved documents the Shulgins' subjective experiences with 179 compounds that act like the neurotransmitter phenethylamine.
"There was a vague awareness of something all afternoon, something that might be called a thinness." (#125)
"I easily crushed a rose, although it had been a thing of beauty." (#157)
PiHKAL was written in 1991. Viagra wasn't patented till 1996. Why haven't the last 16 years been spent on substituted phenethylamines, doing for consciousness what Viagra does for penises?
In The Player of Games, Iain M. Banks talks about Sharp Blue, a drug which is "good for games. What seemed complicated became simple; what appeared insoluble became soluble; what had been unknowable became obvious. A utility drug; an abstraction-modifier; not a sensory enhancer or a sexual stimulant or a physiological booster."
I was talking about this with Tom and Alex and some Matts in the pub a few months ago. Why don't we have abstraction modifier drugs now? Why are there no drugs to help me think in hierarchies, or with models, or to make cross connections? Or rather, since drugs exist that do this in a coarse and illegal way now, why haven't these been tuned? I had been busy all day coding highly interlocking finite-state machines.
If I wanted to make such drugs, it would be a waste of my time to develop them myself. My best way forward would be to manipulate the market and society to want the drugs, and leave the rest up to the pharma industry. Products are often invented (or revealed) to mediate a risk. The particular risk is chosen (from many) to make a particular product: there is no cause and effect here; it is choreographed between the risk and mediation. For example, the AIDS risk has been revealed as HIV, as there is money to be made in designing drugs to combat this particular molecule. However the AIDS risk is part of the risk of epidemic death and the cause of this is more properly revealed as poverty, not a virus. The product to mitigate and manage AIDS-as-poverty is not as palatable to the world or as tractable for the market: it is expensive and hard and has rewards which are hard to quantify (I'm badly misrepresenting Risk and Technological Culture, Joost van Loon, I'm sure).
Risks are deliberately revealed in small ways too. Dove invented, in advertising, the concept of the un-pretty underarm (no 'armpit' here). Fortunately they have launched products to mitigate this problem.
So first the problem of abstraction difficulty must be revealed as a risk. This can be done in a traditional marketing way, with press releases and spurious research. Then various ways must be tested to mitigate this risk: these need to be simultaneously difficult and expensive, and vastly popular. The intention here is to demonstrate that there is a market for any company who creates a better method, which is profitable enough that some expensive research can be done up-front. For this, imagine popularising a method like Getting Things Done crossed with the creation and value of the diamond industry and the accessibility and mind-share of omega-3 in fish oil. I'm sure the alternative therapy industry would be unwitting, happy partners in this.
Once the risk is revealed and the market visible, it's a matter of providing to the pharma industry the material they need to persuade governments to make this legal. That is, the facts that the ability to abstract is essential to business, and that it's a matter which is outside the scope of government.
From there the invisible hand will take care of everything. A technician will be reading some lab reports and realise that some of the test subjects exhibited the qualities discussed as necessary to the country in a newspaper article they read last week. A research programme will be suggested. The business people will assess the market size, based on published research and the number of people buying games like Brain Training. The government will turn a blind eye because business needs to be competitive. We could have an abstraction drug inside a decade.
One day people will look back on the above paragraphs and see that what was suggested was what happened. It will be my L. Ron Hubbard moment.
15 #
Earlier this year I met a biochemist who is part of a research group that has created an artificial store of energy which is as small and energy-dense as fats, and as quick release as sugars--those being the two ways cells in the body store and retrieve energy. Because every cell uses energy, rats eating food with this new compound test as smarter and have better endurance. Holy shit. The same fellow was wearing a hearing aid with a switch that made him hear better at parties than I do.
I would like to meet more people like him. I am currently working on an idea which may turn into a small product which I am prepared to underwrite, and it requires I meet with people who are decent-ish writers (who would like to write more and are probably teachers) who are interested in the world and typologies, and understand one of these areas at a 16-year-old to first-year undergraduate level: geography and city models; meteorology; the Austro-Hungarian Empire; soil; tectonics; the metabolic cycle; proteins and enzymes; U.S. highway design; dendritic patterns; closed-system ecology like Biosphere 2; farm management.
The kind of person I have in mind is Dr Vaughan Bell, who was introduced to me by a friend to write a hack or two for Mind Hacks. He wrote a whole bunch (he was already a pretty serious Wikipedia contributor, among other public understanding of science activities), communicates incredibly well, and has turned the Mind Hacks blog into one of the top 5,000 weblogs globally, increasing general knowledge and interest in psychology and cognitive science immeasurably. Please let me know of anyone of whom you're reminded by these two paragraphs.
16 #
Vending machines on the street sell mixed smoothies. Each machine is populated with a selection of 8 from dozens of base fruit smoothies. There are 10 options on the machine, representing different mixes of the 8 fruit flavours. Genetic algorithms are used to evolve the smoothies towards the optimum flavours for that neighbourhood, based on what sells. Variety is introduced by having wild-card base flavours in that 8th slot. Sometimes you take a detour on the way to work to help train a machine to produce your favourite cocktail.
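A hedged sketch of the machine's nightly breeding step (the sales figures here are a stand-in for real till data, and the mix encoding is invented): keep the best-selling mixes, breed them to fill most of the buttons, and reserve one slot for a wild card.

```python
import random

# Each of the 10 buttons is a mix vector over the 8 loaded base smoothies.
N_BASES, N_BUTTONS = 8, 10

def random_mix():
    weights = [random.random() for _ in range(N_BASES)]
    total = sum(weights)
    return [w / total for w in weights]      # proportions of each base flavour

def breed(a, b, mutation=0.1):
    # Average two parent mixes, jiggle, and renormalise.
    child = [max((x + y) / 2 + random.gauss(0, mutation), 0.0) for x, y in zip(a, b)]
    total = sum(child) or 1.0
    return [c / total for c in child]

def next_menu(menu, sales):
    # Keep the best-selling half, breed them to fill most of the remaining
    # buttons, and reserve the last slot for a wild-card mix.
    ranked = [mix for mix, _ in sorted(zip(menu, sales), key=lambda pair: -pair[1])]
    parents = ranked[: N_BUTTONS // 2]
    children = [breed(*random.sample(parents, 2))
                for _ in range(N_BUTTONS - len(parents) - 1)]
    return parents + children + [random_mix()]

menu = [random_mix() for _ in range(N_BUTTONS)]
for day in range(30):
    sales = [random.randint(0, 40) for _ in menu]   # stand-in for real till data
    menu = next_menu(menu, sales)
```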
Additionally: 'Coke Continuous Change' is a variety of Coca Cola that is different every time you buy it. The company manufactures batches of varying recipes and ships out crates of unique mixes. Each can has a code on it that also represents the recipe. Using feedback from drinkers, Coke can optimise the level of variety and serendipity on a hyperlocal level. The only constant is there is no constant. If you hate the one you're drinking, buy another.
That'll do.