v1.0 19may2002 by Matt Webb
When I'm thinking about what I got out of the Emerging Tech Conference, there are two main strands and I'm thinking they're pretty much shared (which is another way of saying generic and unfalsifiable, but if that's the price of being right I'm willing to pay it).
First: Ideas. It seemed that every other sentence was triggering me to new combinations, new directions of thinking. There's a stack of new ways of building applications, areas to research, metaphors to explore a mile long in my notes. Looking around in the keynotes and sessions, people were typing feverishly at parts that were pretty linear for me, so I guess it was happening all round.
And the conference was structured to increase the incidence of serendipity too. Jason Kottke said that there were ideas he'd been spinning around for the past nine months and now here they were and everyone was talking about them. That, I think, is the case for everyone who was at the conference. Internal models were being shared, enhanced and textured all around. Well, this piece isn't going to accidentally trigger any revelations. You had to be there, I think.
It would be interesting to consider how the presence of like-minded people and the continual exposure to structured argument, both familiar and new, can help to move the mind to new layers of abstraction and to recast old concepts. But this isn't the place, I'm afraid. Another day.
I'd like to get down a little of the second strand.
For me, there was something at the centre of the conference that I couldn't quite put my finger on, at least not to begin with. Cory Doctorow, a member of the conference committee, told me this was something felt during the organisation too. That the sessions, the keynotes, and indeed the attendees were circling a certain unarticulated idea, a common theme in the emerging technologies.
Three days on, I feel I have a more solid idea of what was being explored, and it's only by constructing a kind of narrative over the ideas expressed during the four days that I can get at it.
(There are a few notable omissions from this piece. Book and link recommendations from all the people at the conference, and conversations not directly about the conference, aren't mentioned. The various tie-ins with conversational user interfaces (CUIs) in general, and Activebuddy (who we met) in particular, aren't mentioned. Sessions I didn't attend, and even some of the ones I did, aren't in here, and there are enough talking points in any of them - mentioned or not - to write this much again, so everything is truncated. Also missing is checked spelling and any kind of structure.)
Emerging Technology as a concept wasn't ever defined, and why this technology is being developed was never pinned down, despite Matt Jones insisting in his Birds of a Feather session that we need to define "What problem are we trying to solve?" If anything the problems aren't technological, they're to do with user interface (more than would be guessed by the small number of designers at the conference) and, more importantly, the problems are social. The social aspects of the technology I'll come back to.
But the term is something of a pun. Yes, we're thinking about "emerging" as in "new". Radical new ways of conceiving of computing, Web services and weblogs all came up (although not instant messaging, despite some good efforts).
Mainly, though, it's about technology and its emergent properties, as in Steven Johnson's definition. Last year's conference was about P2P, and that subject underlay everything that was talked about. The kind of technology being discussed was inherently networked, distributed, available to end-users (only a little infrastructure was mentioned). That kind of arrangement is perfect for emergent phenomena to arise.
That emergent features are so prominent in what the technologists are creating, and that they weren't expected to begin with, leads to a fascination. They are, in fact, exactly a new feature, in the software sense of the word, and a feature that needs exploring: How do these phenomena arise? What are they? What properties of the technology have caused them? How do we make them ourselves? That the emergence tends to happen in the social world provides a bridge.
"Small pieces loosely joined" (the title of David Weinberger's new book) was a phrase I heard a lot. In Johnson's Wednesday keynote he isolated this property in both cities and in weblog space ("blogistan", as Cory says). Or more precisely, he looked at how cities clustered and tended towards the correct density in clusters, and how they provided space and diversity. Then he looked at how weblogs don't have these, despite their social nature: "the problem with blogspace is there are no neighbourhoods". We'd have to wait until Clay Shirky's Livejournal session on Thursday afternoon to find solutions to this.
The first time I heard about small pieces was in the .NET and Mac OS X Web services tutorial on Monday. Sam Ruby, after James Duncan Davidson had finished his talk and demonstration, was discussing when to use SOAP and when to use some more optimised protocol. Most RPCs are efficient for tightly coupled systems, when you control the server and the client at both ends of the wire. SOAP isn't like that, and although that's a disadvantage from the REST perspective (because the server, in the architecture sense, can choose whether or not to make use of the message, as opposed to accepting it like a resource), it means that SOAP allows more developers to take part. Loosely coupled systems, Ruby said, of many layers. In fact these properties of SOAP, together with its non-optimal nature, bode well for its future, and I'll be coming back to them later.
Johnson's keynote was also the first to start searching for how systems, these loosely coupled pieces, started emerging. This set the scene for the conference as a whole: what are these secret properties, and what features do these cause? By the end of the conference, and because the features are primarily social, we'd be looking deeper still.
For cities, he identified a number of properties: bottom-up interactions, strollers, passive organisations, and the swerve. Of these, the swerve seems most notable and is what I heard being mentioned later. The swerve is that distraction between A and B that causes new areas to be explored, new shops to be built, and neighbourhoods to change. Over many, many swerves, an aggregate picture builds up of how the low-level interactions are occurring. It's the feedback loop that provides the neighbourhoods with knowledge about individuals.
When it comes to weblogs, Johnson found that there were problems forming higher level groups, that there was little passive organisation, that readers lack input, and that everything was governed by what he called the "tyranny of time". The feedback loops to self-organise simply aren't present in the environment of weblog space. How to build in these feedback loops was something that this keynote and the rest of the conference would keep coming back to.
I'll continually make a distinction between two types of technology properties: public and secret. Standard, public properties are ones we can point at. Certain technologies are popular, for instance; they enable collaboration; they convert a member of the audience into a participant; they produce a form of journalism; it's easy to find your files.
Less easy to identify are the secret properties. Secret properties are what make these public properties possible. In attempting to write the second generation of software, we're having to identify what made the first generation popular in the first place. WorldWideWeb wasn't the only hypertext software package out there. Napster wasn't the only way of sharing and finding new music online. There were alternative ways of communicating electronically. To build successful new technology, we have to reverse engineer these accidentally successful projects: What are the secret properties? This is with the intention of building them in.
By identifying the secret properties of weblogs, we're hoping to build those properties in deliberately. What secret properties do we need before we can make an analogue of the swerve? That kind of feedback loop is very important to self-organisation, and self-organisation is very important to emergent journalism.
Which brings me to why technologists care about journalism and toppling the New York Times anyway, and this is a point under dispute. As far as I can see there are three possible reasons. It could be that the group of technologists is self-selecting for free information advocates, so Lessig's keynote on the final day about the Future of Ideas and patent reform was directly interesting. This sort of person would want to make use of the software to distribute ideas and to change the world to move ideas around more freely.
Alternatively, it could be that new technology just happens to be the area of knowledge currently pushing against patent and copyright law, and this has caused commotion and subsequent interest in those laws and the fields around them.
The third possible reason is that emergence from technology is being treated like a feature to be explored, understood and built in. That this new feature happens to be social means that a large number of people, whose natural reaction to problems and blockages is to fix them or route around them, are applying computational habits to the real world. And technology has been highly politicised in recent years, Open Source and the DMCA being just two such issues.
This leads us some way towards the subject of the conference. Regarding technology that exhibits emergence: what are these emergent properties, what properties of the technology cause them, and what secret properties are shared across these causes?
Richard Rashid, head of the Microsoft Technology Group, identified more common secret properties, this time in experimental operating systems. He presented a keynote on associative and adaptive interfaces to personal information, covering some novel interfaces and some of the work involved in answering questions like "what is related to this?", such as semantic networks and a relational database filesystem.
The basics of the OS won't surprise anybody who has looked at this field before, and he started with a few new versus old divisions: Task vs Program; Query vs Hierarchy; Event driven vs Explicit command.
This new model mirrors very closely what's happened in the networked world. Web services are inherently event driven, and the applications around them are standing information requests, or filters on always-changing data. This is true for the next generation of operating systems too, a view familiar from David Gelernter's Lifestreams model and from the concept of "views" (dynamic searches on information that appear as regular folders) filtering their way into current applications.
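To make the "views" idea concrete, here's a minimal sketch (in Python, with invented names and data, not anyone's actual API) of a standing query: a saved predicate that's re-evaluated as new items arrive, so it behaves like a folder whose contents are computed rather than placed.

```python
# A minimal sketch of a "view": a standing query over changing data,
# re-evaluated whenever new items arrive. Names and data are illustrative.

class View:
    """A saved query that behaves like a folder with computed contents."""
    def __init__(self, predicate):
        self.predicate = predicate
        self.items = []

    def notify(self, item):
        # Event driven: each new document is offered to every view.
        if self.predicate(item):
            self.items.append(item)

letters = View(lambda doc: doc.get("kind") == "letter")
project = View(lambda doc: "etcon" in doc.get("tags", []))

for doc in [{"kind": "letter", "title": "to mum"},
            {"kind": "note", "title": "ideas", "tags": ["etcon"]}]:
    for view in (letters, project):
        view.notify(doc)

print([d["title"] for d in letters.items])   # ['to mum']
print([d["title"] for d in project.items])   # ['ideas']
```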
Query vs hierarchy mirrors another way we regard the world, and can also be phrased as Association vs Location. Why should documents be stored in folders on the computer rather than found by task-driven requests, asks Rashid. Again, see Web services. A task-driven application can join together many of these query-driven services.
On the desktop, this means a conversational interface that can anticipate and present content. One slide of an experimental desktop showed a text field at the top of the screen much like a command line, but actually for entering natural language queries. This would both do away with the unique-location file hierarchy of current computers, and provide a single task-driven interface to writing a letter, or browsing the web, or working on a project. The query vs hierarchy concept would return again and again.
Matt Jones and I visited Google on Wednesday. They're extremely query focussed, basing their interface on a concept they call "Onebox". Onebox says that whatever the user puts in the search box, Google should present the answer. If the user is looking for a map, a map should be presented with the search results. It's a radical statement: the search box puts the user equidistant from everything Google has to offer. From a few words with no syntax (they don't want to force the user to learn an interface landscape), the most relevant slice of information on the internet is presented. I feel that if Google could present everything on one page without asking, with nothing on the front page but the search box, and have the user be happy, that's what they'd do. There'd be no custom search there at all.
Google also implement a form of the swerve. By watching where the user goes from a search result, including whether they come back and immediately go somewhere else, the search engine can make use of the knowledge of domain experts and improve the top results for those who don't know the field so well.
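Purely as illustration (the scoring rule and the numbers are invented, not Google's), here's a toy sketch of the swerve as a feedback loop: results that searchers follow and stay with drift up the ranking, while results they bounce straight back from drift down.

```python
# Toy click-feedback ranking: dwelling on a result counts for it,
# an immediate bounce counts against it. Entirely made up for illustration.

from collections import defaultdict

scores = defaultdict(float)

def record_click(url, bounced):
    scores[url] += -0.5 if bounced else 1.0

def rank(urls):
    return sorted(urls, key=lambda u: scores[u], reverse=True)

results = ["a.example/guide", "b.example/spam", "c.example/paper"]
record_click("b.example/spam", bounced=True)
record_click("c.example/paper", bounced=False)
record_click("c.example/paper", bounced=False)

print(rank(results))  # c.example/paper rises, b.example/spam sinks
```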
It's interesting that such a successful and well-liked company should be implementing these two secret properties of new technologies. That they share these properties may be something to do with the fact that Google.com is a member of the next layer of the web -- that which operates on the web itself (rather than that which is a combination of the real and the virtual, simply putting information online). As well as using this property internally, Google embody it externally for the web as a whole: they are the associative query to the location hierarchy of the URI system.
However, the URI and addressability secret property of the internet is extremely important, and is the single feature that most made possible two public properties: participation and recombination. Participation, because items can be referenced and commented on in your own space (as well as because pages can be viewed and copied). Recombination, because without URIs it wouldn't be possible to rewrite taxonomies, to pull information in. It's the overlooking of the "old" location property that leads to three things: SOAP vs REST, zoomable user interfaces (ZUIs), and my objections to the Microsoft research, which themselves segue into the problems with identifying secret properties in this manner.
SOAP is being widely adopted as a way of making procedure calls across the internet, and more generally as the standard protocol of Web services. It's not the only protocol in the game however, and it's attracting some major objections, primarily from an architectural viewpoint called REST, from a thesis by Roy Fielding. REST holds that the standard HTTP verbs of GET, PUT, POST and DELETE are sufficient: that by building distributed applications in this manner (and optionally using XML to encode resources) and using URIs to address resource locations, applications are more true to the spirit of the web, semantically meaningful, and more robust.
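Here's a minimal sketch of what that style looks like in practice, assuming a hypothetical resource space at example.com (the URIs and XML format are placeholders, not a real service): resources live at URIs, and the ordinary HTTP verbs are enough to read and write them.

```python
# A sketch of the REST style: address a resource by URI, GET its
# representation, PUT a new one back. URIs and payloads are placeholders.

import urllib.request

BASE = "http://example.com/weblog/entries"  # hypothetical resource space

def get_entry(entry_id):
    # GET: read a resource by its URI; the representation is just XML text.
    with urllib.request.urlopen(f"{BASE}/{entry_id}") as resp:
        return resp.read().decode("utf-8")

def put_entry(entry_id, xml_body):
    # PUT: store a new representation at the same URI.
    req = urllib.request.Request(
        f"{BASE}/{entry_id}",
        data=xml_body.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```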
Proponents of the other side hold that the rules to define a RESTful architecture are too loose, and that developer support and client toolkits are what matters. A low barrier to entry is a secret property of a widely adopted technology, after all. The fact that client toolkits are widely available and developers don't need to manually parse XML (or even see it at all) was the reason Nelson Minar gave for why Google use SOAP, in his session "Deploying the Google Web APIs Service".
For RESTful applications, locations (in the form of URIs) are massively important and provide semantic meaning. This provides direct access to the deep structure of the web. SOAP applications are more like automated web searches: a number of services joined by association, a possible surface structure. This deep versus surface structure is yet another way of viewing the location versus association, or hierarchy versus query, debate (phrasing it that way round to keep consistent terms on the left and right).
Rohit Khare, in the SOAP vs REST session, finished by saying that the URI is 1-dimensional syntactic sugar for passing a bunch of parameters in. This means that for a certain subset of SOAP and a certain subset of REST, applications are architected identically, but I don't think the dispute will end there. In the end I believe SOAP will win because of "apps, apps, apps" (as Dave Winer put it in the same session), the public property of two secret ones: low barrier to entry because of the client toolkits, and another property Cory Doctorow called sub-optimality in his talk. I'll come back to that.
Zoomable user interfaces were presented by John Ko of Cincro Communications Corporation. They attempted to solve the problem of hierarchy, which is not always appropriate in multi-user applications, with an interface that from the beginning would work well with associatively structured data and collaboration (public properties both).
It wasn't explicitly stated what problem this was trying to solve, and unfortunately the case for a zoomable user interface wasn't made in a session that really focussed on the technology behind the collaboration features. Zoomable user interfaces themselves, I feel, are still stuck at the level of the text document, the fundamental interface object they're trying to replace. A hypertext system built on these still isn't mature enough, and they're orthogonal problems anyway: you can have the web within a ZUI, and an alternative hypertext (say, with transpublishing and collaboration) without the zooming. Putting the two together is confusing.
I get the feeling however that in trying to build in the public properties, the secret ones have been forgotten. The ZUI neglects to have obviously addressable components, and an XML interface appears to be an afterthought. Furthermore, the location aspect of location versus association is almost completely neglected. Without location it's not easy to build alternative hierarchies, a key property of the web (as in linking). Association, in ZUIs, is all.
This problem is even clearer in the research Microsoft are doing. They take as their premise that my physical desktop is messy (it is, they're right). I remember where documents are by association, not location. On my computer desktop, they say, documents should also be organised by association.
This misses two major points. Firstly, the documents on the desk do have location, in the real world. This is their fixed deep structure, and I could express the location of each one in how far I have to lean and how far I have to reach forward. The associations are stored in my brain. The second problem is that externalising the associations both removes them from somewhere they can be stored and cross-referenced with the rest of my life (my mind), and removes the location.
This, fundamentally, is why I'm using the word "secret" in secret properties. We're looking so hard to solve the public problems using public properties that the secret properties, which are so obvious and widely adopted that they're overlooked, are forgotten. The reason the technologies are widely adopted is precisely because they have these secret properties.
This conference is partially about identifying the secret properties necessary in the next generation of technology, especially technology which allows emergence, and in this case both location and the possibility to have and to share associations (memory and the externalisation of memory into ontology and linking) are required.
An additional problem with secret properties is that, by their nature, they're very hard to identify and they're not necessarily going to be part of emerging technologies. They feed into ad hoc processes. A basic motto of mine is to never replace a human process with an automatic one; as a corollary, this means you can never build an ad hoc process into a computer. You can, however, make an automatic process or technology loose enough so that it can be used in an ad hoc way, and this is something Cory Doctorow talked about in his session "Fault-Tolerant Realpolitik: Abandoning Reliability Online", which earlier I said I'd come back to: sub-optimal technology.
A historical example of a sub-optimal technology is the Turing Machine. A specially designed component could do any automatic job more efficiently and better than a computer, but computers caught on.
Cory covered a number of properties of successful technology. We don't need high quality content (MP3s will do over CDs, if we can share them over the internet). We don't need mission-critical reliability (it doesn't need to be online all the time, as long as it's available today and not in several years' time). And optimisation closes off the avenues to using the technology in other ways.
It's this use in other ways that is interesting. Sub-optimal doesn't mean generic, but it does mean not closing off the hooks that let developers recombine it in unexpected ways. It's this that hooks the popular technology into the ecosystem, and this that allows ad hoc uses.
SOAP is sub-optimal for any given task, and Livejournal is sub-optimal for any given website, but they both share the more-is-better feature. If there was a toolkit for your exact Web service task, it wouldn't be as robust or as widespread as the SOAP toolkits. If there was a content management system for your precise purpose, it wouldn't have the features, bells and whistles that Livejournal has, because these have only been built because of the large number of users.
Technologies like Unix and Web services, and weblogs composed of a CMS from one tool provider, stats from another, and blogrolling from another have a common public feature: users can hook in to the data flow at any point. This is the property "small pieces loosely joined" identifies, and is why that phrase can be applied to so many popular technologies. It's a way of identifying sub-optimality, a successful strategy.
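A small sketch of that property, with made-up stages and data: each piece is a dumb filter over a stream, and because the joints are loose you can splice a new tool into the flow at any point.

```python
# "Small pieces loosely joined" as a pipeline of independent filters.
# The stages and data are invented for illustration.

def fetch_posts():
    yield {"author": "jill", "text": "notes from etcon", "links": 3}
    yield {"author": "jack", "text": "lunch", "links": 0}

def only_linked(posts):
    return (p for p in posts if p["links"] > 0)

def add_stats(posts):
    for p in posts:
        p["words"] = len(p["text"].split())
        yield p

# Hook in at any point: the stats stage could come from a different "tool"
# than the fetcher, and another stage could be spliced in between.
for post in add_stats(only_linked(fetch_posts())):
    print(post)
```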
But what is this strategy exactly? It's not a secret property, because on its own it doesn't necessarily create any important public ones.
It's halfway between a secret property and what we could term the rich environment. A technology has to grow within an environment in which feedback loops and secret properties arise. We don't know what the ad hoc processes will be, or how developers will choose to recombine, or whether addressability will be the important factor this time round.
What we can do is develop a rich environment for all this to grow in, a "loam" as Matt Jones put it, or "a good social substrate" as Clay Shirky put it. And if anything, identifying how to enrich the environment in order to produce emergent social phenomena from technology was what ETCon was all about. The social effects and how they occurred were covered in sessions about ownership and patent law, journalism, and weblogs.
The sessions that covered how this rich environment could be built were the ones that left people's heads spinning.
JC Herz in "Networked Experience Design" found another way of viewing this environment. Games. She named four types of player: Achievers who max out in the rule system and attempt high scores; explorers whose motivation and success state is to understand and model the system; socialisers who use the game as a stage for relationships; and spoilers who want to make life miserable for others, either inside the system or on top of it.
She said you need multiple win states to involve all types of players, and a game in this case includes technologies. "Replace the word 'user' with the word 'player'" she said. Games echo the transition we're making in technology.
Most web experience focusses on one user plus the whole wide world. Social software, and games, focusses on the users, and the relationship between the user and the group. Another quote: "There's a super-summative amount of value involved when lots of people share an experience".
Groups require certain kinds of glue and currency: reciprocal acknowledgement, blogrolling, strangers. How can technology provide ways of measuring and sharing these, and of providing for self-expression and status? Human nature was made on the savannah, in groups only as large as the grooming network could grow. Social software has to have pathways for this.
Herz was identifying the secret properties of social software, and the outcome of this was something Dan Gillmor picked up on in "Journalism 3.0". Many more people can participate now. There's a real-time filter and collaboration environment which could one day rival the New York Times. This is an emergent property of the web, and of weblogs. And another is this involvement, something Gillmor called (and Cory picked up on) the "former audience".
We can't deliberately build software to do this. What we can do is identify the secret properties that made it possible, and build an environment to make those, and more as-yet-unidentified ones, even more efficient. Gaming (the learning curve, the acknowledgement currency) is one secret property provided for by linking and addressability, which also make filtering and commenting possible. Participation is possible because of View Source and the technologies that enable weblog publishing. And blogrolling provides another kind of currency, for recommendation and reputation transfer. It was the outcome of these properties, amplified in a more efficient way, that Clay Shirky covered in "User Patterns on LiveJournal".
The emergent properties of the Livejournal application are legion. It builds in blogrolling very early on so relationships across the network are very formalised and easy to analyse. Interestingly, it counters the incentive to add many friends with a social cost: Each user (or should I say player?) has a page that shows the most recent posts of their friends. A person will not want to see posts of people who aren't really their friends.
The application also adds a feature which Shirky calls "the privacy of the mall". The pages are all protected from search engines; they're private except for browsing site by site.
There's also no pretence at equality (unlike the weblogging world). Users are happy in small groups with their close friends; they're also happy to perform a different role as a public service with friends who are basically human subscriptions.
These features produce a piece of software that works at many different scales for many different types of groups. It's sub-optimal, in other words. But precisely because it's useful for so many people, it gets those features that only popular technologies get: a large audience (if you want it), well-supported software. As Shirky said: "Livejournal is good social substrate". Good loam, in other words; it's a rich environment.
Another session covering how to build this rich environment was Eric Bonabeau's "Swarm intelligence". He demonstrated how simulated ants running on the travelling salesman problem, computationally and traditionally extremely hard, could produce results comparable to the top algorithms.
The ants would carry and drop pheromone along their routes, taking account of previous pheromone trails when making route decisions. By evaporating the pheromone from the system, this becomes a very good way of finding changing routes for dynamic systems such as packet routing on networks.
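Here's a toy version of that algorithm (the parameters and city layout are invented, and it's far cruder than anything Bonabeau showed): ants build tours biased by pheromone, shorter tours deposit more, and evaporation each round lets the colony keep tracking change.

```python
# Toy ant-colony optimisation for a tiny travelling salesman problem.
# Cities, colony size and rates are invented for illustration.

import math
import random

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
n = len(cities)

def dist(i, j):
    (x1, y1), (x2, y2) = cities[i], cities[j]
    return math.hypot(x1 - x2, y1 - y2)

pheromone = [[1.0] * n for _ in range(n)]

def build_tour():
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        # Prefer edges with more pheromone and shorter distance.
        weights = [pheromone[i][j] / dist(i, j) for j in choices]
        tour.append(random.choices(choices, weights)[0])
    return tour

def tour_length(tour):
    return sum(dist(tour[k], tour[(k + 1) % n]) for k in range(n))

best = None
for _ in range(100):                              # rounds
    tours = [build_tour() for _ in range(10)]     # ants per round
    for row in pheromone:                         # evaporation
        for j in range(n):
            row[j] *= 0.9
    for tour in tours:                            # deposit: shorter is stronger
        d = tour_length(tour)
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            pheromone[a][b] += 1.0 / d
            pheromone[b][a] += 1.0 / d
        if best is None or d < tour_length(best):
            best = tour

print(best, tour_length(best))
```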
You can see the parallels between this and Steven Johnson talking about weblogging. Webloggers are the ants, carrying a "historical narrative" (as Nick Sweeney put it), navigating the web. From this it's possible to see Johnson's swerve as an effective algorithm -- ants record and aggregate the swerve, and it's how they find food after all.
The ants are small pieces: reasonably complex, strongly encapsulated. They're loosely joined by very simple information: the pheromone.
Geoff Cohen, in his session on biological computing, outlined ways of architecting systems and designing a programming language along these same lines. He used the telephone operator analogy. When dial phones were released, there was an objection that we'd all have to be operators. Well, we are all operators now, but in a much simpler way. Cohen would like to see many more programmers in a similar kind of way, but with the definition changed so much that we wouldn't view it as the same profession.
He introduced a metaphor called "emit and accept": highly encapsulated components emit data. Other components accept, and maybe accept only approximately. But these two sides aren't tightly coupled. In his hypothesised programming language, nothing would even have names, to encourage this development.
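A rough sketch of how emit and accept might look, with entirely invented names: components put data on a shared space, and other components take anything whose shape they recognise, with nothing tying emitter to acceptor.

```python
# "Emit and accept", loosely: emitters and acceptors never name each other;
# acceptors match on the shape of the data. Invented for illustration.

space = []

def emit(item):
    space.append(item)

def accept(matches):
    # Take anything that approximately fits; leave the rest for others.
    taken = [item for item in space if matches(item)]
    for item in taken:
        space.remove(item)
    return taken

# One component emits readings; it knows nothing about who will accept them.
emit({"temperature": 21.5, "room": "kitchen"})
emit({"headline": "ETCon wraps up"})

# Another accepts anything that looks like a temperature reading.
readings = accept(lambda item: "temperature" in item)
print(readings)   # [{'temperature': 21.5, 'room': 'kitchen'}]
print(space)      # the headline is still waiting for an acceptor
```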
In a way, this is what Web services are approaching: emit and accept. It's a good way of building a rich environment where no avenues are closed. It's small pieces, loosely joined. It's the ants. And it's robust.
Tim O'Reilly quoted Jon Postel's robustness principle: "Be conservative in what you do, be liberal in what you accept from others". Which is a good rule for good loam, as well as for how to build sub-optimal social software, I'd suggest.
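As a small illustration of that principle (the date formats here are just examples): accept input in several sloppy forms, but always emit it in one canonical form.

```python
# Postel's principle in miniature: liberal parsing, conservative output.
# The accepted formats are arbitrary examples.

from datetime import datetime

ACCEPTED = ["%Y-%m-%d", "%d/%m/%Y", "%d %b %Y"]   # liberal in what we accept

def parse_date(text):
    for fmt in ACCEPTED:
        try:
            return datetime.strptime(text.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognised date: {text!r}")

def emit_date(dt):
    return dt.strftime("%Y-%m-%d")                 # conservative in what we do

for raw in ["2002-05-19", "19/05/2002", " 19 May 2002 "]:
    print(emit_date(parse_date(raw)))              # always 2002-05-19
```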
I'll say one more thing that occurred to me during this conference, and that's how to change the system. This arose out of Lawrence Lessig's keynote where he said that we need new distribution models that act against superstars -- but Cory's realpolitik says that we need to acknowledge legacy systems. So what secret property is it of systems that acts to create superstars, and how can we change it?
To clarify: I believe that systems, or rich environments, consist of a number of secret properties which act as incentive fields pushing people towards certain behaviours (these are sources and sinks, in chaos theory parlance). However, following the more-is-better principle, and principles from ecosystems, we know that there's feedback, because the positions of people within the field change the incentive field and the system itself. Therefore by placing yourself in the system in a particular place or niche, you can change the incentive field in such a way that the system can change (the incentive points can move) dramatically.
Stable systems ensure that the incentive flow points away from places where someone operating there would change the system, otherwise they wouldn't be stable (obviously). Putting yourself in this niche would be unusual to begin with, and it will entail analysis of the construction of the system, but if you can identify that point you can change it.
This is one of the reasons we study the social emergent properties and how systems behave, and also how to build in secret properties. All of us at the Emerging Tech conference wanted to fix the system in some way, and often that system was social, so it entailed exploration and analysis.
Building the rich environment and knowing how emergence occurs is exactly that.
Conference homepage; portal of all commentary; my own notes; this document.