Interconnected

Quick definition of terms:

  • Push is a way of syndicating content across the www: the server sends content to the client whenever it's updated, for the client to view later.
  • Lazy evaluation is a computing method where you don't do anything computationally intensive until you really have to. For example, you could have an expression like x times y, and later the result of that gets divided by y. With lazy evaluation you'd do a little optimisation and end up with just x, never having to do all the maths in between.
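
To make that concrete, here's a minimal sketch of lazy evaluation in Python (all names invented): expressions are built as trees, nothing is computed until you ask, and the (x × y) ÷ y case is spotted and short-circuited so the multiplication never happens.

```python
# A toy lazy-evaluation sketch. Expressions build a tree; nothing is
# computed until force() is called on the result.

class Expr:
    def __mul__(self, other): return Mul(self, other)
    def __truediv__(self, other): return Div(self, other)

class Var(Expr):
    def __init__(self, name, value):
        self.name, self.value = name, value
    def force(self): return self.value

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def force(self): return self.a.force() * self.b.force()

class Div(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def force(self):
        # The optimisation: (x * y) / y simplifies to x, so the
        # multiplication and division are never actually performed.
        if isinstance(self.a, Mul) and self.a.b is self.b:
            return self.a.a.force()
        return self.a.force() / self.b.force()

x, y = Var("x", 7), Var("y", 3)
expr = (x * y) / y      # nothing has been computed yet
print(expr.force())     # → 7, via the shortcut, not via 7*3/3
```

The point being: because the expression stays symbolic until the last moment, there's room to optimise it away entirely.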

So how about lazy evaluation of push?

Let's pretend that we have a push network: Weblog posts are packaged up as individual resources (probably xml documents) and stored on the server as normal. From there they're sent along a route which could involve upstreaming to a cloud (Radio Userland), or going through the centralised blog posting server (e.g. Blogger), and from there to a central post clearing house. Registered services would pick up either the entire resource or just the titles or links, and use it for various purposes: Link ranking engines (Blogdex; Daypop); search engines; clustering and analysis tools; research; novel UIs; asynchronous RPCs; aggregation and favourites syndication; up-flow publishing; topic-sorted weblog-driven magazines. The whole flow. End to end.
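
The flow might be sketched like this (stage names are invented, and the real services do far more, of course): the full post is pushed through every hop, even when the final consumer only wants one field of it.

```python
# A toy sketch of the push flow: a post, packaged as a resource, is
# pushed through each stage in turn. Stage names are illustrative.

post = {"title": "Lazy push",
        "link": "http://example.org/posts/42",
        "body": "…"}

def upstream_to_cloud(resource):   # cf. Radio Userland's upstreaming
    return resource

def clearing_house(resource):      # the central post clearing house
    return resource

def link_ranker(resource):         # cf. Blogdex/Daypop: only wants links
    return {"link": resource["link"]}

# End to end: the whole resource travels every hop, whether needed or not.
resource = post
for stage in (upstream_to_cloud, clearing_house, link_ranker):
    resource = stage(resource)

print(resource)   # only the link was needed, but the full post was sent
```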

The problem is, this is quite inefficient. I'm not objecting to the centralised nature of bits of this, that's quite alright, but just to the amount of information being sent around.

How about, instead, each time a resource would flow on to the next stage, the resource stays where it was originally and we pretend that it happened? Instead of sending the actual resource, you send an expression which, if evaluated, would result in that resource. For most posts, this would be a URI plus an XQuery [or similar] on that location. These are upstreamed, aggregated, and so on, until the final step where something actual needs to happen.

At that point, the topic-sorted weblog-driven magazine performs its series of custom XQueries on the resources. In the push (content syndication) world, those resources would have to be held locally. But in this world, the expressions are downstreamed back towards the resource, and each step in the stream aggregates and optimises the queries, storing the results of the expressions as they're streamed back, until finally the resource itself is fetched back to the final system.
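
Here's a small sketch of that downstream path, under invented names: each intermediate hop memoises results on their way back towards the consumer, so a repeated query stops at the first hop that has already seen it, and the origin is only ever touched once.

```python
# Sketch: queries are downstreamed towards the origin; each hop caches
# the results flowing back, so later queries never go the whole way.

class Origin:
    """The server where the resource actually lives."""
    def __init__(self, resources):
        self.resources = resources
        self.fetches = 0    # count how often the resource really moves
    def resolve(self, uri):
        self.fetches += 1
        return self.resources[uri]

class Hop:
    """An intermediate node: forwards queries, caches what comes back."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.cache = {}
    def resolve(self, uri):
        if uri not in self.cache:
            self.cache[uri] = self.upstream.resolve(uri)
        return self.cache[uri]

origin = Origin({"http://example.org/posts/42": "<post>…</post>"})
chain = Hop(Hop(origin))   # two hops between consumer and origin

chain.resolve("http://example.org/posts/42")
chain.resolve("http://example.org/posts/42")
print(origin.fetches)      # → 1: the second query stopped at a hop
```

Real query aggregation would also merge overlapping XQueries at each hop, not just cache identical ones, but the shape of the saving is the same.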

I think I've just had a glimpse of what Level 2 of the www is going to be about. And why TBL said "where, in general, you won't be able to expect to get an answer in finite time".

(So something that comes along with this is routing of messages/queries in a similar way to SOAP -- but there's the resource, www-as-database, four-basic-verbs, XML aspects of REST. And the REST/SOAP debates are fascinating. But a different story entirely.)