Every so often I find myself returning to the ideas from the ETCON Biological Computing session. There's a particular snippet in my notes: "a language where you can't label anything else. all you can do is emit and accept, so you emit stuff, and if something roughly can deal with it, it accepts it. wow!"
Emit and accept. To force the paradigm, Geoff Cohen was talking about a computer language where names and labels were outlawed. Instead, processes would listen for data and emit when they needed to, in much the same way as enzymes have protein-shaped holes in them, actually. On a very simple level, this way of dealing with data is good because it removes the dependency of an earlier program having to know about a later one in order to put it in the pipeline.
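A minimal sketch of what emit-and-accept could look like, assuming a shared pool that dispatches by shape rather than by name. All the names here (`pool`, `listen`, `emit`, `step`) are invented for illustration:

```python
pool = []        # emitted data waiting to be accepted
processes = []   # (predicate, handler) pairs -- no labels, just shapes

def listen(predicate, handler):
    """Register a process by the shape of data it accepts."""
    processes.append((predicate, handler))

def emit(datum):
    pool.append(datum)

def step():
    """Offer each pooled datum to anything that roughly fits it."""
    remaining = []
    for datum in pool:
        accepted = False
        for predicate, handler in processes:
            if predicate(datum):
                handler(datum)   # handlers may emit() new data in turn
                accepted = True
        if not accepted:
            remaining.append(datum)   # nothing fit; leave it in the pool
    pool[:] = remaining

# An upcaser that accepts any string. It never names what comes next.
results = []
listen(lambda d: isinstance(d, str), lambda d: results.append(d.upper()))
emit("hello")
step()
# results now holds ["HELLO"], and the pool is empty
```

The point being: the emitter of `"hello"` has no idea the upcaser exists, and the upcaser has no idea where the string came from.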
Two more becauses, while I think of them. A: pipelines are what people do. That's what the industrial process is, chunking and containing; ditto science and writing (as opposed to speech). But nature doesn't work with pipelines, and nor do computers need to. B: and if I can get back on to queue theory for a moment, ah, hang on:
BEGIN:TANGENT From the safety of a clearly labeled tangent, back on to queue theory. It strikes me that push and pull queues are a really fundamental aspect of artificial versus natural systems. Push feels difficult to point at because it's completely coherent with how our society works: resources are dug up, then they're assembled, then they're put in shops, then they're sold to people. Push is advertising, is inventories and warehouses, is running out of oil, of money. Pull, on the other hand... pull is feedback, is making use of what's available and competing; it's a society. It's ascetic, operating on available resources. Push is rules; pull is incentive fields. Pull is the robustness principle. But because artificial systems can't afford undirected development, push is the only way to go. The push maxim is: go one step, look where you need to go, do another step. The pull maxim would be: see what's around (all of it), and do something with it; repeat. Not very directed. With expensive raw materials, push is the only way to make a car. But when the parts are duplicable at zero cost, and when there isn't an end product, pull is the way to go. Whereas... data is free to copy, and technological progress doesn't have an end-state. Pull is the future. END:TANGENT
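The two maxims can be caricatured in code. This is a rough sketch, not a real scheduler, and every name in it is invented: push has an upstream planner deciding in advance who does what; pull has one shared pool that workers draw from only as they're ready.

```python
from collections import deque

jobs = ["dig", "assemble", "ship", "sell"]
workers = ["w1", "w2"]

# Push: the planner directs each job to a specific worker up front.
# If a worker stalls, its backlog grows -- inventories and warehouses.
def push(jobs, workers):
    inbox = {w: deque() for w in workers}
    for i, job in enumerate(jobs):
        inbox[workers[i % len(workers)]].append(job)  # directed from above
    return inbox

# Pull: one shared pool; each worker takes the next job only when free.
# Feedback, operating on available resources.
def pull(jobs, workers):
    pool = deque(jobs)
    done = {w: [] for w in workers}
    while pool:
        for w in workers:       # stand-in for "whichever worker frees up"
            if pool:
                done[w].append(pool.popleft())
    return done
```

The assignments come out the same in this toy case; the difference is who decides, and when. In push the plan exists before any work happens; in pull, nothing is allocated until a worker reaches for it.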
The ideas of biological computing make a lot of sense where lots of things need to happen to data. My email is, in order: weeded for spam once, filed and/or archived, collected, spam-checked again, read, responded to. It's a tedious pipeline to set up. What if, instead, all the bots doing the work had email-shaped indentations in them, so the email would stick and the bot could go to work? Now, Ben Hammersley has a much better example of a system that really needs emit and collect (we were talking offline on Thursday), but I'll let him talk about that when he's ready.
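The email idea could be sketched like this, assuming each bot recognises "email-shaped" data by its fields rather than by its place in a pipeline. All the names and the field set here are invented:

```python
def looks_like_email(d):
    """The 'email-shaped indentation': match on shape, not on label."""
    return isinstance(d, dict) and {"from", "subject", "body"} <= d.keys()

def spam_bot(mail):
    mail.setdefault("tags", []).append("spam-checked")

def archive_bot(mail):
    mail.setdefault("tags", []).append("archived")

bots = [spam_bot, archive_bot]   # no ordering, no pipeline wiring

def release(datum):
    """Anything email-shaped sticks to every bot with a matching hole."""
    if looks_like_email(datum):
        for bot in bots:
            bot(datum)

mail = {"from": "a@example.com", "subject": "hi", "body": "hello"}
release(mail)
# mail has now been worked on by every bot, none of which knew the others
```

Adding a filing bot means appending it to `bots`; nothing upstream or downstream has to change, which is exactly the tedium the pipeline version can't avoid.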
But what I really started all of this to say is that the conversation moved on to: what would the abstraction layers of an emit-and-collect web browser be? Tough. You'd type something in on the keyboard, and the string would go into the datapool. All the various processes with URI-specific glue would copy the string, do their bit, then autorelease. One of these processes would say "hey, it's a URI", make a network connection, GET the resource at that URI, and emit a copy of it back out into the pool. The HTML glue of the parser would stick to the resource while it was understood and transformed into a version that can be rendered to the screen. And a browser window (or whatever) would pick up the rendered page and display it. But that needn't be all -- other URI processes could copy the string for a history list, or an intelligent proxying system. And no bit of the system needs to know about any other. Emit and collect.
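Under the assumptions above, the whole browser could be sketched as one datapool plus a handful of shape-matching processes. Everything here is invented for illustration (the tuple tags, the fake GET, the upper-casing "renderer"):

```python
pool = []

def emit(d):
    pool.append(d)

def is_uri(d):
    return isinstance(d, str) and d.startswith(("http://", "https://"))

history = []   # the history list's private copy of URIs
screen = []    # what the browser window has displayed

processes = [
    # The fetcher: accepts URI-shaped strings, emits a copy of the resource.
    # (A fake GET; a real one would make the network connection here.)
    (is_uri, lambda d: emit(("html", "<p>hello from %s</p>" % d))),
    # The history list: also accepts URI-shaped strings, keeps its own copy.
    (is_uri, history.append),
    # The parser glue: sticks to HTML-shaped resources, emits a rendered page.
    (lambda d: isinstance(d, tuple) and d[0] == "html",
     lambda d: emit(("page", d[1].upper()))),   # stand-in for real rendering
    # The window: accepts rendered pages and displays them.
    (lambda d: isinstance(d, tuple) and d[0] == "page",
     lambda d: screen.append(d[1])),
]

def run():
    """Drain the pool, offering each datum to every process that fits."""
    while pool:
        datum = pool.pop(0)
        for accepts, work in processes:
            if accepts(datum):
                work(datum)   # no process names any other process

emit("http://example.org/")   # the string you typed goes into the datapool
run()
# screen now holds the rendered page; history holds its own copy of the URI
```

Note that the fetcher and the history list both matched the same string independently, and the parser never found out where its HTML came from. That's the "no bit needs to know about any other" property in miniature.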
Various Natural Language Parsing methods work in a similar way. Even if systems aren't rewritten like this, it's an interesting way to imagine the network. A fun exercise.