"The desktop is dead," says David Gelernter in The Next Computer Interface [via blackbeltjones]. Various other concepts are covered, all springing from the basic idea that the desktop/filing-system metaphor is limiting and outdated. Two of the products mentioned are Gelernter's Scopeware, a browser-based piece of software built on the Lifestreams model, and Star Tree, a way of navigating and viewing the web using nodes and arcs.

Naturally, I've been thinking about this.

Although I think the filing-cabinet model is dated and not suited to the way we store documents on computers, from a UI point of view I believe it's extremely strong because of its rigid map component. The brain is very good at analysing and remembering terrain. I noticed this particularly when moving from Mac OS 9 (a strong spatial desktop UI) to Mac OS X (a weak one), and the difference is significant.

So, let's decouple the two parts of the filing system. The storage system is nonsense: we've got a computer, and the computer should be the secretary and do the filing. Finding documents and information is a human job, and we should keep the interface for doing that, namely the spatial desktop. The human should influence and construct the filing layout; the computer should file information appropriately based on past behaviour, which can be derived both from which documents are where and from what the folders mean. Oh, and of course: folders are dynamic search results, not just buckets. What I'm saying is that like documents should accrete near like documents, in the corners of your file system, in the same way my pens "naturally" tend to end up on the right-hand side of my desk (because I'm right-handed). Whenever I want a pen, there's one to hand because I used one there last time. That's important.
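The accretion idea can be sketched as a toy: a folder defined by a query rather than by its contents, plus a filing rule that puts a new document wherever its most similar neighbours already accreted. Everything here -- the tag-overlap similarity, the function and class names, the "inbox" fallback -- is invented for illustration, not a description of any real system.

```python
from collections import Counter

def similarity(doc_a, doc_b):
    """Crude likeness measure: the number of tags two documents share."""
    return len(set(doc_a["tags"]) & set(doc_b["tags"]))

def suggest_location(new_doc, filed_docs):
    """File a new document wherever its most similar neighbours ended up."""
    votes = Counter()
    for doc in filed_docs:
        votes[doc["location"]] += similarity(new_doc, doc)
    if not votes or votes.most_common(1)[0][1] == 0:
        return "inbox"  # no precedent yet; leave it for the human
    return votes.most_common(1)[0][0]

class SmartFolder:
    """A folder as a dynamic search result: defined by a query, not a bucket."""
    def __init__(self, predicate):
        self.predicate = predicate

    def contents(self, all_docs):
        return [d for d in all_docs if self.predicate(d)]
```

The "pens end up on the right" behaviour falls out of the votes: a new invoice lands wherever old invoices accreted, without anyone filing it by hand.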

The other big point is something both these systems are reaching towards but which I don't think has been explicitly stated. A computer monitor replaces all our senses when we interact with the universe on the other side of the LCD. We're left with sight (in most cases), and a very crippled form of it; real vision doesn't work by having everything you see command equal attention. Computers aren't even as good as books here -- you haven't got that contextual "how many pages are left" sense. We're building from the ground up.

What is needed is peripheral vision. This may or may not couple with zoomable interfaces, but it does impose one technology constraint and one UI constraint. Firstly, the computer needs to be capable of finding contextual and associated information and displaying it in a way that makes clear the information is secondary. Maybe when you're looking at an email, associated emails glow, or the folders they're in appear "well trodden". Maybe you can overhear IM conversations [cheers Phil G]. Maybe the bottom left of your screen is dedicated to your contact-information memory, and contextual information about everyone mentioned on screen is displayed in that portion. Remember, everything has to be rebuilt here: your reality-associations layer, tunnel vision, concentration.
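One way to read that first constraint: secondary information gets screen presence proportional to its relevance, and never equal to the thing in focus. A minimal sketch, assuming mail-like items that carry a list of people; the relevance measure, the opacity scale, and all names are made up for illustration.

```python
def relevance(focus, other):
    """Fraction of the focused item's people who also appear in the other."""
    shared = set(focus["people"]) & set(other["people"])
    return len(shared) / max(len(focus["people"]), 1)

def peripheral_view(focus, archive, floor=0.2, ceiling=0.7):
    """The focused item at full opacity; related items dimmed by relevance.

    Returns (item id, opacity) pairs. Unrelated items stay out of the
    periphery entirely -- the point is secondary glanceability, not clutter.
    """
    view = [(focus["id"], 1.0)]
    for item in archive:
        if item["id"] == focus["id"]:
            continue
        r = relevance(focus, item)
        if r > 0:
            view.append((item["id"], floor + (ceiling - floor) * r))
    return view
```

The ceiling below 1.0 is the whole trick: an associated email can glow, but it can never be as bright as the one you're reading.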

The second point: to make this metaphor clear, we have to lose action-at-a-distance. Menu bars don't work in this model. Seeing bold text, you should look closer and see buttons, levers and handles on it that affect its weight or typeface. Or maybe you can pick up a tool to influence the text and carry it with you, but if the cursor is your point of concentration, your centre-point of tunnel vision, you shouldn't be affecting items miles away with an action in a menu. It's a rigidly enforced box model we need. Boxes can choose which boxes they contain, and boxes have their own behaviour and can decide what to do based on their context, but boxes can't dictate what happens outside them. Generally, that's not how the world works.
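The no-action-at-a-distance rule can be stated as code: an event is handled by the box it lands in and delegated inward to the boxes it contains, and a box deliberately holds no reference to anything outside itself, so nothing inside can reach out. A toy sketch with invented names:

```python
class Box:
    """A UI region that owns its contents and knows nothing outside itself."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = list(children or [])
        # Deliberately no parent pointer: a box cannot act outside itself.

    def handle(self, event, path=()):
        """React locally, then pass the event inward -- never outward."""
        path = path + (self.name,)
        reached = [path]
        for child in self.children:
            reached.extend(child.handle(event, path))
        return reached
```

A menu bar breaks this structure, because a click inside the menu's box would mutate text living in a different box; here, the controls for bold text would have to live as children of the text's own box.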

We need high-level concepts about how the universe works here, so we can emulate them on screen. Systems are systems, after all, and I think our metaphors so far have been slightly too parochial.