Loosely I can see two visions for the future of how we interact with computers: cyborgs and rooms.
The first is where the industry is going today; I’m more interested in the latter.
Cyborgs
Near-term, cyborgs means wearables.
The original definition of cyborg by Clynes and Kline in 1960 was of a human adapting its body to fit a new environment (as previously discussed).
Apple AirPods are cyborg enhancements: transparency mode helps you hear better.
Meta AI glasses augment you with better memory and the knowledge of the internet – you mutter your questions and the answer is returned in audio, side-loaded into your working memory. Cognitively this feels just like thinking hard to remember something.
I can see a future being built out where I have a smart watch that gives me a sense of direction, a smart ring for biofeedback, smart earphones and glasses for perfect recall and anticipation… Andy Clark’s Natural Born Cyborgs (2003) lays out why this is perfectly impedance-matched to how our brains work already.
Long term? I’ve joked before about a transcranial magnetic stimulation helmet that would walk my legs to work and this is the cyborg direction of travel: nootropics, CRISPR gene therapy, body modification and slicing open your fingertips to insert magnets for an electric field sixth sense.
But you can see the cyborg paradigm in action with hardware startups today trying to make the AI-native form factor of the future: lapel pins, lanyards, rings, Neuralink and other brain-computer interfaces…
When tech companies think about the Third Device – the mythical device that comes after the PC and the smartphone – this is what they reach for: the future of the personal computer is to turn the person into the computer.
Rooms
Contrast augmented users with augmented environments. Notably:
- Dynamicland (2018) – Bret Victor’s vision of a computer that is a place, a programmable room
- Put-that-there (1980) – MIT research into room-scale, multimodal (voice and gesture) conversational computing
- Project Cybersyn (1971) – Stafford Beer’s room-sized cybernetic brain for the economy of Chile
- SAGE (as previously discussed) (1958–) – the pinnacle of computing before the PC, group computing out of the Cold War.
And innumerable other HCI projects…
The vision of room-scale computing has always had factions.
Is it ubiquitous computing (ubicomp), in which computing power is embedded in everything around us, culminating in smart dust? Is it ambient computing, which also supposes that computing will be invisible? Or calm computing, which is more of a design stance that computing must mesh appropriately with our cognitive systems instead of chasing attention?
So there’s no good word for this paradigm, which is why I call it simply room-scale, which is the scale at which I can act as a user.
I would put smart speakers in the room-scale/augmented environments bucket: Amazon Alexa, Google Home, all the various smart home systems like Matter, and really the whole internet of things movement – ultimately it’s a Star Trek Holodeck/Computer way of seeing the future of computer interaction.
And robotics too. Roomba, humanoid robots that do our washing up, and tabletop paper robots that act as avatars for your mates, all part of this room-scale paradigm.
Rather than “cyborg”, I like sci-fi author Becky Chambers’ concept of somaforming (as previously discussed), the same concept but gentler.
Somaforming vs terraforming, changing ourselves to adapt to a new environment, or changing the environment to adapt to us.
Both cyborgs and rooms are decent North Stars for our collective computing futures, you know?
Both can be done in good ways and ugly ways. Both can make equal use of AI.
Personally I’m more interested in room-scale computing and where that goes. Multi-actor and multi-modal. We live in the real world and together with other people, that’s where computing should be too. Computers you can walk into… and walk away from.
So it’s an interesting question: while everyone else is building glasses, AR, and AI-enabled cyborg prosthetics that hang round your neck, what should we build irl, for the rooms where we live and work? What are the core enabling technologies?
It has been overlooked, I think.