This is #3 in an occasional series of highly speculative, almost entirely unfounded hunches about AI.
Previously: state-sponsored IQ erosion attacks.
These two ideas about micro-apps, prompts, software, and hardware feel connected somehow. So I’ve put them together.
If the future is ephemeral AI-created micro apps, then what’s on my home screen?
Back in June, Anthropic released Artifacts for their Claude chatbot:
When a user asks Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside their conversation.
Here’s the twist:
- the Artifacts can be HTML and Javascript
- you can run them, right there, as micro apps
- you can keep iterating till you’re done
- and then you can export and share them.
For example: Simon Willison made a utility to write CSS for box shadows (you can try it here).
So no app studio is going to make that and sell it. It’s not worth it, commercially.
It’s useful to Simon. But even he probably won’t hang onto it. If he can’t find it when he needs it next, he’ll just make another one. The prompt to create the micro-app is about the same length as the search query to find it from last time.
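To make the scale of these micro-apps concrete: the useful core of a tool like that can be a handful of lines. Here’s a hedged sketch of the kind of generator function a box-shadow micro-app might wrap in a UI – the function name and parameters are my own invention, not Simon’s actual tool:

```javascript
// Sketch of a box-shadow micro-app's core logic (hypothetical names).
// Takes shadow parameters and returns the CSS declaration to copy-paste.
function boxShadowCSS({ x = 0, y = 4, blur = 8, spread = 0, color = "rgba(0,0,0,0.3)" } = {}) {
  return `box-shadow: ${x}px ${y}px ${blur}px ${spread}px ${color};`;
}

console.log(boxShadowCSS({ x: 2, y: 2, blur: 6 }));
// → box-shadow: 2px 2px 6px 0px rgba(0,0,0,0.3);
```

The artifact version would just add sliders bound to those parameters – which is exactly the sort of scaffolding a chatbot can generate on demand, and exactly why nobody would bother to ship it as a product.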
It is overall an interesting pattern! It feels clear that Artifacts will be a major interaction pattern for AI… albeit the pattern is incomplete.
How is it incomplete?
Well – imagine this is indeed the future of apps. We throw away the App Store (currently required by economics), and we throw away the idea of “installing” (currently required by computers). What then?
There are still a lot of other jobs performed by my phone’s home screen, by the brand of app icons, and by the fact that apps exist in a social context.
Like, how do we deal with
- discovery – learning that I could have a tool for this task at all
- re-discovery – coming back to an app that I’ve previously used and liked
- sharing – telling somebody about an app such that they can google it and get it themselves
- trust – via social proof or by the hostage capital of reputation (a.k.a. brand), trusting that an app won’t exfiltrate my data and that its algorithms work correctly, even if I can’t verify those facts myself.
I’ve written about part of this before: Who will build new search engines for new personal AI agents? (2024, which links to a patent describing the necessary parts).
But here I mean more the whole operating system…
A lot to figure out.
Excel almost got there.
Excel is almost a platform for micro-apps. People write sheets to perform tasks to do their jobs, and to capture and manipulate data. They distribute spreadsheets for non-experts to customise.
All it really needed was Excel-hub so you could import other people’s trusted, locked, versioned sheets.
Imagine being able to import a mortgage calculator sheet and integrate it into your home budget spreadsheet, seamlessly updating the sheet when your mortgage broker adds features.
Or, in a firm, being the business intelligence team, and being able to export queryable sheets, safe in the knowledge that people will be able to query them without screwing them up.
The minimum viable app is a checklist.
I read The Checklist Manifesto (2011) by Atul Gawande a few years back.
It’s about how the World Health Organisation uses checklists, even for simple procedures, and how introducing them had a massive impact on health outcomes. Airline pilots also use checklists extensively – when you’re busy you can miss even basic steps. Checklists increase reliability and safety. It’s a great book.
I know people whose jobs absolutely require high amounts of training and smarts – but only 20% of the time. The other 80% they can run on checklists… and often do. They write and share checklists among themselves.
And maybe there’s something in that, grabbing a copy of a known checklist made by a peer when you want to perform a task. Not quite an app. More like a doc.
An analogy is form pads from the Xerox Star (1981), the first commercially available PC with a GUI.
The Star was document-centric. It had docs not apps.
You used a set of standard commands, each of which had a dedicated key on the keyboard:
MOVE, COPY, DELETE, SHOW PROPERTIES, COPY PROPERTIES, AGAIN, UNDO, and HELP.
To create a new document, you would COPY an existing one, keeping blank documents for that purpose:
Several commonly used icons appear across the top of the [desktop], including documents to serve as “form-pad” sources for letters, memos, and blank paper.
The Apple Lisa (1983) picked up this metaphor:
For example: Documents are created by “tearing” a piece of “paper” off of a “stationery pad”. That is, double-clicking the stationery icon creates a new document icon. There are also no “applications”, only “tools” that must be present for you to work with the documents.
And, you know, this feature makes its way through to macOS, 40 years later.
From the macOS user guide: Create document templates on Mac. Check “Stationery pad” in the info pane for any document, even today, and when you double-click it, it acts not as a regular document but as a template.
So: micro-app stationery pads?
Tear off a new one from the top when you need it? Share pads with friends?
Or maybe you have a Pokedex for micro-apps? Could I capture them and trade them?
Hey I would prefer that actually.
For all you kids who weren’t lucky enough to encounter the strong file metaphor… read this:
Golems, smart objects, and the file metaphor (2021).
(That’s a post I wrote some years ago.)
Files were the embodiment of interop. Save in one app, open in another.
Files were boundary objects,
meaningful to users and meaningful to machines. You could manipulate an icon and tell the computer what to do at a really low level. The files you see as projects in Figma or docs in Google Drive are thin shadows by comparison.
That post is also about the mythological animated beings golems, which I will come back to shortly.
When my simple home appliances have intelligence too cheap to meter, how will I instruct them?
Another hunch!
As previously discussed, we have intelligence too cheap to meter (2023).
So how will we interact with our intelligent light switches, intelligent standing fans, intelligent digital clock radios, and so on, with embedded GPT-4+ intelligence for pennies?
We still want our smart things to present like appliances. I think having a home full of single-purpose AI elves hiding behind my light switches would simply take up too much room in my head.
But, as an alternative, do we really want to use an app on a smartphone as a controller, or a smart speaker, as we do today? It seems very indirect.
No, I want to interact directly.
Now, this is already in theory possible – but the hardware overhead is too high.
That is: Programming my standing fan would require a touchscreen, and that means a visual UI to design, possibly somewhere to plug in a keyboard. Too much.
But if I could simply speak out loud, looking at it, and say: hey standing fan, turn on when the air is getting stuffy, and whenever there are lots of people here, but never at night or the weekends.
It would be easy to have those instructions interpreted by a little AI on a chip with a mic and a presence sensor and so on. It will be cheap to add this component.
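What the on-device AI would be doing, in effect, is compiling that spoken sentence down into a rule it can check against its sensors. A hedged sketch of what the compiled rule might look like – the sensor names and thresholds here are entirely hypothetical, just to show the shape of it:

```javascript
// Hypothetical sketch: the rule a fan's on-board AI might compile from
// "turn on when the air is getting stuffy, and whenever there are lots
//  of people here, but never at night or the weekends".
// Sensor names and thresholds are invented for illustration.
function fanShouldRun({ co2ppm, peopleCount, hour, isWeekend }) {
  const night = hour < 7 || hour >= 22;   // "never at night"
  if (night || isWeekend) return false;   // "...or the weekends"
  const stuffy = co2ppm > 1000;           // "air is getting stuffy"
  const crowded = peopleCount >= 4;       // "lots of people here"
  return stuffy || crowded;
}
```

The point is that the natural-language prompt and the rule carry the same information – the prompt just happens to be the representation humans can read, write, and speak.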
The question for me is how it will look when even our basic home appliances are programmable. How will we tell?
This is where I come back to golems. From my post about files and golems, linked above:
Golems were activated with instructions, a spell, the shem, the prompt, and
the shem was written on a piece of paper and inserted in the mouth
So what I imagine is that my standing fan will have a small AMOLED screen on it, nothing special, where the buttons are currently, and in smallish text (because it’s there as a reference, not as an interface), the fan-as-golem’s instructions a.k.a. prompt a.k.a. micro-app will be printed there.
All my other devices would work the same.
That’s the lo-fi sci-fi future I want.