1.
The Heider and Simmel animation (1944) (YouTube).
It’s a big triangle, a little triangle and a circle moving around a box – and it’s an early demonstration that people infer complex social intentions and a story even though the geometric shapes look nothing like humans.
All to do with motion.
Worth a watch. See what story you infer.
Ref.
Heider, F., & Simmel, M. (1944). An Experimental Study of Apparent Behavior. The American Journal of Psychology, 57(2), 243-259. https://doi.org/10.2307/1416950
Full paper here (PDF).
2.
Luxo Jr., Pixar (1986) (YouTube).
It’s a big lamp and a little lamp and a ball. (SPOILERS: which gets stomped on.)
Luxo Jr. is Pixar’s mascot, the hopping lamp in the logo at the beginning of every movie.
This two-minute film by John Lasseter with Ed Catmull was Pixar’s very first release, a demonstration of its photorealistic rendering software, but also proof that computer animation could convey character and emotion.
From Wikipedia on its debut at SIGGRAPH: “As soon as the lamp moved, people started going crazy. And then the ball came in, and they were going nuts.”
3.
ELEGNT: Expressive and Functional Movement Design for Non-Anthropomorphic Robot (Apple Machine Learning Research).
It’s an articulated robot lamp, like the Pixar one, but for real life.
There’s an embedded video; here it is mirrored on YouTube.
When the researcher in the video plays music, the “Expressive” robot lamp dances with her; when she asks about the weather, it looks outside first; when she’s working on an intricate project, it follows her movements to shed light more helpfully; when it reminds her to drink water, it pushes the glass toward her. When she tells it it can’t come out on a hike with her, it hangs its head in faux sadness.
Gorgeous!
The full paper is great. Check out Figure 5: “Illustration of the design space for expressive robot movements, including kinesics and proxemics movement primitives.”
Ref.
Hu, Y., Huang, P., Sivapurapu, M., & Zhang, J. (2025). ELEGNT: Expressive and Functional Movement Design for Non-anthropomorphic Robot (arXiv:2501.12493). arXiv. https://doi.org/10.48550/arXiv.2501.12493
4.
Sally–Anne test (Wikipedia), devised by Baron-Cohen, Leslie and Frith (building on Daniel Dennett’s false-belief task), to measure a person’s social cognitive ability to attribute false beliefs to others.
Or rather: belief perspective. Or rather: independent theory of mind.
Sally takes a marble and hides it in her basket. She then “leaves” the room and goes for a walk. While she is away, Anne takes the marble out of Sally’s basket and puts it in her own box. Sally is then reintroduced and the child is asked the key question, the Belief Question: “Where will Sally look for her marble?”
The independent theory of mind bit:
For a participant to pass this test, they must answer the Belief Question correctly by indicating that Sally believes that the marble is in her own basket. This answer is consistent with Sally’s perspective, but not with the participant’s own.
The test is performed with dolls.
Dolls, of course, do not have a mind for us to have a theory about. And yet! We can’t help ourselves. Minds everywhere we look!
(Oh: eye tracking of chimpanzees, bonobos, and orangutans suggests that all three anticipate the false beliefs of a subject in a King Kong suit, and pass the Sally–Anne test.)
5.
How are interfaces understood?
Although vision is certainly the means by which we perceive text and numbers on computer screens or hard copy, the primary activity is not sensory or perceptual but cognitive.(4)
(4) Terence McKenna holds that reading is not actually a visual activity at all, especially in “plain-vanilla” text environments. We don’t “see” the symbols, we “read” them [McKenna, personal communication].
The action is in our own heads.
That quote is from p. 202 of the first edition (1993).
I recently gave the opening keynote at Thingscon 2024 in Amsterdam.
My talk was called Context Window and it’s five perspectives on what it’ll be like to work and create in our gen-AI future.
One of the perspectives I called A world of personality, and it was about how we might work alongside our AI teammates in the same docs, sharing (to borrow a phrase from Brenda Laurel) a common ground.
i.e. it was about my AI NPC multiplayer cursor experiments on PartyKit and tldraw (accompanying blog post).
Why might we want to design our AIs as characters?
And what do you need for minimum viable identity anyhow?
Here’s what I said:
And yes, I know it’s simple, but it makes me think that we have to consider our “theory of mind” of all of our devices. And also a kind of proxemics… like, a cursor that comes right up close will appear to be way more certain than one that dances away.
All the new devices will have such complex functionality, and intelligence will be everywhere. Memory too. So perhaps all of this will be more intuitive if it comes with minimum viable identity, a kind of new skeuomorphism, and the ability to proactively say “hey I can help with that” when there’s an opportunity to do so.
I don’t think we need to anthropomorphise, just recognise the same entity from one interaction to the next.
I think that’s what motion does, in animation and in puppetry: it provides continuity. And that is all you need for the rest to fall into place: theory of mind, intentionality and capabilities, affordances and memory of affordances, and so on and so on.
It’s not hard. But powerful.
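To make the proxemics-plus-continuity idea concrete, here’s a minimal TypeScript sketch, in the spirit of (but not taken from) the PartyKit/tldraw cursor experiments. All names and numbers are illustrative assumptions: the agent’s confidence maps to how close its cursor hovers to the user’s focus point, and easing toward the target (rather than teleporting) gives the motion continuity that reads as one persistent entity.

```typescript
// Hypothetical sketch: proxemics + continuity for an AI cursor.
// High confidence -> the cursor comes in close; low confidence -> it hangs back.

type Point = { x: number; y: number };

const MIN_DIST = 12;  // px from the focus point when fully confident (assumed)
const MAX_DIST = 160; // px when completely unsure (assumed)

// Map confidence in [0, 1] to a hover distance: linear, clamped.
function hoverDistance(confidence: number): number {
  const c = Math.min(1, Math.max(0, confidence));
  return MAX_DIST - c * (MAX_DIST - MIN_DIST);
}

// Where the AI cursor should sit: offset from the user's focus point
// along an angle (letting the angle wander would give the "dancing" feel).
function cursorTarget(focus: Point, confidence: number, angle = Math.PI / 4): Point {
  const d = hoverDistance(confidence);
  return { x: focus.x + d * Math.cos(angle), y: focus.y + d * Math.sin(angle) };
}

// Continuity: ease toward the target each frame instead of jumping,
// so the cursor reads as one entity moving, not a series of teleports.
function step(current: Point, target: Point, rate = 0.2): Point {
  return {
    x: current.x + (target.x - current.x) * rate,
    y: current.y + (target.y - current.y) * rate,
  };
}
```

Called once per animation frame, `step` is the whole trick: the same smoothing that makes a multiplayer cursor feel like a colleague makes an AI cursor feel like a character.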
Here’s the full talk:
Context Window: Matt Webb at TH/NGS 2024 (YouTube).
Btw I keep a list of all my speaking gigs and podcast appearances. I don’t have anything lined up yet for 2025.