2004-11-11 Design Engaged neuroscience talk

* Neuroscience and interaction design. I've spent most of this summer working on a book about the perceptual and cognitive capabilities of the brain. It's nothing too high level, but in the same way that knowing the parameters of the hand can help you build a better pencil, maybe knowing some of the parameters of the brain can inform a better interface. I'll show off some of my favourite bits of neuroscience and psychology, and (if there's time) talk about the implications of this kind of knowledge.

-> the question is, i guess: is the brain some kind of invisible thing that we're built on top of, where everything is up for grabs, or not? in terms of full disclosure, it feels to me like thinking and acting is some kind of collaborative exercise: the brain as body, and the emergent thing from brain as body, assuming society and other people and all the rest.

- shape from shading demo (sketched in code after this section)
- the main thing I want to illustrate here is that we make use of some information to understand [shape] before other information is integrated [gravity]
- the visual system has many stages
- different bits have the same effect
  - eg: you shift your attention both to a sudden bright light and to a moving object that you're looking out for, but they differ in how voluntary they are
- susan kare, windows 3.1: more important than we realise

here's another one:

- the simon effect, game with traffic, tapping knees
- [the slide should be a fuzzy or jpeg'd picture with white text over it]
- we make assumptions:
  - the sun comes from the top *left*
  - if we're stimulated to respond, we respond in that direction
  - this is important for dialog boxes, say
  - matching the position of the keys for shortcuts (see the second sketch after this section)
- the difference in these things is measured in times of less than 100ms, and with probabilities usually around 50%
- but then raskin uses times of about 40ms to prefer mouse or keyboard responses, so it's the same order of magnitude

one more thing about how we're affected by what we see:

- cartoons of eyes looking one way or the other
- this is where neuroscience comes in
  - is this a learned behaviour? we always see eyes, from birth
  - or is it something special? (eyes processed early)
  - how early is it?
  - we're really good at distinguishing emotions, for example, but would you be as good at distinguishing cars?
- neuroscience can help answer this by monitoring which bits of the brain are active when doing different tasks. it's new, the last few years, and mostly it's confirming what we already know from psychology (which just measures reaction times). our models have mostly been correct. but there are times when we speculate (like that there's a specific face recognition module, separate from the main object one) and we can check that. it's another information source, moving quickly, and something to keep on top of.
- other than psychology, we knew things by monitoring brain-injured patients. eg, patients who could tell a fork by touch but not by vision, but could still reach right for it.

things to do:

- wrapup (implications)
- what is the importance of neuroscience here? why do we need to image the brain to know this, or why can you not just know it as a design rule of thumb?
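a minimal sketch of the shape-from-shading demo mentioned above, assuming numpy and matplotlib (the rendering details are my own, not from the talk): the two discs are identical except that the gradient is flipped, and they read as a bump versus a dent because the visual system assumes light from above before anything else is integrated.

```python
# A minimal sketch of the classic shape-from-shading demo (my own toy
# rendering): a disc that is bright at the top reads as a bump, the same
# disc flipped reads as a dent, because shading is interpreted under a
# light-from-above assumption.
import numpy as np
import matplotlib.pyplot as plt

def shaded_disc(size=200, light_from_top=True):
    """Return a square image of a disc with a vertical shading gradient."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.sqrt(x**2 + y**2)
    shade = -y if light_from_top else y        # bright edge at top or bottom
    return np.where(r < 0.9, 0.5 + 0.4 * shade, 0.5)  # disc on grey ground

fig, axes = plt.subplots(1, 2, figsize=(6, 3))
for ax, top in zip(axes, (True, False)):
    ax.imshow(shaded_disc(light_from_top=top), cmap="gray", vmin=0, vmax=1)
    ax.set_title("lit from top (bump)" if top else "lit from below (dent)")
    ax.axis("off")
plt.show()
```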
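and a toy sketch of the dialog-box point (the function and key choices are hypothetical, my own illustration): assign the on-screen-left button a left-hand key and the on-screen-right button a right-hand key, so the spatial position of the response matches the position of the target and you avoid the incompatible-mapping cost.

```python
# Hypothetical sketch: shortcut assignment that keeps the response on the
# same side as the button it triggers, per the simon effect.
LEFT_KEYS = ["f", "d", "s", "a"]    # left-hand home row, nearest first
RIGHT_KEYS = ["j", "k", "l", ";"]   # right-hand home row

def assign_shortcuts(buttons):
    """buttons: list of (label, side) pairs, side is 'left' or 'right'."""
    pools = {"left": iter(LEFT_KEYS), "right": iter(RIGHT_KEYS)}
    return {label: next(pools[side]) for label, side in buttons}

print(assign_shortcuts([("Cancel", "left"), ("OK", "right")]))
# -> {'Cancel': 'f', 'OK': 'j'}: left hand answers the left button,
#    right hand the right one, so stimulus and response sides agree.
```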
- add the coloring of the abacus, as a favourite example of using subitizing (see the abacus sketch at the end of this section)

* other things i haven't mentioned:

- there are neurons that code for specific things:
  - you see a certain object, like a chair or a cup
  - there's something at the tip of your hand, within reach: one lights up
  - but what also lights up is what you can *do* with the object: grasp it
  - and it *also* lights up when you observe that action: seeing somebody else make a grasping gesture

the thing is, we've already seen how the concept of something in the brain ("left") can influence what happens next (it's harder to do something which involves "right"). so seeing a grasp makes it more likely the grasp will actually happen. for instance, you make a picture of a coffee mug red or blue, and people have to pull a lever, red or blue for left or right. depending on which way the handle is pointing, that hand will move faster to the lever, because it's "prepared" (ie, already slightly activated). (see the compatibility sketch at the end of this section.)

the big question for interaction design is: how much of this is learnable? or rather, how much of this is learned because of the society we're in, and how much should we take advantage of? secondly, how mutable is this? the brain changes when you use a tool (positions close to a point of light - a cursor - following your hand on a screen are treated as if they're close to your hand; a rake will extend the range those neurons light up for). this happens in only 5 minutes. how long before it gets fixed? is it unlearnable again? this i don't know. but because the experiments have been done, it's going to be possible to find out the answers to those questions (rather than not knowing whether it's a true learning thing, or the person is catching on to something else, like wrinkles in the bit of paper used for the experiment, perhaps). but there are things which are harder set than others, and things which are more fluid. by being aware of them we can take advantage of them.

* actually, a more interesting route might be to say that the way skills improve is by extending the neural map dedicated to them. cf the monkey and the rake [maravita04: using a tool to reach a distant object extends what counts as "close" to that bit of the body map]. it's not that you just don't notice the nuances, it's that you *can't*.

-> also not mentioned: the brain throws information away. (the great riddle of our time.) the brain makes assumptions to do this.

- the brain throws information away. we can represent a lot, compare some things, consciously consider a few, and do pretty much only one thing at a time.
- one of the strategies we have is to throw away stuff that looks unimportant: figure out what we need as soon as possible so as not to bother processing the rest
- another is to store information externally
- the monkey thing is about body shape: we don't bother remembering the shape of the body because it's just there the whole time. the brain isn't sitting apart from the body or the universe, it's intrinsically involved.

so there are these two extreme positions people take: the brain as a fixed thing which is self-contained, an information processing unit, almost determined; the brain as a totally mutable thing which is shaped by society and the body, really just a kind of memory that binds us with time.

- both are wrong. (as always when somebody sets up a dichotomy.) just as the body has constraints (abilities of the eye, size of the hand), the brain has ways of moving, strategies that are more congealed. the rapids. it's worth remembering where these are, DESIGN AS A MATTER OF FLYING THE JETSTREAMS.
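a toy sketch of the abacus point (the rendering is entirely my own): the russian abacus colours the two middle beads on each wire, so any run of same-coloured beads you have to take in stays within the subitizing range of roughly four, graspable at a glance.

```python
# A toy rendering (my own) of one wire of a Russian abacus: beads 5 and 6
# are a different colour, so no run of identical beads is ever longer
# than four -- every run stays inside the subitizing range.
def wire(count, beads=10):
    """One wire with `count` beads pushed to the counting side."""
    cells = []
    for i in range(beads):
        marked = i in (4, 5)             # the two middle, coloured beads
        pushed = i < count
        if pushed:
            cells.append("O" if marked else "o")
        else:
            cells.append("#" if marked else ".")
    return " ".join(cells)

for n in range(11):
    print(f"{n:2d}  {wire(n)}")
# reading 7 is "four plain, two coloured, one plain": three glances,
# never a count past four.
```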
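and a minimal sketch of the structure of the mug-handle experiment (entirely my own toy version; real studies use calibrated displays and response boxes, and a terminal can't resolve effects of tens of milliseconds): colour is the task, handle direction is the irrelevant "preparing" cue, and you compare reaction times on compatible versus incompatible trials.

```python
# A toy version (my own) of the mug-handle compatibility experiment:
# respond to colour ('f' for red, 'j' for blue) while the handle
# direction is the irrelevant spatial cue. Terminal timing is far too
# coarse for the real effect; this only shows the trial structure.
import random
import time

def trial():
    colour = random.choice(["red", "blue"])     # red -> 'f', blue -> 'j'
    handle = random.choice(["left", "right"])   # irrelevant cue
    target = "f" if colour == "red" else "j"
    input("press enter when ready")
    t0 = time.monotonic()
    resp = input(f"{colour} mug, handle pointing {handle}: ").strip()
    rt = time.monotonic() - t0
    side = "left" if target == "f" else "right"
    return handle == side, resp == target, rt

compat, incompat = [], []
for _ in range(10):
    same_side, correct, rt = trial()
    if correct:
        (compat if same_side else incompat).append(rt)

for label, rts in (("compatible", compat), ("incompatible", incompat)):
    if rts:
        print(f"{label}: {1000 * sum(rts) / len(rts):.0f} ms mean, n={len(rts)}")
```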
- we use the eye instead of remembering things. a bird, which has a cheaper cost to moving its head, moves its head a lot more.

-> why this is useful to think about: because analogies, even though we can't *prove* they're useful, exist. the brain has got a good strategy for doing a particular thing; maybe we can do it in a similar way.

slides:

- shape from shading, flat
- shape from shading, shaded
- visual system picture (from that flash place?)
- susan kare windows 3.1
- picture of traffic with rules over it
- picture of dialog box with picture of keyboard overlaid
- cartoon picture of eyes
- cartoon picture of robot eyes
- some kind of slide for implications - and i need to think of something to say here
- picture of a russian abacus, in case we have time

# I said: the representation of a thing is the same as almost doing it. given we think of affordances as literally represented in the brain, that means visual affordances are really important, because seeing them makes them more likely to be performed.