Homo floresiensis has been discovered: an 18,000-year-old specimen of a new branch on the human evolutionary tree. Liang Bua 1 is only a metre tall, but an adult.
LB1 shared its island with a pony-sized dwarf elephant called Stegodon, a golden-retriever-sized rat, giant tortoises and huge lizards - including Komodo dragons. A local branch of Homo erectus, apparently.
It's been dubbed the hobbit. Now, let's see: there's Williams syndrome, which gives children a pixie- or elfin-like appearance (and, despite cognitive disadvantages, a rich and complex grammatical and musical ability, with striking conversation and richly expressive storytelling skills). And there's Charles Bonnet syndrome, which can involve hallucinations of extended landscape scenes and small figures in costumes with hats; [and] hallucinations of grotesque, disembodied and distorted faces with prominent eyes and teeth. I'm not saying anything, just free associating.
We pretend, in this industrial world of ours, that we're more-or-less interchangeable. We can do more-or-less the same tasks, with training; we're all people, after all. It's almost heresy to assert fundamental differences, but that's because there's a measuring tape involved when we say "difference": it always translates into worth and importance, how well or otherwise we perform, efficiency at a task.
It strikes me sometimes how different we are in our private lives, how incredibly different we are when we're on our own--what we do in lonely moments, how we think or occupy ourselves, that some of us talk out loud, and some of us think and think and think, coming up with permutations and combinations, or whatever. Then as soon as we meet with other people we pretend these big differences in the way we operate aren't there--or rather we do acknowledge them, but we dismiss them as personality quirks, just rough edges to the same basic shape.
In a pre-industrial society, where interchangeable consumers and cogs weren't needed, could we operate like this all the time? In a world like this, where distributed cognition is more important than being a step on a production line or a part in a machine, maybe we'd see how certain attributes had uses. A person with a great memory who talked all the time would be a wonderful contextual serendipity component to the distributed cognition machine-mesh. A rearticulator would be handy to avoid being stuck on local adaptive peaks. Once we've internalised the social network concepts, can we get on and start talking about what kind of organs (heh, branes) would be necessary in a dist.cog. world, before we had to keep up standards, when people could be really different, when it was okay to become the material because you took up the craft that spoke your internal language, so the quiet and gruff became blacksmiths, and there were witches, and yes, hobbits and elves and the little people too. (The fairies, on the other hand, and their fairy rings--that was just a warning about drinking too much cider and wasting your life under an apple tree. Step into the fairy ring (which grows under apple trees) and you'll come to 20 years later, after a life wasted on scrumpy. Myth, reality, whatever.)
Other thing about H. floresiensis.
(And how very appropriate I should have tangented onto networks from this species, the name of which reminds me of Flores and these conversational meshes. Acausal interconnectedness! Can we confabulate an analogy? Phonosysthesia/words/sentences--it all blends there, but what are the organs of language? Hm.)
Oh yes, other thing about H. floresiensis, the exciting bit:
Even more intriguing is the fact that Flores' inhabitants have incredibly detailed legends about the existence of little people on the island they call Ebu Gogo. The islanders describe Ebu Gogo as being about one metre tall, hairy and prone to "murmuring" to each other in some form of language. They were also able to repeat what islanders said to them in a parrot-like fashion. "There have always been myths about small people - Ireland has its leprechauns and Australia has the Yowies. I suppose there's some feeling that this is an oral history going back to the survival of these small people into recent times," said co-discoverer Peter Brown, an associate professor of archaeology at the University of New England.
[...] The myths say Ebu Gogo were alive when Dutch explorers arrived a few hundred years ago, and the very last legend featuring the mythical creatures dates to 100 years ago. But Henry Gee, senior editor at Nature magazine, goes further. He speculates that species like H. floresiensis might still exist, somewhere in the unexplored tropical forest of Indonesia.
The ethical possibilities here are wonderful. Let's say they exist, still. Are they human, or animal? Can we take one to bring up in Homo sapiens society, to see if it can learn language? I feel wonderfully decentred even now, not sure whether I can say "human" to talk about our own species. If they're dying out, can we give them medicine, or should we stay out? Should we start a reserve, withdraw from a portion of the Earth's surface to let them live? How do we treat them? If they're like us but without the propensity for religion, do they have souls? How about numbers, or property rights? Do they live in a smell-space? Fantastic.
When Mead was fooled by the Arapesh it was because she was seeing past a surface truth to one further beyond: You don't need a kitchenette and a white skin to be human. A big step at the time. She could only communicate the enormity of this by being fooled by the claim that the family unit and monogamous mating weren't also human but constructs [I know, I know]. We should listen to her lesson every time we see counter-cultural activity (the 1960s) dismissed as less than human. How many of the human universals can go before you stop being human? We need to remember her lesson, and find some meaningful way to judge (or really, escape the question-answer) which isn't centred on ourselves.
Ebu Gogo. Let a thousand floresiensis bloom.
Taming the Wild was just on, an informative and clever slice through the history of Britain and human impact on the land. I've snagged on one segment, about burial mounds (called barrows). Ancestors were buried in the barrows (being: a couple of rocks weighing tonnes, moved several miles, with thousands of tonnes of rubble heaped behind them, and tombs in tunnels underneath) for many generations. But they weren't just spiritual, and this was the bit I liked.
Monuments were beacons in the landscape, saying "this land is taken." These symbols of ownership cropped up wherever fields replaced woodland.
Aha, and that's why the ancestor stuff. Ancestor worship is important--it's a very human thing (storytelling in old age, the use of grandparents). There's a coevolution: we operate in groups to do well; attributes of our behaviour don't work unless there's a group structure around [learning, social structures]. Families are important because they cohere better than a random group (relationship quotients).
So given that, you can take this unarguable fact - ancestor worship - and build things off it. In this case, family ownership.
A previously smooth landscape is puckered because ancestor worship (virtually immovable) is extruded into the physical immovable (a combination of land + thousands of tonnes of rubble + time). The tomb is a spacetime knot that binds the family to the landscape.
At that point, if someone wants to take the land, you go "no, and we'll fight to the death for it because our ancestors are there." And they know you aren't bluffing because, well, they'd do the same, and they know you've backed yourself into a corner.
It's the same as emotions: If you display you're genuinely, truly furious, your opponent knows you can't be reasoned out of a violent and damaging response. If someone threatens your kid, you are genuinely and utterly furious, and this is displayed as a genuine expression, and at that point you've painted yourself into that same corner: Back off, or there's no telling what I'll do as I'll stop at nothing.
It's also the same as mutual assured destruction, and the announcement that nuclear missiles are computer controlled and outside human control: If you bomb us, we must bomb back, we can't do otherwise. (Or winning at playing chicken by discarding control, cf the beginning of Flashdance.)
A common pattern then, which I imagine occurs because all of these systems involve multiplicities and time binding. Multiplicities means that there can be two families, one with an ancestral mound bonded to the land, and another without. Or two nations, one that can go the MAD route, and another that can appear persuadable to appeasement. Or early humans, with more or less expressive faces*. The time binding means situations can be tested against one another, the better bonded ancestral mound will win, and evolvability can occur, which is the important factor. So are these two features associated with every instance of evolvability, or is this just one way it can arise?
I wonder whether we can extend the concept of property, and our understanding of it, into the non-human, like this, as entropy/ethics can be. A philosophy of the non-human.
* Again, here's a mark that something physical and immovable (chemical/hormonal changes in the brain and body, either being used for group coordination or activity readiness) can get translated into something virtual (facial expression) so long as the physical/virtual transformation is unfakable without lots of cost**. There's a pattern here in how the physical becomes virtual, and in the layers of virtual--communication by facial expression is virtual from the perspective of hormones, so maybe the virtual is subjective, or at least based on a non-human pov. So the real is transformed into the virtual [I'm insisting on using 'real' here, to mean 'the reality where we are' rather than some objective reality, so there can be many different reals], and the virtual is defined by some qualities which involve patterns/patterning/surfaces (perhaps the virtual is defined by its task: to act as a symbol/sign/index to the real?). Then the join between the two breaks, they diverge, act independently, which means the virtual merges back with the real (or they combine to create a new real, which is the same thing), then the process begins again. Still considering this, but it's a use of virtual I haven't considered yet, and possibly another way into the s9y.
** That's one way the physical immovable can go (get used by being translated to the virtual). Another is that it can be used for lies--there's quite an advantage in pretending that you're furious beyond control, after all, to appear to lock in a decision. What I suspect happens is that the physical immovable - the face - is gamed, at a cost, then a new immovable constructed, which (after some time) is gamed, and so on, until eventually the cost of lying is so great that the only possibility is the physical-virtual join. (It's not really possible to lie about the time it takes to construct a mound weighing thousands of tonnes, and the ancestral feeling bound up in that.)
Normalized data is for sissies [good thread at Kottke], said Cal [pdf]. I don't buy the pro-normalisation technical reasons given in the comments--Cal's right, and solid tech will get around the problems. Unnormalised data is an emulation of view tables anyway. The best reason for normalisation I can think of is to do with the social structure of the development team and how it changes over time, coupled with the ways the db is accessed.
Half of software architecture is making sure that somebody can fix a bug in a hurry, add features without breaking it, and be lazy without doing the wrong thing. A lot of that depends on whether you're training developers, how big the codebase is (and whether you should expect people to hold all of it in mind when adding new features), etc.
If you use the db pretty raw, without wrappers to take care of accesses, if the schema changes a fair amount, if it's heavily interdependent, if it's pretty big, if the database schema needs to grow quickly, or if you want developers to work on a small bit of the system without risking weird bugs, then you should normalise the data.
Unnormalised data means you've got the potential of changing one bit and leaving a bit that depends on that data - or replicates it - inconsistent. That's action at a distance, something to be avoided in software architecture. The world doesn't work like that, and people don't think counterintuitively when they're in a hurry.
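That inconsistency risk is easy to demonstrate. A minimal sketch, assuming a hypothetical photo site: the `users`/`photos` tables and the rename scenario are invented for illustration, not any real schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Denormalised: the owner's username is copied into every photo row.
    CREATE TABLE photos_denorm (id INTEGER PRIMARY KEY, owner_name TEXT, title TEXT);
    INSERT INTO photos_denorm VALUES (1, 'cal', 'sunset'), (2, 'cal', 'harbour');

    -- Normalised: photos reference the user by id; the name lives in one place.
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE photos (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id), title TEXT);
    INSERT INTO users VALUES (1, 'cal');
    INSERT INTO photos VALUES (1, 1, 'sunset'), (2, 1, 'harbour');
""")

# A rename that forgets one of the copies: action at a distance.
db.execute("UPDATE photos_denorm SET owner_name = 'calh' WHERE id = 1")
stale = {row[0] for row in db.execute("SELECT owner_name FROM photos_denorm")}
print(stale)  # {'cal', 'calh'} -- the two rows now silently disagree

# The normalised rename touches one row, and every photo sees it via the join.
db.execute("UPDATE users SET name = 'calh' WHERE id = 1")
names = {row[0] for row in db.execute(
    "SELECT u.name FROM photos p JOIN users u ON p.user_id = u.id")}
print(names)  # {'calh'} -- consistent by construction
```

The normalised rename is safe precisely because the name exists in exactly one place; the join is the price the unnormalised copy was trying to avoid paying.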
Mind you, a small team, good developers, experienced developers, a site which is fluid in other ways (a decent staging environment, nodes which are easy to pull out in an emergency), code which is easy to fix + a fast rollout system--all of those make flying close to the wire much more tempting and pretty much harmless. Given the team over at Flickr, they can do pretty much anything they want. I would be interested to know how their infrastructure (especially the db) influences the adaptiveness of the code. Doesn't it mean that adding similar-sized features will take more and more work, as the code base grows? This drive towards just-do-it, go back and refactor later, in code--it seems to me like it'd cause potential earthquakes later, where a small change turns into a cascade of refactoring, and it could be any small change, completely unpredictable. Again, with a small company and no ship schedule, not a problem. Another way the architecture is bound up with the social structure of the developers and the business model.
The new iPod Photo isn't about photos. The iPod is still about music, but the next obvious step is sharing, via wifi or dump to thumbdrive or near field communication or an ethernet jack and a built-in webserver or whatever. (Your iPod is your last.fm profile.) But you can't build that kind of functionality into a music device because the record industry will go ballistic.
But, as Steve Jobs said in the announcement, people own their family photos*. You can legitimately build all kinds of iPod-to-iPod functionality in, if the iPod is seen as a platform for media that doesn't have property rights attached. Bingo. And we know people like to share photos, bookmarks, playlists etc in their social networks. But once sharing and social networking [not explicitly] can be built into the iPod, why not add the radio functionality, the sharing of playlists, the same functionality that's already in iTunes? (It's easier to manage, from a rights perspective, because nobody else can write apps for the iPod.) Real-life Rendezvous anyone? The iPod as a physical avatar-hub for the ad hoc network--why bother stating it: your network is who you can see. Sit it in the middle of a table, switch on wifi, it runs as a shared drive so your meeting can all work on the same documents, run groupware etc (hey, corporate angle). Put it in your pocket, run a radio station for the people near your desk, or the people you pass on the street. Swap pictures, run Pokémon trading card games. But the music sharing, because really it's a music device, has to start with photos, that's the big excuse. A long-term plan.
* Not that it'll be used for family photos, it'll be used for downloaded and homemade porn. Tell me again why digital cameras took off. Because they were eight times the price? Because the quality was terrible? Or because you didn't have to show a processing company what you'd been photographing in order to obtain a permanent image of some moist crevice or another?
Go to news.bbc.co.uk, pick a few keywords, and try to knit a confabulation around them. See what comes out. news.bbc_cutup.txt. Not very good, but there's part of the basis of a story or another in there.
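Mechanising the cut-up takes only a few lines. A toy sketch: the fragments below are invented placeholders, not the actual news.bbc text; in practice you'd paste in phrases lifted from the headlines.

```python
import random

# Invented stand-in fragments; substitute phrases cut from real headlines.
fragments = [
    "ministers warned on Tuesday",
    "a species thought lost",
    "deep in the Indonesian forest",
    "the committee will report",
    "villagers describe murmuring voices",
    "no comment was available",
]

random.seed(18000)  # fix the shuffle so the confabulation is repeatable
random.shuffle(fragments)

# Knit the shuffled fragments into pseudo-sentences, three phrases at a time.
story = ". ".join(
    ", ".join(fragments[i:i + 3]).capitalize()
    for i in range(0, len(fragments), 3)
) + "."
print(story)
```

The seed is the only thing keeping it from a different confabulation each run; drop it for fresh nonsense every time.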
One Eskimo speaking to another Eskimo. The first Eskimo says, "You'll never guess what. Those social software people have three hundred words for 'friend'."
Found this morning, alone in an unsaved, open text document on my computer:
I have come to a compromise with the leopard. Last night I came home, checked my mail, ate cheese, wrote that and went to bed. What was I thinking? What does it mean?
The leopard is a symbol of nature, but not the green grass and brown cows kind of nature. There's the That's How The Universe Works kind of nature, How Things Happen. Tendencies. Inherent potentials of becoming. The way things fall. I'm reluctant to say processes and I'm reluctant even to say lines of flight, because I'm not talking about a constructed production line, and I'm not just talking about the individual tendency of an individual thing. There's a kind of commonality between different happenings. Analogies are real things. Okay, here's an example: The edges of X tend to get repurposed by use, the systems expand, and they recentre. This pattern is evident in any X which exhibits evolvability, so we have heat-trapping feathers being used for flying, and exaptive features in public spaces (a wall becomes an ad hoc seat, and is replaced by a real bench). Or another one: long feedback loops tend to get shorter, or to disappear. So you can see that in software and hardware where a single application or computer will be chunked up and commoditised, or get more tightly coupled. Some things are stable, or unstable. There are patterns like loops which generate (the metabolic cycle, puffers in Conway's Life), and the generated stuff builds up very slowly and provides a surface (eventually) for something new (coral chewed up by fish becomes sand, eventually becomes an island; oxygen dumped into the air makes an atmosphere). Side-effects don't disappear, in other words. Then there's digestion, another one: the transformation between two meshworks entails a linearisation and dehumanisation in the interim step, which isn't actually a step because it's a period of adding addressability (whether that's with tags or receptor shapes in the metaphorical stomach or language parsing modules). Entropy is another.
Whatever. Patterns and patterning. The way of the world, from those abstract happenings and tendencies, to human interactions, to very specific things (a small plate so I don't have to wash the teaspoon so often; the fridge door that swings closed because your hands tend to be full; the ability to distribute intelligence leading to something else). That's what the leopard represents. Flow.
We pretend the leopard doesn't exist, so often, that we can make a world of steps and processes and manufacture, edges and borders, industry, abstractions. We think that if we have a map, it's just a matter of walking there. It ain't so; beware the leopard.
The trick's to find the middle ground. Don't go with the flow, passively, but garden it. The flow is part of the self. Who was I talking to who told me how fish swim? They use the water around themselves, the vortices and turbulence; they use the properties of the water to propel themselves. They get more out of each stroke than rigid-body mechanics would predict, because they don't swim through the water, they just swim.
I think I was wrong last night, because it's not a compromise. It just is. It's a matter of stopping pretending there is a you and there is the leopard, but it's also not pretending that you and the leopard are the same. Things may be distributed, intertwingled, and blurred into one, but that doesn't mean that things individually don't exist (they're still pointable). Being, living, is a matter of doing and thinking against, with and of. Fighting but not fighting. A fish is of the water, and where it goes is a collaboration between the fish and the water, but there's still fish, and there's still water. I'm trying to find that place, and the first step is trying to let go of the right things, and trying to understand. I think that's what it means.
I have come to a compromise with the leopard.
Teaching Melville and style: a catalogue of selected rhetorical devices:
To increase my students' sensitivity to Melville's language, therefore, I assembled and distributed an alphabetical catalogue of rhetorical devices with definitions and exemplifications from the Melville oeuvre. The catalogue continued to grow even after the course was finished; what follows is a considerably abbreviated version that nevertheless covers twenty-six grammatical, linguistic, and (mostly) rhetorical terms--with mini-essays.
I liked this:
The difference is not a matter of subject, theme, character, or genre; it is those more "microcosmic" somethings that one artist can do so much more successfully than the other (facial expression, musculature, sculpted veins, anatomical proportion). In literature, those "microcosmic somethings" are manifested at the sentence level.
Currently halfway (not quite) through David Mitchell's Cloud Atlas. The whole appears to be a giant left-branching sentence, hirmus (characterized by the suspension of the completion of sense until its end), but more so: The fly the spider the cat the dog caught caught caught was swallowed by the old lady. Audacious, too, to continually suspend and open a new clause as the last was getting gripping; I've only noticed the joins once, every other time it's taken me by surprise (specifically, the penultimate first chapter, about two pages from the end, takes a sudden turn).
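That fly/spider/cat/dog sentence is centre-embedding, and its shape is mechanical enough to generate. A toy sketch, assuming nothing beyond the sentence quoted above; the function name and its argument conventions are mine.

```python
def center_embed(nouns, verbs, ending):
    """Build a centre-embedded sentence: noun phrases pile up on the left,
    and their verbs surface on the right in innermost-clause-first order
    (the last noun is the subject of the first verb emitted)."""
    # Each embedded relative clause pairs one extra noun with one verb.
    assert len(verbs) == len(nouns) - 1
    left = " ".join(f"the {n}" for n in nouns)
    right = " ".join(verbs)
    return f"{left} {right} {ending}."

# Depth-three embedding, reproducing the sentence from the text.
sentence = center_embed(
    ["fly", "spider", "cat", "dog"],
    ["caught", "caught", "caught"],
    "was swallowed by the old lady",
)
print(sentence)
# -> the fly the spider the cat the dog caught caught caught was swallowed by the old lady.
```

Each extra noun/verb pair adds one more level of suspension, which is roughly what the novel does at chapter scale: open a clause, interrupt it, and resolve it much later.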
Here be spoilers. What's more interesting is the way each chapter is embedded in, and dismissed as fiction by, the next. It reminds me of a device Katherine Hayles used when I heard her talk (notes on a similar talk). Using Egan's Permutation City and the concept of the universe-as-a-computer, she said: Is The Sims a simulation? [Yes.] Is a game of chess, being played in a computer game, a simulation, or the simulation of a simulation? [Just a simulation.] If we're running on a computer, the computer being the universe, what's the difference between us and a simulation running inside our program? [Given that, none.]
She proceeded to support this assertion using the various levels of simulation inside Permutation City, essentially proving a point about life (a simulation, in her hypothesis) using fiction (a simulation of a simulation, flattened to a simulation with the same weight). Beautifully circular, an argument so elegant I think my notes are covered with exclamation marks. Shockingly, I've lost them.
Hayles's How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics is absolutely cracking, by the way, a literary journey through the history and philosophy of cybernetics.
Ramble ramble ramble. I'm sure I mp3d the Tate Modern talks too, but those are missing along with my notes.