Computers can be trained to see. But they don't necessarily fixate on the features humans see.
Adversarial machine learning is a technique for altering an image so that a computer recognises it as something else, without it looking any different to humans.
For example: a panda that - with the right fuzz of pixels added to it - looks to the computer 99.3% like a gibbon.
A hack: adversarial stop signs.
the team was able to create a stop sign that just looks splotchy or faded to human eyes but that was consistently classified by a computer vision system as a Speed Limit 45 sign.
Examples are given.
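The panda/gibbon trick is usually done with the fast gradient sign method: nudge every pixel a tiny amount in whichever direction most increases the classifier's error. A minimal sketch, with everything invented for illustration -- a linear "classifier" stands in for a real neural network:

```python
# A toy fast-gradient-sign sketch. The model, weights, and image are
# all made up: score > 0 means class A (panda), score < 0 class B (gibbon).

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(w, x, eps):
    # For a linear model, the gradient of the score with respect to the
    # input is just w. Nudge every pixel by eps against that gradient.
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

# A 100-"pixel" image. Many small nudges add up: each pixel moves by
# only 0.05, but the score swings by eps * sum(|w_i|) across all pixels.
w = [0.5 if i % 2 == 0 else -0.5 for i in range(100)]
x = [0.05 if i % 2 == 0 else 0.01 for i in range(100)]
adv = fgsm(w, x, eps=0.05)

print(round(score(w, x), 6))    # 1.0  -> class A
print(round(score(w, adv), 6))  # -1.5 -> class B, from near-invisible changes
```

The interesting part is the last comment: no single pixel changed by more than 0.05, but because the perturbation is aligned with the gradient at every pixel, the tiny changes accumulate into a decisive flip. That accumulation across many dimensions is why the fuzz can look like nothing to us and like a gibbon to the machine.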
Ontology is the philosophical study of existence. Object-oriented ontology:
puts things at the center of this study. Its proponents contend that nothing has special status, but that everything exists equally -- plumbers, cotton, bonobos, DVD players, and sandstone, for example.
Things from their own perspective.
A desk telephone, from its own perspective, is constructed to entice (a curve of a handle, buttons that want to be pushed) to feed on sound. To be nourished by sound. And with that consumed energy, to reach out across the world and touch - out of an infinity of destinations and through the tangle - one other. And to breathe in relief at this connection, a sigh: another voice.
The Ethics of Mars Exploration, an interview with Lucianne Walkowicz:
it remains a fact that Mars is a place unto its own that has its own history, and what respect do we owe to that history? What rights does that history have?
Which makes me ask this:
Yes I believe there's a human imperative to go to Mars; yes I believe it has to be done in an inclusive way; yes space mustn't be about resource exploitation, a cosmic Gestell; yes potential life on Mars must be preserved.
But also, what Walkowicz said, the land, the land, the land.
I hike, and the land has an intrinsic right to be itself. But I also believe in the human experience of the land, that this is a component of meaning: so, paths? When you walk the trails of the American south west, you come to understand that the trail-makers are poets, giving the land a voice to sing through human experience: effort, surprise, endurance, revelation, breathlessness.
So there should be trails on Mars too.
Which makes me think this:
Who is working to understand this interplay of the subjectivity of the land, and the human gaze, right now? Not necessarily on Mars.
Landscape artists - landscape photographers - do this well.
And that's a process that, for Mars, could start today.
There is Mars exploration via rover right now. The rovers, of course, have cameras. Do they have landscape photographers on the team? Are those artists given rein to look, be, and create?
Why Hasn’t David Hockney Been Given The Keys To The Mars Rover Yet.
A list of interstellar radio messages. That is, ones we've transmitted, not ones we've received.
The first one, from 1962, in Morse code:
MIR LENIN SSSR. Sent to Venus.
A more recent one, A Simple Response to an Elemental Message, was transmitted in October 2016 and comprised 3,755 crowdsourced responses to the question
How will our present, environmental interactions shape the future? It was transmitted towards Polaris and will take 434 years to arrive. (Then another 434 years to hear back.)
The Golden Record is not a radio transmission but a physical item, copies of which were placed on Voyagers 1 and 2 in 1977. It includes pictures, sounds, music, and greetings in 55 languages, among them these words in Amoy, a language spoken in southern China:
Friends of space, how are you all? Have you eaten yet? Come visit us if you have time.
Which I hope desperately isn't misinterpreted as offering humanity up for lunch.
Voyager 1 will make a flyby of a star in 40,000 years. Star AC +79 3888 is 17.6 lightyears away, so the earliest we will receive a radio message back is in 40,017.6 years. We should remember to listen out for that. Year 42,034. June.
Over the weekend I heard it asked:
Who is keeping an archive of all the messages we send into space, and how will that archive be maintained? We won't receive an answer from the stars, if any, for hundreds or maybe tens of thousands of years.
If, when, we receive a reply saying
YES then how will we know what it's a YES about?
I spent the weekend at Kickstarter HQ in Brooklyn for PWL Camp 2017 -- a 48-hour, 200-person unconference where
the agenda is created by the attendees at the beginning of the meeting. Anyone who wants to initiate a discussion on a topic can claim a time and a space.
Tons of great conversations. A very open, generous, and talented crowd. My notebook is full but mostly incomprehensible. The above are four things that came up. I'm grateful for having been invited.
My Dearest Droogs,
Let's have a hardware-ish coffee morning! Soon!
Thursday 19 October, 9.30am for a couple of hours, at the Book Club, 100 Leonard St.
I'll be back from my travels, moderately jetlagged, and in no state to conduct linear conversations. So it will be especially important to (a) talk to everyone else who comes (they're always really friendly); and, (b) poke me in the ribs if you see me nodding off.
Usual rules: we don't do intros; everyone talks to everyone else; you order coffee from the counter and please don't forget to pay otherwise the staff get confused; bring a prototype if you have one; actually working with hardware is NOT a requirement, you just have to be curious. Here's what happened last time.
Might be 5 people, might be 25. If you're a startup and want to ask me about the new R/GA IoT Venture Studio, I am happy to chat.
(Also posted to the coffee morning announce list to which you should subscribe for future updates.)
This is an amazing long essay, well illustrated, about someone who builds a heat-sensitive camera. It is peppered with poetic descriptions of what the camera sees.
the air itself glowing
And, looking outside,
the vegetation is not as reflective, so you get the "blackness of space" sky with regular-ish landscapes. It's almost like being on the airless, derelict Earth - preserved under the void after whatever disaster befell it.
I'm Google by Dina Kelberman.
an ongoing tumblr blog in which batches of images and videos that I cull from the internet are compiled into a long stream-of-consciousness. The batches move seamlessly from one subject to the next based on similarities in form, composition, color, and theme. This results visually in a colorful grid that slowly changes as the viewer scrolls through it. Images of houses being demolished transition into images of buildings on fire, to forest fires, to billowing smoke, to geysers, to bursting fire hydrants, to fire hoses, to spools of thread.
Does what it says on the tin.
Here's a system using artificial intelligence to generate human faces.
Worth it for:
to see what the system does when it's asked to generate faces from inputs outside the regular range. The faces are weird patchworks, a computer-native cubism.
See also: WaveNet, which makes realistic speech audio also using A.I. It's incredibly realistic, but search for
babbling and listen to what the system produces in the absence of any text to process. It's a mess of clicks, hums, and wet mouth noises -- horribly human but with an absence of intelligence. Uncanny.
Imperfectly real. (Not quite sure when the real got relegated.)
Two possibilities for this shift:
The first is that legitimacy in the age of conversation is not communicated via iconic images. I've covered legitimacy previously, in the context of the media:
"People trust us because we've spent years developing a relationship with them. We have been scrutinized and found not evil. Our legitimacy comes from honesty, not from cultural signals or institutions."
The second is that this is the age of Photoshop, and everything mediated is manipulated. Hard to build trust.
It is also the age of marketing where "greed is good" and "might is right" have been joined by another tyranny: truth is what you can get people to believe.
So there's space for an approach that doesn't (appear to) dress up and doesn't (appear to) convince.
See also: the Instagram trend called the plandid,
the planned candid -- where you look totally natural in your posing, like you've been caught in the act and just so happen to look triple-digit-Insta-likes amazing.
Examples are given.
I grew up in the waning years of the Cold War, those happy days where apocalypse was total but distant, rather than continuous, partial, and immediate. The word "DEFCON" is engraved on my soul. Turns out each of the five levels has a code word associated with it too.
From DEFCON on Wikipedia.
The Triumphant Rise of the Shitpic, the patina that comes from cycles of screencapping and upload-compression as a picture is shared and shared again,
the first non-numeric indicator of viral dissemination.
Wonder how long it'll take for Domino's to adopt this.
Wonder which version of the iPhone will have a computational photography mode to create pre-distressed selfies, for that already-shared look.
See also: this video of the LaserSharp Denim HD Abrasion System which creates identical pre-distressed jeans.
See also: Gudak, the disposable camera app. You get only 24 photos at a time; a roll of film takes three days to develop; the photos are grainy and the light that leaks over them is the colour of summer days that never ended, when you were still young and you still laughed and your life stretched out ahead of you and you could still be anything.
Fun app. Five stars.
This oral history of the CGI visual effects in Terminator 2 is an awesome long read. So much of the use of computers was new, then.
Also awesome for this photo of Robert Patrick, almost naked, covered in a Sharpie grid, being filmed for motion capture.
Robert Patrick played the T-1000, the liquid metal morphing Terminator from the future.
Also, also awesome for the terminology of the engineers and artists:
So, we had what we called RP1 through to RP5. Robert Patrick - RP - that was the actual naming convention.
RP1 is the blob, an amorphous blob. RP2 is a humanoid smooth shape kinda like Silver Surfer. RP3 is a soft, sandblasted guy in a police uniform made out of metal, and RP4 is the sharp detail of the metallic liquid metal police guy, and then RP5 is live action.
Robert Patrick, the actor, the actual dude, gets relegated from his own name.
RP5. Fade Out.
I've been at a retreat the last few days, 20 of us at a gorgeous hotel in Norway nattering about artificial intelligence. Here's a photo of how insanely beautiful Norway is. And here's a list of who was there, plus some more background on the retreat from the organisers.
I collected book recommendations as I've done regularly at conferences. The question I ask is always the same: What 3 books should I read this year? I don't want to hear your best-ever books, nor the books that will make everyone believe you're a super-genius, just... if we were speaking face to face, knowing what you know about me, what are the 3 books you would recommend for me right now? Here's a pic of how the question is posed. Putting it that way gets some cracking suggestions.
Anyway, I've ended up with 40 recommendations from a dozen-plus folks. So here they are. All links go to a physical edition at Amazon UK.
Adrian Zumbrunnen, @azumbrunnen_:
Amber Case, @caseorganic:
Ben Sauer, @bensauer:
Bill Thompson, @billt:
Chris Noessel, @chrisnoessel:
Dan Harvey, @dancharvey:
Dan Hon, @hondanhon:
(I'm tempted to say that recommending a whole series is cheating...)
Josh Clark, @bigmediumjosh:
Karen Kaushanskyn, @kjkausha:
Kate Devlin, @drkatedevlin:
Me, Matt, a.k.a @genmon:
Warren Ellis, @warrenellis:
Recommendation made with no name attached:
Thanks all! Transcription corrections welcome.
This video of a robot sorting system for a warehouse.
The dance of those little orange cushions charging from source to destination, giving each other room... lovely.
Packet switching. It used to be that the pipes were visible, and the packets were dumb but had addresses. The junctions were smart and did the work. We call them routers. Here there are no routers and there are no pipes. But instead, autonomous packets.
I wonder if the internet could work like this: not dumb packets with addresses, but each packet with a tiny bit of software to choose its own next destination. I'd be interested to hear of any work in this direction.
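One way to picture that: give each packet a tiny policy function and let the junctions do nothing but offer up their neighbours. A toy sketch -- the network graph, the distance table, and the greedy policy are all invented for illustration:

```python
# "Smart packets, dumb network": instead of routers holding the routing
# table, each packet carries a small policy that picks its own next hop.

# A little network; edges say who you can hop to from each node.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

# A rough "distance to E" signal; a stand-in for whatever a real
# autonomous packet might sense (congestion, hop count, latency).
dist_to_E = {"A": 3, "B": 2, "C": 2, "D": 1, "E": 0}

def make_packet(dest, policy):
    return {"dest": dest, "policy": policy, "path": []}

def greedy_policy(here, neighbours):
    # The packet itself decides: hop to whichever neighbour looks
    # closest to the destination.
    return min(neighbours, key=lambda n: dist_to_E[n])

def run(packet, start):
    here = start
    packet["path"].append(here)
    while here != packet["dest"]:
        here = packet["policy"](here, links[here])
        packet["path"].append(here)
    return packet["path"]

p = make_packet("E", greedy_policy)
print(run(p, "A"))  # ['A', 'B', 'D', 'E']
```

The junctions here hold no routes at all; all the intelligence travels with the packet, which is exactly the inversion from routers-do-the-work packet switching, and roughly what those warehouse robots are doing on the floor.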
Woffice, a property service company in Handan, China, has a monthly relaxation day for its staff. One month it was "No Face" day.
Luckily for these employees in China, they've been given a day off to go "faceless". [Staff] wore masks on Tuesday so they didn't have to fake their facial expression throughout the day.
Most workers chose to go with the "No Face" mask to relax their smiling muscles and remain anonymous in front of customers
Identity is work.
The caption to the bottom photo is brutal:
This way no one can see me cry.
A robot monk in Longquan Temple, China. Named Xian'er.
Through a touch screen held on his tummy, he can answer voice commands and up to 100 questions on Buddhism.
A robotic priest in Wittenberg, Germany. Named BlessU-2.
The machine delivers various blessings in eight languages.
Does a benediction from a robot have an effect? Is sentience necessary to facilitate the presence of the divine?
Rings that look like fingers. Earrings that look like ears.
The memetic history of medieval elephants:
After the fall of the Roman Empire, elephants virtually disappeared from Western Europe. Since there was no real knowledge of how this animal actually looked, illustrators had to rely on oral and written transmissions to morphologically reconstruct the elephant, thus reinventing an actual existing creature.
I am in love.
If cancer can strike any cell, then why don't larger animals (with more cells) get cancer more than smaller ones? Peto's paradox:
the incidence of cancer in humans is much higher than the incidence of cancer in whales. This is despite the fact that a whale has many more cells than a human.
Why? One possibility: hypertumors.
A novel hypothesis resolving Peto's paradox: since cancer cells are predisposed to be aggressive, maybe mutant cancers appear in the cancers
that then grow as a tumor on their parent tumor, creating a hypertumor that damages or destroys the original.
In larger organisms, tumors need more time to reach lethal size, so hypertumors have more time to evolve.
In smaller animals, hypertumors don't have time to emerge, so cancer incidence is higher.
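The size/time argument can be put into toy numbers. Everything below is invented for illustration -- the cell counts are rough, and the doubling rate and hypertumor window are entirely made up:

```python
import math

# Toy model: a tumour starts from one cell and doubles every day; it is
# lethal once it reaches some fraction of the host's total cell count.
# A hypertumor, meanwhile, needs a fixed window of time to arise and
# sabotage its parent tumour.

def days_to_lethal(host_cells, lethal_fraction=0.01, doubling_days=1):
    lethal_size = host_cells * lethal_fraction
    return doubling_days * math.log2(lethal_size)  # growing from 1 cell

mouse = days_to_lethal(3e9)     # ~3 billion cells: rough, made up
whale = days_to_lethal(1e17)    # vastly more cells: rough, made up
HYPERTUMOR_WINDOW = 40          # days for a hypertumor to emerge: made up

print(round(mouse, 1))  # ~24.8 days: lethal before a hypertumor can arise
print(round(whale, 1))  # ~49.8 days: the hypertumor has time to destroy it
```

Because lethal size grows with the host but the tumour grows exponentially, doubling the exponent only adds days, not orders of magnitude -- yet those extra days are exactly the window the hypertumor needs. That's the whole resolution of the paradox in miniature.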
News from 1929:
[Professor] Wever and [research assistant] Bray took an unconscious, but alive, cat and transformed it into a working telephone to test how sound is perceived by the auditory nerve.
The cat telephone.
A telephone wire was attached to the nerve and the other end of the wire was connected to a telephone receiver. Bray spoke into the cat's ears; Wever listened from a soundproofed room 60 feet away.
The original paper from 1930 states that
speech was transmitted with great fidelity. Alas, no clue on the first words spoken over the cat telephone.
(Even more alas for the cat, who didn't come through the procedure alive.)
The first words spoken over the Chappe telegraph system, which later covered Napoleonic France with over 500 stations, on March 2, 1791:
If you succeed, you will bask in glory.
Trotify. A device that attaches to your bicycle and makes it sound like a horse.
Ice age Eurasia was not a human world. Cave bears and the Upper Paleolithic:
The longest war ever fought by humans was not fought against other humans, but against another species -- Ursus spelaeus, the Cave Bear.
Unlike human beings, cave bears probably could not have survived elsewhere ... The caves of ice age Eurasia were their world, and they spent enough time in these shelters that the walls of caves have a distinctive sheen that is called "Bärenschliffe"
The "Bärenschliffe" are smooth, polished and often shining surfaces, thought to be caused by passing bears, rubbing their fur along the walls. These surfaces do not only occur in narrow passages, where the bear would come into contact with the walls, but also at corners or rocks in wider passages.
For thousands of years in cultures all over the world the magic mushroom or psilocybe cubensis has been used by humans.
It is possible the psilocybe mushrooms evolved their ability to interface with animal consciousness to give them a unique look at all the information their brains typically disregard. The mushroom can inspire higher thought and evolution.
What if - hear me out on this - what if
it is possible the mushroom originated somewhere else in the universe forming symbiotic relationships with other species. Species all over the universe may find common ground in this higher consciousness symbiotically obtained from the same mushrooms. Maybe these alien species leave behind spores all over the universe, or perhaps the spores traverse space themselves.
Always Coming Home by Ursula Le Guin is an
archaeology of the future. This is an excellent review.
It’s a compendium of poems, linguistic studies, personal narrative and religious observations (with an original cosmology) about the Kesh, a society in far-future California living a kind of new Bronze Age utopia.
Anyway, much poetry.
And buried right in the middle of this book is the revelation that the Earth is also populated by a network of post-singularity artificial intelligences, Yaivkach, the City of Mind:
Some eleven thousand sites all over the planet were occupied by independent, self-contained, self-regulating communities of cybernetic devices or beings -- computers with mechanical extensions. This network of intercommunicating centers formed a single entity, the City of Mind. ... It appears that an ever-increasing number were located on other planets or bodies of the solar system, in satellites, or in probes voyaging in deep space.
Its observable activity was entirely related to the collection, storage, and collation of data
Which is what it does.
They seem not to have interfered in any way with any other species.
There’s a kind of information exchange, mediated by special sites called Exchanges.
Le Guin has put the chapter about the City of Mind online. It’s short and an interesting read, one view of what it might be to cohabit our planet with an intelligence that no longer cares about us. Here: Yaivkach: The City of Mind.
Going through some of my old notes, I found this paragraph from the Extended Phenotype by Richard Dawkins:
Janzen (1977) faces up to the same difficulty, suggesting that a clone of dandelions should be regarded as one 'evolutionary individual' (Harper's genet), equivalent to a single tree although spread out along the ground rather than raised up in the air on a trunk, and although divided up into separate physical 'plants' (Harper's ramets). According to this view, there may be as few as four individual dandelions competing with each other for the territory of the whole of North America.
It's been a rough week for business and the Internet of Things.
On the industrial IoT end of things, GE - which has bet on "digital industrial" in a big way - has scaled back its target revenue in this space from $15 billion in 2020 to $12 billion. With industrial IoT we’re talking applications like remote monitoring of wind turbines, improved construction equipment utilisation, and smart power grids.
Even GE’s adjusted numbers are massive, but as Stacey Higginbotham's analysis explains, the adjustment shows that
industrial IoT isn't a problem that can be tackled as a horizontal platform play. She gives a couple of related examples, including
Samsara, a startup that formed in 2015, aimed to build a wide-scale industrial IoT platform that started with generic sensors. It has since narrowed its focus to fleet monitoring and cold-chain assurance, which is how some of the earliest users of its product used it.
For me, this is a healthy shift. The technology behind the sharp, physical end of the Internet of Things is stabilising but still in flux. And I mean everything: data centres, connectivity, monitoring tools, security, provisioning standards, and so on. For a company like GE, building platforms in a fast-changing platform ecosystem is a long way from core competency, and not a good place to be.
Instead, as I've said before, focus on applications. Provide real business value with whatever platform tools are at hand, and leave room to hop technology as and when.
Widely mocked startup Juicero is shutting down. Juicero raised $120MM to sell a $400 home juicer -- one that wouldn't juice any fruit, only proprietary Juicero packets. With IoT technology keeping the consumer channel open, the projected lifetime value must have been enticing to investors. But the product made a number of missteps: a little too keen to tap that recurring revenue, it wouldn't work without wi-fi.
Despite this news, I remain optimistic about consumer IoT. But we can take some lessons.
If the Juicero juicer is really a channel, not a product per se, shouldn't it have been managed by a brand strategist -- someone sensitive to the latent meaning of the product features (and anti-features) for the audience, and their impact? My hunch is that, with just a couple of small changes, Juicero would have felt high-value rather than money-grubbing.
(And if you need to be convinced, read Russell Davies on the iPhone TV ads and his concept of pre-experience design.)
With my startup hat on, I can see the reason to charge for the machine. For the consumer, however, it's simply paying for the privilege of paying more. There's an equitable balance to be found, I'm sure, but maybe this is the best it gets for business models in consumer IoT: there are nice businesses to be built (maybe even at scale, like Nespresso), but they will always be a hard slog and never have the margins of a pure software play.
But but but. I remain positive:
As the Reverend of Revenue says, profitability means you can own your own destiny. Could Juicero have sought to build a business that worked small and allowed it to fund its own growth? And on that platform, for hardware startups, could there be discovered the scale of the non-hardware Silicon Valley-style startups? That was the Amazon playbook, after all.
The ideal business model for consumer IoT remains elusive.
I ran the R/GA IoT Venture Studio earlier this year, centred on what we dubbed Enterprise IoT: the sweet spot that offers the real business value of industrial IoT, but with the productised scalability -- and faster route to market -- of consumer.
The native business model of Enterprise IoT is hardware-enabled SaaS. The software-as-a-service mindset is cribbed from the online world, and it's not just a pricing model but a whole set of techniques about marketing, pricing, metrics, and growth. It's neat because it means recurring revenue, and that matches the cadence of the recurring operating costs necessary for these kinds of server-heavy data businesses.
What "hardware-enabled" means is that although the hardware is necessary (it's a sensor, or a camera, or whatever), it's not core. It can be commodity. To take two examples from the recent Venture Studio: we worked with Winnow, which is enabled by a smart food-waste bin in the commercial kitchen, but provides ongoing value through (and charges monthly for) the intelligence that bin produces. And Hoxton Analytics, which monitors pedestrian footfall using machine learning. It uses commodity web-connected cameras (from Cisco) but, again, is primarily a data play providing ongoing value.
I’ve seen close-up how these hardware startups are able to focus on their true differentiation -- which isn’t the hardware.
Another benefit of this model is that these startups have customer retention literally bolted to the wall, yet they’re able to sidestep the friction and risk of custom hardware development and batch production.
So if hardware-enabled SaaS is the model for Enterprise IoT, could there be a similar flip for consumer?
My instinct is that there's a freemium-like model to be found. Popularised by LinkedIn, freemium was the realisation that -- with a digital service -- 5% of a massive customer base paying is better than 100% of a tiny one.
This wouldn't be quite the same for consumer, but imagine a fictional Juicero (to stick with that example) that was a great juicer for any fruit -- and also offered the ability to "upgrade" to a hassle-free monthly subscription of more exotic juice packets.
Of course LinkedIn innovated on both revenue and distribution simultaneously. It wouldn't have worked without the viral traversing of your address book. Consumer IoT hasn't yet discovered its virality, and that's a challenge.
Conspicuous setbacks like those above damage confidence in the Internet of Things, but they're part of the process and it's important to learn from them.
IoT is an enabler, not a feature. Like machine learning, it's an interoperating set of technologies and approaches that opens doors in all kinds of sectors. For IoT, the immediate value is in bringing the dividends of the 50-year digital boom right into the real world.
This is a challenge for the business world (for corporates, for investors, and for founders) because there's no guarantee that (a) existing business practices will remain intact; or, (b) lessons learnt about the Internet of Things in one sector will translate to a second.
So what to do if you're in that world? Watch, learn, experiment, and share. It’s how we get through the idea maze together.
Everyone has a pet theory about Blade Runner, and I want to tell you mine. Spoiler: Blade Runner is about Blade Runner. Or rather, it's about creating Blade Runner. I reckon many films and books make more sense seen this way: creatives are narcissists, and creative works are commentaries on the act of creation.
Ok. Let's start with an easy one. In Star Wars, what is the Force? This 2005 article in Slate hits the nail on the head:
the characters come to understand that there is another agent, external to themselves, that is dictating the action. Within the films' fiction, that force is called ... er, "the Force." It's the Force that makes Anakin win the pod race so that he can get off Tatooine and become a Jedi and set all the other events in all of the other films in motion. We learn that Anakin's birth, fall, redemption, and death are required to "bring balance to the Force" and, not coincidentally, to give the story its dramatic shape.
There's a tension for an author between doing what the characters and internal logic of the universe demand, and doing what the reader or viewer demands: moving the story forward, keeping attention through cliffhangers and long story arcs, surprising but not subverting the genre, and so on. It's a balance.
At its worst, when plot beats sense, blunders are easily spotted and called out as "deus ex machina" and MacGuffins. At best, the story feels completely natural.
I've read that Pixar consider three foundational elements, and each has to make sense in the context of the previous: the world, then the characters, then the narrative. If there is trouble resolving the story, the characters (or even the world) may have to change. This loopback is how the eventual whole feels so complete, immersive and organic.
That Star Wars article continues:
The Force is, in other words, a metaphor for, or figuration of, the demands of narrative. The Force is the power of plot.
The Force is another way of bridging the needs of the world and the needs of the narrative: it's an in-fiction concretisation of the gap itself. The relationship between the characters and the Force - that is, the prophecies and the balance - is an examination by the author into this gap.
The monolith in 2001 is, like the Force, a catalytic agent: it turns the apes into humans, and takes modern day humans through another evolution and brings about the Star Child.
As has been pointed out, the monolith is the cinema screen, and this idea has been well explored. The proportions are the same; it transforms the in-fiction characters just as it mysteriously transforms the audience.
So in the film's opening and during the intermission, we are not looking at an empty black screen at all. We are looking directly at the surface of the monolith! The monolith is the film screen and it is singing directly at its audience in the same way that the apes and astronauts are entranced by its heavenly voice, not realising that they are being communicated with directly.
But for me, 2001 (the movie) is an exploration of the relationship between the director and the audience, with the in-world characters making the examination by glimpsing, from their side, this boundary: the screen/monolith.
There's the famous shot of the aligned planets: this conjunction only makes sense from the perspective of the viewer, but there's no viewer present in space at this point... except, suddenly, the audience. So the audience is forcibly inserted; given a location in the in-world universe.
The boundaries are blurred again when a shot on the Moon brings the monolith (as Tycho Magnetic Anomaly One) - black, indistinguishable from the dark room of the cinema - from the edge of the screen, again pulling the audience's environment into the film. An equivalence is drawn between the audience's world and the agent of change in the in-fiction world.
Which is of course true: the fiction-world only lives while the film plays, while the literal film is projected. The characters' reaction to the embodiment of that (the monolith) is as spiritual and ineffable as ours would be, encountering our own agent of reality.
Sticking with science fiction, Arrival (2016) - which is a gorgeous, beautifully paced movie, and you should definitely see it - gets into playing with time.
Spoilers, obviously, so let me summarise: aliens land, and their language is somehow outside time. They apprehend the past and future as one, fitting together into a cohesive whole. A human - a woman - learning their language, finds she can now do the same.
As a film this makes a cracking story. As the short story on which it was based (Story of Your Life, by Ted Chiang) it's a classic. The story of the title is both the in-fiction story of the woman's daughter, and the short story in the reader's hand. The aliens' ability to apprehend all of time at once (but also be within it, yet without the capacity to change what happens) is the reader's perspective too.
Chiang is using his protagonist as an agent to examine whether it's possible to break through from the inner reality of the fiction to the outer reality of the reader.
This section is kinda obscure, so feel free to skip. But before you do: you should read these Egan novels because otherwise you'll be missing some of the best, most robust hard sci-fi of the late 1990s/early 2000s.
Greg Egan is an Australian author and computer programmer. The kind of author who, when he invents in a story a game called quantum soccer where the players move a ball which is a quantum mechanical probabilistic wave function, and scoring a goal means manipulating the probability of the "ball" such that it is (probably) in one of the goals, he then goes ahead and builds a simulation of the game playable on his website. The kind of author who works out the equations for a rock in orbit around a black hole, and then has to invent new words to describe new directions because space gets all mixed up under the extreme regime of general relativity.
Three of his early novels are investigations of what it means to be human, and how human-ness is conserved across greater and greater extreme translations from the flesh and the everyday. For me these three sit together as a trilogy: Permutation City, Schild's Ladder, and Diaspora. They're surprisingly easy reading, and have that magical boiling-frog characteristic of gentle escalation where every single step makes individual sense but you look behind you at the end and all you can say is "holy shit how did we end up here." (Like Apocalypse Now, where you get to the end and all you can think is: hang on, weren't we just surfing.)
This is only going to make sense if you've read them, but my contention is that each book is about the characters of the inner reality probing and attempting to understand the outer reality. And the outer reality, in this case, is not only the reader's world, but the actual physical book in the reader's hands, paper pages and all.
If our own universe were actually a book, one that was written, isn't this how we would attempt to understand the outer reality -- piecemeal, and never completely? In fact, with our enormous particle colliders, and speculation about the universe being a holographic projection of a pattern on a bubble surface, and attempts to find ways we might test that, isn't that what's happening now?
In fiction, there are three times. The time of the inner reality, of the fiction, of the characters. The time of the reader or audience. And the time of the author. These times don't only vary in pace, but may be ordered differently. They may repeat, or not. They have differing agency over what is real.
This is fertile ground for exploration.
Tom Stoppard's play Rosencrantz and Guildenstern Are Dead follows two minor characters from Shakespeare's Hamlet between the scenes of that play, interleaving scenes from the original Hamlet itself.
It opens with the two characters asking themselves whether they are doomed to have the same conversation again and again. Well yes, they do in the play. But they do in another sense, in the outer reality, because the play has a nightly performance.
They ask each other whether they remember what happened before. Was there a before? For the character, kinda: the character has a memory and a backstory, but if the audience didn't see it, did it really happen? And there is definitely a "before" for the actor playing the character.
We'll come back to Ros and Guild. They're replicants.
So Stoppard's play is a play exploring what it means to be a play. It's built on good source material: Shakespeare was exploring the same ideas with Hamlet.
First, yes, the famous play within a play at the heart of Hamlet. A recursion like the monolith representing the cinema screen being shown on the screen.
Secondly, and mainly, the ghost.
Hamlet is a clever, wonderful, tightly told, and above all realistic play. The story unfolds from the internal drives of, and feelings between, the characters. There are few coincidences, no deus ex machina. It's insightful and subtle, and derives from details in the depths of the human condition. It feels true.
But at the beginning - the domino that kicks off the whole sequence of events - there is the ghost of Hamlet's father. You what? This isn't just Prince Hamlet's wild imagination. The guards see the ghost too. This is, right upfront in an obstinately real story, the presence of the supernatural, driving the narrative.
Sounds like the Force.
And, get this:
According to oral tradition, the Ghost was originally played by Shakespeare himself.
How's that for a statement on how the inner reality relates to the author from the outer reality!
The ambiguity about Blade Runner is whether Deckard, the replicant hunter, is himself a replicant. Are his memories real, or has he been instantiated with a remembered past borrowed from elsewhere; will he - like other replicants - live only for a brief time, just four years? Or is he human?
There's a solid theory that Deckard is a replicant with Gaff's memories. Gaff being a detective who makes origami that mysteriously mirrors Deckard's dreams, indicating that he has special access to Deckard's inner life.
What makes the Blade Runner ambiguity so delicious is that in the released 1982 theatrical cut, Deckard's replicant identity is left ambiguous. In the later director's cut, all the hints are inserted. We get to choose, and the fact that it's still debated which is the "true" cut (the one with the bigger audience? Or the one the director wanted us to see?) enlarges the ambiguity to ask who gets to determine reality.
But what happens if we apply the Narcissist Creator Razor? The answer becomes that Blade Runner is simply about the act of making Blade Runner. The fictional inner reality isn't about the story, it's about the reality of the maker. And what is that reality? This:
Deckard isn't a human, and Deckard isn't a replicant. Deckard is a sequence of recorded images of Harrison Ford saying lines written by someone else. The story is an exploration of that fact.
Here's Aaron Sorkin (screenwriter of the West Wing, A Few Good Men, and much more) talking about characters and backstory:
Your character, assuming your character is 50 years old, was never six years old, or seven years old or eight years old. Your character was born the moment the curtain goes up, the moment the movie begins, the moment the television show begins, and your character dies as soon as it's over. ... Characters and people aren't the same thing. They only look alike.
That's what's being explored in Blade Runner. Characters look like people, except they exist for only the duration of a movie -- only while they are necessary. They come with backstory and memories fully established but never experienced, partly fabricated for the job and partly drawn from real people known by the screenwriter. At the end, they vanish,
like tears in rain.
Like Rosencrantz and Guildenstern. Like replicants.
Roy knows he is a replicant. He's the one who comes closest to understanding his true nature: that his memories were given to him, that when the short span of the film passes he'll be gone. He comes to terms with his emotions about this, over his brief journey as a replicant but also as a character in a film, in a way that no one else does. The Off-World Colonies - Roy's point of origin and source of memories, but never seen - are a stand-in for the inaccessible outer reality of the creator.
Deckard is a character. Roy is a character. Gaff is a character.
So that’s what Blade Runner is about, for me: it’s an examination of what it means to be a character. It’s a creator using their creation to examine the nature of that creation.
(This is also why I don’t like the idea of the Blade Runner sequel. It risks the delicate balance of audience vs creator, and inner vs outer reality, and I think we might lose access to a very interesting place because of that.)
I am aware, by the way, that proposing a totalising general theory of all creative work is an utterly ludicrous thing to do. But to hedge the above appropriately would have added too many words, and this is long enough already.