Hardware-ish coffee morning, Thursday 15th

I am BACK FROM MY SUMMER HOLS, it's raining outside, and I am in the mood to hang out with hardware folks. Let's have a hardware-ish coffee morning?

Thursday 15 September, 9.30am for a couple of hours, at the Book Club, 100 Leonard St.

(Timed to follow the Internet of Things conference ThingMonk, so if you're in town for that, do come and hang out for coffee too.)

Usual drill... there's no standing up and doing intros, or anything super formal. We just meet in a convenient cafe and hang out. Folks are often involved in the hardware scene somehow, whether it's making stuff for a hobby, figuring out how to do manufacturing, or in the middle of their Kickstarter campaign. All pretty chilled. Bring prototypes if you got em.

tbh it might just be me and thee. But that's fine, we'll have a cuppa and have a chat.

I have a secret agenda -- I'm heading up R/GA's newest startup accelerator and we're focusing on hardware and Internet of Things startups. The announcement was just the other day. So I'm thinking about what kind of support startups really need, and I'm talking to as many people as possible about that.

See you on the 15th!

ps. for email updates about hardware-ish coffee mornings, subscribe to the mailing list.

Two obvious financial tips

I think the LinkedIn euphemism for it is a "portfolio career," but really what that means is I have a bunch of stuff on the go simultaneously.

So for the past three months I've been working with Google, directing a small team on an invention project. I have my vending machine bookshop; I advise a couple of hardware startups; I've been doing a bit of teaching, etc, etc. I am trying to avoid building another agency.

Working for myself: I love the independence.

Working for myself: Holy shit I hate thinking about cashflow. It destroys any kind of creativity I have, and stops me being casual.

There's a time for hustling, and there's a time for being casual. I find the most interesting opportunities emerge from coffees and talking widely. And interesting opportunities breed interesting opportunities -- as Jack says, you get what you do. So, doubly important to hold off accepting anything until the great stuff appears.

And if I haven't got much money in the bank? That's when I make bad decisions. I mean, this is a question of BATNA: If my Best Alternative to a Negotiated Agreement is that I can't pay my mortgage, then I have to take whatever gig is going, at whatever terms.

I follow two rules to keep myself sane as an independent. This goes for freelancers, contractors, sole traders, and whatever other forms of "self-employed" there are out there.

It occurred to me that other people might be interested, so I thought I'd share them here.

Pay yourself a salary

Business money is not my money. To smooth out peaks and troughs, all gigs pay into a separate account and I pay myself monthly.

My salary is the same amount every month, and paid on the same day of every month.

(Business) taxes also come out of this float.

Build a runway

Once I take into account business expenses and my salary, I can calculate how many months I can survive without work. That's my runway.

If my runway is six months, I can sleep at night. If it's six months minus one day, that's a psychic shitstorm right there.

The reason being that it typically takes me three months to go from asking around to starting a gig (longer for the most unusual ones). Then let's say I get to invoice after a month's work, then it takes a month to get paid, then add a month as a buffer... that's six months right there.
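The arithmetic is simple enough to sketch. A back-of-the-envelope version, with invented figures for illustration (these are not my actual numbers):

```python
# Back-of-the-envelope runway calculation. All figures are
# illustrative placeholders, not real numbers from the post.
savings = 18000    # business account balance (after setting aside tax)
salary = 2000      # fixed monthly salary, paid on the same day each month
expenses = 1000    # average monthly business expenses

monthly_burn = salary + expenses
runway_months = savings / monthly_burn

# The six-month target: ~3 months to land a gig, 1 month of work
# before invoicing, 1 month waiting to get paid, 1 month buffer.
target = 3 + 1 + 1 + 1
print(runway_months >= target)  # True: sleep at night
```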

When I started as an independent again, I kept my salary super low until I built up my six months' runway.

There's a flip side: If the runway is too long, I stop being hungry. Being hungry is good.

Minimum viable financial management

Two tips. Not rocket science. I imagine most people have something similar. For me, this is what gives me room to be exploratory, and how I sleep easier at night.

Hardware-ish coffee morning, next Thursday

Hey, shall we do a July hardware-ish coffee morning? And, just for kicks, shall we try a different location?

Thursday 21 July, 9.30am for a couple of hours, at Machines Room, 45 Vyner St.

Last time we had a fun crowd. Thanks for coming Ross, Paul, Anders, Pauline, Josh, Avril, Phoenix, John, Lloyd, Tom, and Nat.

It was from Nat that I learnt about Machines Room -- it's a makerspace right in the middle of London's densest area for hardware startups. Tech Will Save Us is on the same road, for example.

There's equipment there, and coffee, and events. So next week is going to be a busy one, with lots of us doing stuff together! On Tuesday night, I'm taking part in an event about hardware startups and business models. Sign up here. On Wednesday, my bookshop vending machine Machine Supply will be moving there for its latest residency. And on Thursday morning, this hardware-ish coffee morning!

Usual drill... there's no standing up and doing intros, or anything super formal. The coffee morning is simply a friendly space to hang out, chat, get caffeinated, and compare notes on everything hardware related, whether that's making stuff as a hobby, figuring out how to do manufacturing, swapping interesting new Kickstarters, or just spending time with like-minded people.

Hope to see you at Machines Room next Thursday!

Hardware-ish coffee morning, this Thursday

I figured it might be fun to get together for coffee this week? Usual game -- nothing formal, just hanging out in a cafe with a bunch of folks in the same game. Hardware startups, electronics, physical installations for work, hobby Internet of Things stuff at home, or simply following along by backing tons of Kickstarter projects...

I'll be at the Book Club (100 Leonard St) from 9.30am on Thursday 16 June.

Come join me! Would be great to catch up. I'll make sure I have some Machine Supply badges with me.

There's a newsletter for these announcements. Subscribe here.

How my Twitter bot makes personalised animated GIFs

Ben Brown noticed that my bot @5point9billion made him a personalised animated GIF when it tweeted him yesterday (on the occasion of light that left Earth as he was born, right at that moment passing the star Iota Pegasi, a little over 38 light years away). And he was curious about how it did that. So:

There's a previous write-up about @5point9billion here. From that post:

My new bot is called @5point9billion which is the number of miles that light travels in a year. The idea is that you follow it, tweet it the date of your birth (e.g. here's my starter tweet), and then it lets you know whenever you reach Aldebaran or wherever.

You get tweets monthly, and then weekly, and for the last couple of days... and then you pass the star. It feels neat, don't ask me why.
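The bot's core sum is simple: one light year of distance per year of age. Assuming it works roughly like this (the function here is my own hypothetical sketch, not the actual bot code):

```python
# How far has light travelled since a given date of birth?
# One light year per year, so it's really just an age calculation.
from datetime import date

def light_years_since(born, today):
    """Distance light has covered since `born`, in light years."""
    return (today - born).days / 365.25

# Someone born on 1 January 1978 had, by mid-2016, just been passing
# stars roughly 38.5 light years out.
distance = light_years_since(date(1978, 1, 1), date(2016, 7, 1))
```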

Since that write-up, I've also added a website to the bot. In addition to getting the realtime notifications on Twitter, you can sign in on the site and see what stars you've already reached.

Check this out: There's also a public view, with an animation. This is a 3D animated map of all the star systems we can see from Earth, within 100 light years. It sits there and rotates. You can type in your date of birth, and it'll show you what stars you've already reached.

I made this public view as a "kiosk" mode when @5point9billion was exhibiting at the Art of Bots show earlier this month. The stars were laid out on the floor, fanning out from the Sun which was right by the kiosk. Here's a photo. It was good fun to walk out from the Sun till you find the star you've just passed. And then to walk out to about 80 light years and think, hey, most people die around this point, and look at the stars falling just further from you and think, hey, I probably won't reach those. Huh.

The star map is drawn and animated in Javascript and WebGL using three.js which I really like.

And doesn't it look kinda the same as the personalised star map that the bot made for Ben? Yup.

Making animated GIFs

I knew I wanted to tweet out personalised, animated star maps, whenever a bot follower passed a star (there are over 500 followers, and between 2 and 5 of them pass a star each day).

Routes I considered but discarded pretty fast:

  • Generating the star maps offline. For sketching on my Mac, I use a Python drawing package called PlotDevice -- this is what I used to make the first quick-and-dirty star map. I don't like generating graphics offline because I want the ability to tweak and change my mind
  • Drawing the graphics frame by frame using a dedicated package like Cairo. But I already have star maps in Javascript for the browser. I don't like the idea of having two routes to draw the same graphics for different outputs. Feels like a lot of work

This is the rendering pipeline I settled on:

  • The source animation is the same animation I use for the website... it's drawn in Javascript using three.js. It's just a page on my site
  • I already have queues and asynchronous processing on my website. The website is all Python because that's my preferred language, and I have my own Twitter bot framework that I'm gradually building up (this is a whole other story)
  • When a user passes a star, the machine responsible for that task adds a tweet to the send queue, and flags it for requiring media
  • At the appropriate time, the queue runner loads the animation page using PhantomJS which is a web browser that can run headless on the server. It's possible to drive Phantom from Python using Selenium
  • Because the animation is created on demand, and generated just for this tweet, it can include personalised information like today's date, and the name of the user
  • The animation exposes a single Javascript function, step(), that renders the next frame. Phantom has the ability to reach into a page and make Javascript calls
  • Using Phantom, each frame of the animation is generated by calling step(), capturing it as a screenshot (a PNG) in an in-memory buffer, and then down-sampling it to half its original dimensions (this makes the lines sharper)
  • Using images2gif (this is the Python 3 version of the library), the frames are assembled into an animated GIF, and saved as a temporary file
  • The GIF is optimised by shelling out to gifsicle, a command-line tool for that purpose
  • Finally, the media is uploaded to Twitter using Tweepy. Technically Twitter supports animated GIFs up to 5MB, but this is only available using a kind of chunked upload that Tweepy doesn't yet support, so the GIFs have to come in under 3MB. Twitter returns a media ID, which the code associates with the queued tweet in my send queue, and that is posted when its time comes round. (The send queue ticks every 40 seconds, because of Twitter's rate limits.)

If you're curious, here's the source animation on the website. And here's how it looks in a tweet.

If you want, knock the "draw=1" off the URL -- you'll get a blank page. Then call step() in your browser's Javascript console and see each frame being generated.
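In rough Python, the heart of that capture loop looks something like this. This is a sketch, not the real code: the frame count is a guess, and Pillow stands in for images2gif as the GIF assembler.

```python
# Sketch of the GIF pipeline: drive headless PhantomJS via Selenium,
# call the page's step() once per frame, screenshot each frame,
# down-sample it, assemble a GIF, then optimise with gifsicle.
import io
import subprocess

def gifsicle_args(src, dst):
    """Command line for the gifsicle optimisation step."""
    return ["gifsicle", "-O3", src, "-o", dst]

def render_gif(driver, url, out_path, frames=60):
    """`driver` is assumed to be a selenium PhantomJS webdriver."""
    from PIL import Image  # Pillow stands in for images2gif here

    driver.get(url)
    images = []
    for _ in range(frames):
        driver.execute_script("step();")          # render the next frame
        png = driver.get_screenshot_as_png()      # in-memory PNG bytes
        im = Image.open(io.BytesIO(png))
        im = im.resize((im.width // 2, im.height // 2))  # sharper lines
        images.append(im)
    images[0].save(out_path, save_all=True, append_images=images[1:],
                   duration=40, loop=0)           # assemble animated GIF
    subprocess.run(gifsicle_args(out_path, out_path), check=True)
```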

There's a wrinkle: Phantom doesn't support WebGL, so the star map animation in three.js had to be re-written to draw directly to canvas... which three.js supports but you have to add custom sprites and a few other things. It gets hairy, and I'm super happy to have worked with @phl on that side of things -- he looked after the Javascript drawing with his amazing code chops.

Another wrinkle: PhantomJS 2 (which this requires) installs on the Mac using Homebrew just fine, but is a pain to build on Ubuntu which is what my server runs. There's a pre-built binary here.

In summary, this is a rendering pipeline which:

  • Fits my web-first approach... there's no separate drawing package just for these animations, so debugging an image is as simple as opening a browser window
  • Minimises the number of moving parts: I've added the ability to create images using Phantom but that's it, there's no separate drawing package or offline rendering
  • Is agile: I can tweak and change last minute

What else am I using this for?

I prototyped this rendering pipeline with another Twitter bot, @tiny_gravity, which just does a tiny particle simulation once every 4 hours. Sometimes it's pretty.

This animation doesn't use three.js for drawing, it uses processing.js, but the principle is the same. Again, the animation is just a webpage, so I can tweak the animated GIFs in the same way I tweak the rest of my website and bot behaviour. Here's that animation as a tweet.

One of the things I'm most enjoying about having multiple projects is how they cross-pollinate.

My main side project right now is my bookshop-in-a-vending-machine called Machine Supply. Here it is at Campus, Google's space for entrepreneurs in Shoreditch, London.

It tweets when it sells a book. Because of course it does.

The selection is changed over every Monday, and you'll notice that each of the books has a card on the front (here's a photo) because every book is recommended by a real human made of meat.

These cards and the shelf talkers (the label which says the item code and the price) are beautifully designed by my new friends at Common Works. But they're a pain to produce: For layout, the templates are in InDesign (which I don't have), then I have to send an Excel spreadsheet of the new stock over to Sam at Common Works, which he then puts into the template, and prints.

My new process comes straight out of the @5point9billion code. The browser is my layout tool.

So Sam moved from InDesign to the web, and here are this week's shelf talkers as HTML. This is part of my admin site; I've temporarily turned off permission checking on this page so you can see it. The template is automatically populated with details from the weekly planogram. (A planogram is the merchandising layout for a set of shelves or a store.)

And here's the exact same page as a PDF. The pipeline is taken from @5point9billion: Phantom is used to grab the webpage, and this time render it to a PDF, complete with vector fonts and graphics. Because it's a PDF, it's super exact -- which it needs to be to print right and fit neatly on the shelf edge.
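That PDF step can be as simple as shelling out to PhantomJS with its bundled rasterize.js example script. A sketch, with assumed paths and paper size:

```python
# Render a web page to PDF by shelling out to PhantomJS.
# rasterize.js ships with PhantomJS as an example script; giving it
# a .pdf output filename and a paper format produces a vector PDF.
import subprocess

def page_to_pdf_cmd(url, out_path, paper="A4"):
    """Build the PhantomJS command line for rendering `url` to PDF."""
    return ["phantomjs", "rasterize.js", url, out_path, paper]

# In production, something like:
#   subprocess.run(page_to_pdf_cmd(talkers_url, "talkers.pdf"), check=True)
```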

It's much quicker this way.

My rule for Machine Supply, as a side project, is that it should take the minimum of my time, never feel like an obligation, and I should be able to manage it on the hoof. As a hobby, it should be Default Alive.

So automation is helpful. I like that this mode of generating PDFs can be done without my laptop: I can do everything from my phone, and print wirelessly.

Anyway. You should follow @5point9billion! It's fun, and you get a personalised animated GIF every time you pass a star, generated with the most ludicrous rendering pipeline ever.

Hardware-ish coffee morning tomorrow

[Short version: Coffee morning on Thurs 14 April in Old St! I sent this out to the coffee morning newsletter last week. Subscribe to the newsletter here.]

My Dearest Droogs,

We haven't had a hardware-ish coffee morning at all this year. I've had no Thursdays because I've been avec job for a few months earning coins. I know, I know, but it happens to all of us sometimes. Still, done with that now, and fingers crossed I can avoid gainful employment for a little while longer.

Let's hang out and drink too much coffee and talk about hardware! Same bat-time, same bat-channel:

Thursday 14 April, 9.30am for a couple of hours, at the Book Club (100 Leonard St).

Same old format... If you're curious about, or working in... designing physical things, paper or weird sensors, installations, knitting, manufacturing, internet-connected doodads, retail for hardware startups, sculpture, investment, or whatever, please come along.

There's no formal intros so it's easy to sneak off if everyone is horrendous. (They're mostly not.) Come say hi to me when you turn up, and we'll make sure you chat with interesting folks. (Most everyone is interesting.) Everyone loves prototypes, so bring em along if you have em. There are usually one or two.

If you're a woman, or don't present or identify as a dude, please do feel welcome. It's a concern to me that this tech industry, while very human and egalitarian in its early days (this goes for mainstream tech and hardware startups too) appears to heavily skew to Mainly Dudes as time goes on. That's something I can push against, a tiny bit, by trying to ensure these coffee mornings don't go the same way.

(On which serious note: If you don't feel you would be welcome - obviously or in hidden ways - at a hardware-ish coffee morning, and you'd be willing to share your feedback privately with ideas of how I could improve the format, I'd like to hear. My personal email is matt AT interconnected DOT org. Thank you!)

See you on the 14th!


ps. I've got a new hobby and it's a robot bookshop that tweets. You can visit it! Here it is. It's called Machine Supply.

Filtered for vending machines


30 bizarre vending machines from around the world.

Live hairy crabs, cupcakes, acne medication, a real live person who hands out sweets.

See also this ticket machine in Japan, where pressing the "help" button leads to an attendant appearing from a tiny hidden door. I can't figure out whether this video is real or not.


Beautiful vending machines that sell fresh salads. (Made for office lobbies.)

The Auto Store... Versatile and modular, ASC provides a retail platform that can be easily adapted to any retail environment, integrating sales and smart locker compartments.

Smart lockers are smart. See Doddle which now has concessions at train stations, allowing for e-commerce click and collect -- and also returns.


A brief history of book vending machines.

See also: The short story vending machine in Paris. Uses a receipt printer.

And not forgetting that the modern paperback was popularised by Penguin, together with a new form of distribution and the ability to sell books outside traditional bookshops.

The New Yorker on mass-market paperbacks:

More than a hundred and eighty million books were printed in the United States in 1939, the year de Graff introduced Pocket Books, but there were only twenty-eight hundred bookstores to sell them in. There were, however, more than seven thousand newsstands, eighteen thousand cigar stores, fifty-eight thousand drugstores, and sixty-two thousand lunch counters—not to mention train and bus stations.


The mass-market paperback was therefore designed to be displayed in wire racks that could be conveniently placed in virtually any retail space.

I like this:

You can’t tell a book by its cover, but you can certainly sell one that way.


Vending Machine (2009) by Ellis Harrison:

An old vending machine is reprogrammed to release free snacks only when search terms relating to the recession make the headlines on the BBC News RSS feed.

Related: Tim Hunkin's Novelty Automation: A new London arcade of satirical home-made machines -- and if you haven't visited already (it's near Holborn) you must, you must.

Filtered for quick links


A robot that sits on your back and feeds you tomatoes while you run.


By Mattel, toy race cars driven by live crickets.


Video: Star Wars by Ken Loach.


The bedrock under Manhattan and how it leads to taller buildings.

Lessons on finding flow

The following was written using The Most Dangerous Writing App, which deletes everything unless you type continuously for 5 minutes, on 29 February 2016 at 19:05. You get 5 seconds' grace. Discoveries are made. Output follows.

This reminds me of that game on Radio 4 where you have to speak continuously for one minute, with no hesitation, deviation, or repetition. Except here I don't think repetition matters. It's all about not stopping.

Which means maybe it's more like the movie Speed with Keanu Reeves where he couldn't slow the bus down below whatever it was, 40 mph, or otherwise it would blow up.


Go bang.

Or maybe, it occurs to me, it's more like that neuroscience experiment where you try to work through as many difficult challenges as possible for a whole minute. And the effort of that results in more blood flow to the brain, and because that's already a large amount of your oxygen usage anyway, that's detectable, and your head should be warmer, or you end up breathing faster, or something like that.

I don't remember.

The weird thing with this experiment is that it's not the paragraphs that are hard to figure out. I have enough time while I'm typing to choose something that comes next.

No. The problem is this:

It's when I get halfway through a sentence and I don't know exactly how to phrase what I want to say. So I usually pause for a second, delete, choose a different word. Or pause for longer, and in that gap go back and want to revise the previous sentence.

Which breaks my flow state. When I get lost in a particular word - a stutter if you like - I stop being able to think of what's happening in the next paragraph.

I feel that there's a lesson here in how I write usually.

Notes, discovered at this point 4 minutes in, that I need to remember for later, about how to write more fluently without using this app:

  1. I need to slow my writing down, in general, so that I can plan the next paragraph.
  2. I need to keep writing and keep moving forward. Don't go back, don't revise as I go. I can revise later, and that's editing. The point is to write without stopping.
  3. I need to capture this state without the app.

I've got to 5 minutes now, which is the stopping point, and already I find I have revised this sentence by deleting its second clause; I have gone back and added point 3 above, which wasn't there before; I am pausing slightly to second-guess myself.

So, lessons. Time to stop.

A week later

Looking back on what I wrote a week ago, I boil it down to this:

Writing and editing are separate tasks, and I should approach them in different ways and at different times.

I was only able to see this after finding flow for, what, four minutes. And this category of ideas that are only visible after some period of time, or some kind of journey... this is interesting to me.

I've been reading about scoring centuries in cricket and there's something resonant for me in those stories about getting to the magic 100: An individual game, every ball the same as the last but somehow not; a score made run by run. Don't think about the 100 when you start, just start. Every ball on its merits. Even the greats remind themselves to watch the ball every time one is bowled. You can't score runs from the pavilion.

Filtered for rambling thoughts


Position. How fast position changes is velocity. How fast velocity changes is acceleration. How fast acceleration changes is... jerk. Then snap, crackle, and pop.

Here it is explained:

So each one is a measure of how fast the previous one is going. Position is the location of your car, velocity is the speed of your car, acceleration is how hard you have the foot on the gas. Jerk is how fast your foot is moving on the accelerator, snap is how fast your foot is accelerating on the accelerator. It can be conceptually visualized as the pedal controlling the thing you're looking at as you just keep repeating it.


The thing is that large variations in 'snap' can be visible as "unnatural" or "uncanny" ... A very consistent 'snap', even when "jerk" is strongly controlled, can make things feel overly precise or planned. Imagine someone "doing the robot dance"
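The chain of derivatives is easy to play with numerically. A toy finite-difference version (my own illustration, not from the quoted explanation):

```python
# Position -> velocity -> acceleration -> jerk -> snap, each one a
# finite difference (rate of change) of the one before.
def rate_of_change(xs, dt=1.0):
    return [(b - a) / dt for a, b in zip(xs, xs[1:])]

position = [t**4 for t in range(8)]        # x = t^4, sampled at t = 0..7
velocity = rate_of_change(position)
acceleration = rate_of_change(velocity)
jerk = rate_of_change(acceleration)
snap = rate_of_change(jerk)                # for t^4, snap is constant: 4! = 24
```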


Scientifically accurate images of Earth's sky with Saturn's rings taking into account shadows and latitudes.

See also: Views of Jupiter from the top and bottom.

I was getting the tube back home the other day, just after sunset -- the sky was burnt umber, all deep orange and brown and shadowed. And because it was late, it was dim, that low light where it's not bright and not dark, but the clouds look painted on horizon, unlit. But so, yeah, it looked like a gas giant, a planet just hanging there.

And I thought... Well, Nasa have their Pluto time widget which tells you at what time of day it's the same brightness as it is around Pluto, for where you are. So I worked it out for Jupiter, where it's about 1/27th as bright as Earth -- which, it turns out, for London, on a winter's evening, is about 6pm.
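That 1/27th figure is just the inverse-square law: Jupiter orbits at about 5.2 AU, so sunlight there is 1/5.2² as strong as at Earth. A quick check:

```python
# Sunlight intensity falls off with the square of distance from the Sun.
jupiter_distance_au = 5.2            # Earth's distance = 1 AU by definition
relative_brightness = 1 / jupiter_distance_au ** 2
# ~0.037, i.e. about 1/27th as bright as at Earth
```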

So by coincidence, that day in the late afternoon, I was looking out at a sunset with the colours and pattern of Jupiter, just as bright as Jupiter itself would look, if I was hanging by it in orbit, gazing across its deep clouds and churning storms.

See also: Images of whole galaxies as if they were teeny-weeny.


Back in 2007, it used to be that tech startups were old Unix tools warmed over for the Web. grep is Google. finger is Facebook.

Then there was an era where tech startups were about individuals doing stuff publicly. YouTube, blogging, Twitter.

I think there's a similar, simple pattern now: There are a ton of big startups aimed at doing stuff your parents used to do for you.

Uber is being carted around. On-demand laundry and odd jobs is about having someone pick up after you. Food delivery is about being cooked for.

I know I have this preoccupation about being infantilised by brands -- cynically: modern coffee is a thin excuse for grown adults to drink hot sweet milk from a sippy cup.

But there's a difference between doing stuff for me (while I lounge in my Axiom pod), and giving me superpowers to do more stuff for myself, an online Power Loader equivalent.

And with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of "doing something for me" vs "allowing me to do even more" is going to get even more pronounced. Both are effective, but the first sucks... or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.

This user experience "stance" is similar to the dichotomy we see in Internet of Things consumer products: Is it me controlling the product with my smartphone, or does the product have smarts of its own? I favour the second. There are a lot of smart home gadgets that you need a phone to control. Fine.

But when you use Sonos speakers you find that they connect to the streaming music services across the Internet themselves. You ask the speaker to "tune in" to music using your phone. Then somebody else can use their phone to adjust the track, change the album, whatever.

The difference between these two stances sounds minuscule and academic... but one approach leaves the product diminished, no more than a physically rendered version of an app. And the Sonos approach allows the speaker to stand alone, and consequently become more social and more part of the home.

I don't know how to refer to this design challenge (in Internet of Things, in artificial intelligence) except as stance. There must be a better way of talking about it.


Some more imagery...

Beautiful photos of Tokyo.

Geometrica, patterns by Guy Moorhouse.

The gorgeous landscapes of Grand Theft Auto V.