A fourth one-off never-to-be-etc Acts Not Facts weeknote

12.01, Monday 27 Nov 2023

Did a talk about design and AI; missed a self-imposed deadline.

Coming up this week

  • PartyKit. Ship the new website, do some planning, finish the demo I’m working on that got eaten last week by Covid brain. And push along customer development: there’s a table in Notion now.
  • Client who cannot be named. Share the strategy with leadership and nudge forward areas of interest. Doesn’t sound like a big deal. Is.

Common thread: spinning up processes.

Processes don’t exist except as artefacts (Notion tables and Trello boards) and habits (recurring meetings in Google Calendar). Also in the general intellect of the team.

None of these can be touched directly. So what you do is you speed-run portions to create muscle memory, nudge parts, and when something happens that is randomly in the right direction, amplify that by being noisy about it – organisational change by LARPing a Maxwell’s demon of workplace activity.

Of course everyone else is doing the same. It’s a collaborative effort! I enjoy it.

Media last week

None.

Also going on

Spent Sunday afternoon doing accounts.

AI Clock. Got agreement on the changes I need to the core component. Prices for that will come back soon. “We will reply to you next week. :) Best regards.” Good good.

Another update, also last week, from another of the production partners: there’s strong confidence that the product can come in at budget.

It doesn’t matter either way. It’s Thanksgiving in the US which means I have missed my self-imposed deadline for launching the Kickstarter. It’ll have to be January now. Colour me disappointed but reconciled.


The internal talk on design and AI last week went well. Maybe. It’s hard to tell over Zoom.

As my starting point I took the subtitle from Ted Nelson’s 1974 Computer Lib/Dream Machines: You can and must understand computers NOW.

Nelson coined the term hypertext and published Computer Lib just 6 years after Engelbart’s team’s demo of the first personal computer. What if computers could be used for creativity? said Nelson. So the book is a tumult of ideas but also explanations of what device peripherals are, and how shift registers work, and what code looks like, and so on. Demystifying.

Neither is AI a magical mystery.

The unspoken sense that maybe AI can do anything, and therefore that there must be unapproachable complexity, gets in the way of building and imagining with it.

So my goal was to demystify large language models. The basics rather than whoa, AI, isn’t it amazing.

Demystifying by example:

  • Showing my side project Braggoscope and how it uses AI to do data extraction (cutting about 4 days’ work down to 20 minutes)… then showing the short prompt that makes that work. (There’s a sketch of the technique after this list.)
  • Showing Braggoscope semantic search, which is surprisingly good: there’s a GIF here in the GitHub repo. It works using embeddings (sketched after this list too), so I showed what a vector looks like, then talked through Amelia Wattenberger’s concrete/abstract text editor from Getting creative with embeddings.
  • Starting from a toy example that combines these basic Lego bricks, I moved on to Retrieval-Augmented Generation (RAG) (Meta machine learning blog, 2020) and how this technique is ultimately also a simple combination of basic parts, and it underpins all AI chat assistants. (Again, there’s a sketch below.)
  • Looking at user experience examples: the smart code autocomplete in GitHub Copilot and the prompt-steerable menu commands in Replit AI are both chat assistants that don’t look like chat. They have better affordances than an empty chat window. What other UX leaps can we make?
  • Here’s one. Rupert Manfredi’s LangView (here’s a video on X/Twitter and here’s the code from Mozilla): Manfredi shows that it is possible for the AI to output data, and have that hydrate a list component, or map component, or any other UI component that has been carefully designed with human hands. The best of both worlds. (We looked at the prompt – it’s clever and, yet again, short. A sketch of the pattern is after this list.)
  • Finally, Make Real by whiteboard app tldraw: turn sketches into working websites by hitting a button and prompting the GPT-4 Vision API to do the work. You have to check out all of these Make Real demos (tldraw Substack). Again, we can look at the prompt – it’s dead simple.
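
To make the data-extraction bullet concrete, here’s a rough sketch of the technique. This is not Braggoscope’s actual prompt or code, just the shape of it: the OpenAI Node SDK, a made-up extractEpisode helper, and a system prompt that asks for JSON in a fixed shape.

    // A sketch of prompt-driven data extraction. extractEpisode is a
    // hypothetical helper: give it the scraped text of an episode page,
    // get structured fields back as JSON.
    import OpenAI from "openai";

    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    async function extractEpisode(pageText: string) {
      const completion = await openai.chat.completions.create({
        model: "gpt-4",
        messages: [
          {
            role: "system",
            content:
              "Extract the episode title, first broadcast date (YYYY-MM-DD) and a list " +
              "of contributors from the text. Reply with JSON only, in the shape " +
              '{"title": string, "date": string, "contributors": string[]}.',
          },
          { role: "user", content: pageText },
        ],
      });
      return JSON.parse(completion.choices[0].message.content ?? "{}");
    }

Run something like that over every page in a scrape and you have a structured dataset. That’s the whole trick.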
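
And the embeddings point, sketched. Again illustrative rather than the real Braggoscope code: embed the query, compare it against pre-computed document embeddings with cosine similarity, keep the top few.

    // Semantic search in miniature: a query vector, document vectors, and
    // cosine similarity to rank them. Model and helper names are illustrative.
    import OpenAI from "openai";

    const openai = new OpenAI();

    async function embed(text: string): Promise<number[]> {
      const res = await openai.embeddings.create({
        model: "text-embedding-ada-002",
        input: text,
      });
      return res.data[0].embedding; // a long list of floats, i.e. a vector
    }

    function cosineSimilarity(a: number[], b: number[]): number {
      let dot = 0, normA = 0, normB = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
      }
      return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Rank pre-embedded documents against a query and keep the closest five.
    async function search(query: string, docs: { title: string; embedding: number[] }[]) {
      const q = await embed(query);
      return docs
        .map((d) => ({ title: d.title, score: cosineSimilarity(q, d.embedding) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, 5);
    }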
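
RAG is then those two bricks clicked together. A minimal sketch, reusing the openai client, embed() and cosineSimilarity() from above, with prompt wording that is mine rather than anything canonical: retrieve the most relevant passages, paste them into the prompt, ask the model to answer from that context only.

    // Minimal retrieval-augmented generation, building on the helpers above.
    async function ragAnswer(
      question: string,
      docs: { text: string; embedding: number[] }[]
    ) {
      const q = await embed(question);
      // Retrieval: pick the three passages closest to the question.
      const context = [...docs]
        .sort((a, b) => cosineSimilarity(q, b.embedding) - cosineSimilarity(q, a.embedding))
        .slice(0, 3)
        .map((d) => d.text)
        .join("\n---\n");
      // Generation: answer the question using only the retrieved passages.
      const completion = await openai.chat.completions.create({
        model: "gpt-4",
        messages: [
          {
            role: "system",
            content:
              "Answer the question using only the provided context. " +
              "If the answer is not in the context, say you don't know.",
          },
          { role: "user", content: `Context:\n${context}\n\nQuestion: ${question}` },
        ],
      });
      return completion.choices[0].message.content;
    }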
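
And the LangView pattern in miniature. This is my sketch of the idea, not Manfredi’s code, reusing the same client as above: ask the model for data in a fixed shape, then pour it into a component a human designed, rather than showing raw chat text.

    // Structured output hydrating a designed component. The shape and helper
    // are hypothetical; the point is that the model returns data, not prose.
    type ListItem = { title: string; subtitle: string };

    async function askForList(request: string): Promise<ListItem[]> {
      const completion = await openai.chat.completions.create({
        model: "gpt-4",
        messages: [
          {
            role: "system",
            content:
              "Respond only with a JSON array of items shaped like " +
              '{"title": string, "subtitle": string}. No other text.',
          },
          { role: "user", content: request },
        ],
      });
      return JSON.parse(completion.choices[0].message.content ?? "[]");
    }

    // The result feeds a hand-built list component, e.g. in React:
    // <ResultsList items={await askForList("day trips within an hour of the city")} />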

The Make Real prompt is pretty hilarious:

You are an expert web developer who specializes in building working website prototypes from low-fidelity wireframes.

Your job is to accept low-fidelity wireframes, then create a working prototype using HTML, CSS, and JavaScript, and finally send back the results.

The results should be a single HTML file.

You love your designers and want them to be happy. Incorporating their feedback and notes and producing working websites makes them happy.

When sent new wireframes, respond ONLY with the contents of the html file.

In the AI world we call that alignment lol.
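
For the curious, the call behind the button is roughly this shape. A sketch, not tldraw’s actual code, assuming the OpenAI Node SDK and the gpt-4-vision-preview model: send the system prompt above plus a snapshot of the canvas, get a single HTML file back.

    // A Make Real-style request, sketched: system prompt + wireframe image in,
    // one HTML file out. Helper and variable names are made up.
    import OpenAI from "openai";

    const openai = new OpenAI();

    async function makeReal(systemPrompt: string, wireframePngBase64: string) {
      const completion = await openai.chat.completions.create({
        model: "gpt-4-vision-preview",
        max_tokens: 4096,
        messages: [
          { role: "system", content: systemPrompt },
          {
            role: "user",
            content: [
              { type: "text", text: "Here are the latest wireframes." },
              {
                type: "image_url",
                image_url: { url: `data:image/png;base64,${wireframePngBase64}` },
              },
            ],
          },
        ],
      });
      return completion.choices[0].message.content; // the HTML, per the prompt
    }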

So that was my part of the talk.

The next step from showing examples is to get your hands dirty.

Rolling your sleeves up is really the only way to build good intuitions.

We didn’t make it all the way to hands-on in the session, but one colleague did an amazing bit about the value of sketching and other design artefacts, and then another colleague ran a deft mini-workshop to sketch up AI ideas. With some great output!

It is daunting, maybe, for non-engineers, to unpack AI like this.

But as designers and product people we all have a pretty good working understanding of, say, what can be done with HTML, or how apps work. The same level of understanding is needed for large language models and generative AI generally: not how it works under the hood, but a familiarity with the material, what you can and can’t do with it at scale, and how to combine different techniques.

So daunting but necessary.


If you enjoyed this post, please consider sharing it by email or on social media. Here’s the link. Thanks, —Matt.