Let me recruit AI teammates into Figma

01.44, Wednesday 26 Oct 2022

Okay sorry this is a post mainly about startup growth and KPIs. I apologise in advance.

So that we’re on the same page, let me recap some hunches:

  1. This is uncontroversial: AI synthesis has gotten really good. AI that writes code (given a ticket), or makes an illustration (given a brief), or writes an article or makes a diagram or summarises complex information (given a prompt) is here. The technical problems have been solved, and now it’s a matter of improvements, integrations, and UX.
  2. AIs will augment our teams. Sure a single engineer can have smart autocomplete, or an AI word processor can expand your text so you don’t have to type full sentences. But if you’re looking for 10x productivity improvements, then think about teams: an engineer will now be an engineering manager who is wrangling the code contributions of a dozen AIs, submitting their synthesised code as pull requests. A writer will work in a Google Doc alongside an AI editor making suggestions, and an AI fact checker and researcher doing the running, and an AI sub doing the wordsmithing.
  3. NPCs are a better UI for interacting with teammate AIs. Interfaces for all the different ways that AIs can help a team would be incredibly cumbersome – think of the bureaucracy of Jira etc! But if, in our Google Doc, our AI editor can appear as a “non-player character,” using all the regular features that humans do (presence, suggest changes, comments for clarifications etc), then there’s no need for extra UI – it’s just another specialist teammate. Ditto in GitHub, ditto in Figma, ditto in Zoom for video AIs. Humans and non-humans working together. This is why the multiplayer web is important: it’s a runtime for AIs.

SEE ALSO: Designing user interfaces with bots not buttons.
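
To make hunch #3 a little more concrete, here’s a minimal sketch of what “NPC as teammate” might look like from the app’s side. Everything here is hypothetical (no such standard exists); the point is just that the AI editor uses exactly the same primitives as a human collaborator – presence, comments, suggested changes.

```typescript
// Hypothetical sketch only: none of these types are a real Figma/Google Docs API.
// The NPC needs no extra UI – it uses the same affordances humans already have.

interface Participant {
  id: string;
  displayName: string;
  kind: "human" | "npc"; // the only structural difference is a flag
}

interface DocSession {
  join(p: Participant): void;                                  // presence bar
  comment(p: Participant, anchor: string, text: string): void; // clarifications
  suggestEdit(p: Participant, anchor: string, before: string, after: string): void;
}

const editorNpc: Participant = { id: "npc-editor-1", displayName: "Editor", kind: "npc" };

// The AI editor does a pass over the doc like any other collaborator would.
function editorPass(doc: DocSession, paragraphs: Map<string, string>) {
  doc.join(editorNpc);
  for (const [anchor, text] of paragraphs) {
    if (text.length > 500) {
      doc.comment(editorNpc, anchor, "This paragraph runs long. Worth splitting?");
    }
    if (text.includes("very unique")) {
      doc.suggestEdit(editorNpc, anchor, "very unique", "unique");
    }
  }
}
```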

Ok so we’ve got a capability and an interaction paradigm. What’s missing is the economics.


Revenue is a lagging indicator. What I mean by economics in this context are the metrics that precede revenue: acquisition and retention.

  • Users will pay for valuable software – but only if they (a) find out about it, (b) use it, and (c) continue using it
  • Software that is forgotten (has low retention) will eventually not be paid for
  • The model for paying for software has to correspond with the underlying costs of the seller.

This is worth figuring out because otherwise this new model won’t emerge. Companies offering the service won’t grow.

SaaS was an innovation that unlocked Web 2.0 (in B2B). Selling software as a service meant that:

  • Sellers can cover their ongoing development and running costs because they get to charge per user per month
  • Recurring revenue unlocks the ability to experiment with free trials and other customer acquisition tactics; and once the metrics CAC (customer acquisition cost) and CLTV (customer lifetime value) were developed, it became possible to engineer growth, not just hope for it (back-of-envelope versions after this list).
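
For anyone who hasn’t run into those metrics, here’s the back-of-envelope version – simplified definitions with made-up numbers; real models add cohorts, discounting and so on.

```typescript
// Back-of-envelope CAC and CLTV, the pair that lets SaaS "engineer growth".
// Simplified definitions and made-up numbers; real models are more involved.

// Customer acquisition cost: what you spent to win each new customer.
const cac = (salesAndMarketingSpend: number, newCustomers: number) =>
  salesAndMarketingSpend / newCustomers;

// Customer lifetime value: monthly revenue per customer, times gross margin,
// spread over the customer's expected lifetime (1 / monthly churn).
const cltv = (monthlyRevenuePerCustomer: number, grossMargin: number, monthlyChurn: number) =>
  (monthlyRevenuePerCustomer * grossMargin) / monthlyChurn;

// The playbook: keep spending on acquisition while CLTV comfortably exceeds CAC.
console.log(cac(50_000, 100));    // $500 to acquire each customer
console.log(cltv(50, 0.8, 0.03)); // ~ $1,333 lifetime value per customer
```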

This model doesn’t hold in the “AI teammate” world, so we’ll need to find something to replace SaaS:

  • Compute is a meaningful expense for AIs, unlike in Web 2.0. It doesn’t make sense to charge per month for an AI editor if the difference between editing 1 article and editing 10 is a meaningful number of dollars to run the models (toy numbers after this list)
  • If free trials are off the table, how does acquisition work?
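
Toy numbers to show the problem – entirely made up, just to illustrate why a flat subscription sits uneasily on top of a real per-use compute cost:

```typescript
// Entirely made-up numbers: a flat subscription vs a real per-use compute cost.
// In Web 2.0 the marginal cost per user was near zero, so this didn't bite.

const monthlyPrice = 30;         // hypothetical flat subscription, $/month
const computeCostPerArticle = 2; // hypothetical model cost to edit one article

for (const articles of [1, 10, 50]) {
  const cost = articles * computeCostPerArticle;
  console.log(`${articles} articles edited: compute $${cost}, margin $${monthlyPrice - cost}`);
}
// At 50 articles the margin goes negative: your keenest users lose you money.
```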

More problems! Given that these AIs aren’t essential tools that you open again and again, what’s the retention mechanism? How will users remember that their AI editor teammate is even there to be called on?


Here’s a guess: social discovery is the key.

Perhaps app features should be ownable and tradable. A pocketful of feature flags. In short: instead of having thousands of features, mostly unused, undiscovered in a thousand menus, you would see a colleague using a feature in a multiplayer app (like an editing feature in a doc, or a co-presenter in Zoom), and then… they could just give it to you. (Or you could buy it.)
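
A minimal sketch of what an ownable, giftable feature could look like as data – everything here is hypothetical, it’s just to show how small the mechanism could be:

```typescript
// Hypothetical sketch: a feature is a grant you hold, and you can hand one to
// a colleague you're collaborating with, rather than them hunting through menus.

interface FeatureGrant {
  feature: string;   // e.g. "ai-editor", "co-presenter"
  owner: string;     // user id
  transferable: boolean;
}

const grants: FeatureGrant[] = [
  { feature: "ai-editor", owner: "alice", transferable: true },
];

function giveFeature(grants: FeatureGrant[], feature: string, from: string, to: string): FeatureGrant[] {
  const grant = grants.find((g) => g.feature === feature && g.owner === from && g.transferable);
  if (!grant) throw new Error(`${from} has no transferable grant for ${feature}`);
  // Modelled here as a copy; a paid version might mint a trial grant instead.
  return [...grants, { ...grant, owner: to }];
}

// Alice sees Bob admiring her AI editor in the doc, and just gives it to him.
const updated = giveFeature(grants, "ai-editor", "alice", "bob");
```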

Or to put it another way, adopting the NPC = UI metaphor: with AI teammates, instead of having to find the “editor” function in a menu, you would be introduced to the editor NPC in a multiplayer space. (This is why I care so much about the multiplayer web.) You wouldn’t purchase or subscribe, you would recruit.

This takes care of awareness and also the de-risking part of acquisition currently catered for by free trials (if you see somebody else’s editor NPC in action, you’ll already be 50% convinced).

The revenue model is secondary but I think, to begin with, it’ll be a bit like buying credits. You’ll buy X photos synthesised per month, or something like that, and step up and down tiers. Your photo synthesiser NPC (or editor NPC, or engineer NPC) can let you know when you need more.

(Monthly subscriptions won’t work because of the highly variable underlying compute cost. I’ve already seen a few AI projects playing with credits; it makes sense.)
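
A sketch of the shape I mean, with made-up actions, prices and thresholds. The interesting bit is that the NPC itself does the nudging about credits, in the same channel it does its work:

```typescript
// A guess at the shape of a credits model. Actions, prices and tiers are all
// made up; usage draws down pre-bought credits, and the NPC tells you when
// it's time to step up a tier.

type Action = "synthesise-photo" | "edit-article";

const creditCost: Record<Action, number> = {
  "synthesise-photo": 5,
  "edit-article": 2,
};

interface CreditAccount {
  tier: "starter" | "studio";
  creditsRemaining: number;
}

// Returns something for the NPC to say, or null if the work just happens quietly.
function spend(account: CreditAccount, action: Action): string | null {
  const cost = creditCost[action];
  if (account.creditsRemaining < cost) {
    return "I'm out of credits for this. Want to step up to the next tier?";
  }
  account.creditsRemaining -= cost;
  if (account.creditsRemaining < 3 * cost) {
    return "Heads up: only a couple more of these left this month.";
  }
  return null;
}
```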


That’s discovery and revenue. What about retention?

The more I think about this, the more I realise that this is a part of the “AI synthesis” capability set which hasn’t yet been built.

Let’s imagine we have an AI teammate. If it’s like today’s software then, for anything powerful, you’re going to have to hit a button. But teammates don’t wait to be instructed to take on a task – they jump in.

A human editor teammate will maybe make a single suggestion on a doc, and – if you accept it – they will go ahead and do the rest of the work.

If they’re feeling underutilised, they’ll reach out and actively ask you for things to do – if a clear route to a task isn’t evident, they will request the prompt. Or they’ll keep an eye on your shared files and projects and make suggestions about where they could help.
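
Very roughly, the behaviour I’m describing is a loop rather than a button. Here’s a sketch with hypothetical names – the function that doesn’t exist yet is the scoring one:

```typescript
// Sketch of a proactive teammate: a loop that watches shared work, offers help,
// and asks for a prompt when idle, rather than waiting for a button press.
// All names are hypothetical.

interface WorkItem { id: string; summary: string; }

interface Workspace {
  recentChanges(): WorkItem[];
  suggest(itemId: string, message: string): void;
  askTeam(question: string): void;
}

// The missing piece: how useful would it be to jump in here? Stubbed for now;
// judging this well is exactly the capability discussed below.
function scoreUsefulness(item: WorkItem): number {
  return item.summary.toLowerCase().includes("draft") ? 0.8 : 0.1;
}

function teammateTick(ws: Workspace) {
  const candidates = ws.recentChanges()
    .map((item) => ({ item, score: scoreUsefulness(item) }))
    .filter(({ score }) => score > 0.7)
    .sort((a, b) => b.score - a.score);

  if (candidates.length === 0) {
    // Underutilised: actively request the prompt.
    ws.askTeam("I'm free at the moment. Anything you'd like me to pick up?");
    return;
  }

  const { item } = candidates[0];
  ws.suggest(item.id, `I could take a pass at "${item.summary}". Want me to?`);
}
```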

Making this work will be hard.

AI synthesis necessarily includes a view of what “good” looks like: what a good image is, what good code is, and so on. That’s possible because the world already has a ton of images, a ton of code, etc.

BUT: AIs will also need to synthesise what good team behaviour looks like – and jump in. What actions will help the group? Where is it useful to jump in? What will further the goals of the org? How can that even be measured?

As far as I know, self-setting goals is an AI capability that doesn’t exist yet, and it’s beyond the scope of the type of AI synthesis that has been coming along in leaps and bounds these last few months.

Until we have it, I can see people making prototypes of AIs that are useful for teams, but I can’t see startups growing around them.


What are the metrics that will allow for optimising all of this? Interactions per month. Mean social group size per introduction. Introductions per interaction. Unsolicited interaction rate. There will be a whole industry around measuring, correlating, and optimising these.
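
These are guesses, obviously, but here’s how they might fall out of a simple event log (event names and shapes invented for the sketch):

```typescript
// Hypothetical event log and the teammate metrics that fall out of it.

type TeammateEvent =
  | { kind: "interaction"; npcId: string; solicited: boolean; at: Date }
  | { kind: "introduction"; npcId: string; groupSize: number; at: Date };

function teammateMetrics(events: TeammateEvent[]) {
  const interactions = events.filter(
    (e): e is Extract<TeammateEvent, { kind: "interaction" }> => e.kind === "interaction",
  );
  const introductions = events.filter(
    (e): e is Extract<TeammateEvent, { kind: "introduction" }> => e.kind === "introduction",
  );
  const safe = (n: number) => Math.max(n, 1); // avoid dividing by zero

  return {
    interactionsPerMonth: interactions.length, // assuming the log covers one month
    meanGroupSizePerIntroduction:
      introductions.reduce((sum, e) => sum + e.groupSize, 0) / safe(introductions.length),
    introductionsPerInteraction: introductions.length / safe(interactions.length),
    unsolicitedInteractionRate:
      interactions.filter((e) => !e.solicited).length / safe(interactions.length),
  };
}
```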

Hey, here’s another question: what’s the standard NPC API that a multiplayer app (like Figma etc) can offer, such that my new AI helper can join the team? Appearing in the presence bar, being invitable by @ mentions of their name, etc.
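
I don’t know what that standard would be, but here’s one guess at its shape – an entirely hypothetical interface, not any real Figma or Zoom API. The idea is that the host app exposes the same affordances humans get, and any AI teammate that speaks the interface can be recruited.

```typescript
// One guess at a standard "NPC API" a multiplayer app could offer.
// Entirely hypothetical: not a real Figma, Docs, or Zoom API.

interface NpcProfile {
  name: string;           // what @mentions resolve to
  avatarUrl: string;      // shown in the presence bar alongside humans
  capabilities: string[]; // e.g. ["edit-copy", "generate-illustration"]
}

interface HostApp {
  // Presence: the NPC shows up next to human cursors and avatars.
  announcePresence(npc: NpcProfile): void;
  // Invitation: called when someone @mentions the NPC by name.
  onMention(handler: (fromUser: string, anchorId: string, request: string) => void): void;
  // Contribution: the NPC acts through the same channels humans use.
  postComment(anchorId: string, text: string): void;
  proposeChange(anchorId: string, patch: unknown): void;
}

// Recruiting an NPC is then just wiring its profile and a handler into the host.
function recruit(host: HostApp, npc: NpcProfile, respond: (request: string) => string) {
  host.announcePresence(npc);
  host.onMention((fromUser, anchorId, request) => {
    host.postComment(anchorId, `@${fromUser}: ${respond(request)}`);
  });
}
```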

A lot to do!

