Today, a post from my day job.
Back in May I was invited to speak about large language models at the board strategy day of the BMJ.
The BMJ publishes the British Medical Journal and 60+ other scientific journals, together with a number of technology products in the health sector. As an organisation, they’re a few hundred people.
So this is a fascinating and actually pretty common organisation profile. Mid-sized, technical but not a pure technology company. And they’re asking themselves: will AI impact us? If so, how should we respond?
As it turns out, it was obvious even in May that generative AI would matter to the BMJ, and the session I was part of - run by CTO Ian Mulvany - was about the second part of the question: what’s the strategy?
My part of the session ended with an approach I call strategic pathfinding. A high-level framework, really.
I caught up with Ian (his homepage) last week. The approach, it turns out, holds up. So with his permission, I’m sharing it here.
Strategic pathfinding in a capability overhang
See the framework: the slide is on the project page at Acts Not Facts.
Should you wait and see? Or jump in? I’d privately polled a number of other companies about their strategies and heard the entire spread, from a blanket ban on using generative AI, to commissioning a third-party consultancy to attempt to replicate team workflows using large language models.
My point of view is that
- we’re in a capability overhang – the AI tech that already exists has huge potential impact, whether you engage or not, so get ahead by exploring
- the appropriate approach is pathfinding, which uses experiments to learn and, critically, artefacts to tell the organisation what to do next.
The framework breaks down the different activities of pathfinding.
1. Search for value systematically
There will be opportunities across the whole business, from ops to the product suite.
Don’t just build the first, most obvious gen-AI prototype, but engage with every team and department to surface ways to save time, reach new audiences, or provide new services.
Run this as an ongoing and highly collaborative process.
Document and disseminate what you build and what you learn: the pathfinding approach is not about learning for an isolated and small AI team, but making the entire organisation more capable.
Be guided by the overall strategy, not by AI itself. The potential of the technology is too unbounded to be a guide on its own, but you may find within it new ways of reaching your goals.
2. Build internal knowledge and spark the imagination
One of the challenges is recognising opportunities: most organisations have two decades of digital knowledge telling them what’s easy and what’s hard. A lot of that no longer applies. And AI is seen as magical, or mysterious, or a threat.
Maintain a practice of sharing external examples and quick prototypes built with AI tools. This will lift general knowledge and demystify. Share null results too: it’s important to know what AI can’t do.
3. Red teaming
Red teaming is the process of actively understanding threats, as in the famous GPT-4 System Card (as previously discussed), which showed how OpenAI’s latest large language model could be used for dangerous outcomes.
In this context we want to document AI-related threats very broadly,
- from: increased imposter calls to contact centres, targeting PII
- to: how will the website get traffic when a user’s point of first intent has switched from Google to ChatGPT?
(Thinking about how to deal with threats may also reveal opportunities.)
On the second point: even if you’re not planning to use AI right now, your customers are.
4. Build capabilities to make invention cheaper
Experimenting needs a quick cycle time. Along the way, identify and build new enabling capabilities to:
- make future experiments easier
- allow for pilots with future AI-related suppliers.
(For example, LLMs and computer vision mean that there is a lot of data that didn’t previously look like data. Indexing it needs a new internal API.)
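To make that concrete, here’s a minimal sketch in Python of what such an internal API could look like, using sentence embeddings to make previously-opaque text searchable. The model choice, the example documents, and the in-memory index are all illustrative assumptions, not anything the BMJ actually runs:

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Illustrative model choice; any sentence-embedding model would do.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical examples of "data that didn't previously look like data".
documents = [
    "Call transcript: customer asked about a delayed subscription renewal.",
    "Editorial meeting notes: planning a special issue on antimicrobial resistance.",
    "Alt text: chart of article submission volumes by journal, 2019-2023.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2) -> list[str]:
    """The 'new internal API': semantic lookup over previously unindexed text."""
    query_vector = model.encode([query], normalize_embeddings=True)
    scores = (doc_vectors @ query_vector.T).ravel()  # cosine similarity
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

print(search("why are customers phoning the contact centre?"))
```

The point isn’t this particular stack: it’s that once the embeddings exist, a one-function internal API is enough for other teams to start experimenting with data they previously couldn’t touch.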
5. Understand needs by sharing both internally and with partners
Prototypes and experiments do not need to be taken to production. In a build-vs-buy discussion, for an organisation whose core competence is elsewhere, you shouldn’t be building foundational tech.
But how will you identify value when it arrives? There will be hundreds of well-funded startups and dozens of consultancies with deep pockets and impressive AI demos, all knocking at the door.
Running experiments means you’re understanding your own requirements – which means you’ll spot the valuable suppliers and can ask smart questions.
I’d go further: be open and noisy about your experiments, and suppliers will pre-emptively change their roadmaps, come to you, and let you in before their work is public. And there’s a competitive advantage to being first.
6. Governance – a bonus point not on the slide
AI tools are becoming available fast. Whether or not you have a centralised AI approach, individual team members will be using AI to create job descriptions, summarise documents and emails, make illustrations, write code, and so on.
When I talked recently with Ian, he told me about the BMJ’s AI governance approach. This allows for permissionless innovation with AI in workflows for everyone in the organisation, by providing simple guidelines describing
- when it’s ok to use AI tools
- when to absolutely not
- when to be cautious (and who to escalate the question to)
These guidelines will differ for every org. I found it really smart that the BMJ is engaging with the benefits and risks of AI tools like this, and with the reality that yes, people are already picking them up to help do their jobs.
Of course it’s easier to talk about strategy than to do it.
The reality of strategic pathfinding for AI is that it comes down to
- real people and objectives
- meetings, workshops, slide decks, and Trello boards, alongside often more urgent work
- figuring out new, ongoing processes
- ultimately, rolling your sleeves up and doing the work.
…all of which will become very tactical and very specific to your org as soon as you start, and will evolve fast.
But it’s worth it. So just get started, because you’ll figure it out faster by doing than by thinking.
My hope is that this framework provides a useful starting point.