If a web server runs websites then a corporation server…?

12.47, Thursday 13 Mar 2025

Two of my old app ideas have recently become buildable! Which is neat.

They’re very different… but, it turns out, connected by the principle of homeostasis – the ability to be self-stabilising.

And that, in turn, gives me a new perspective on self-driving companies.

Old idea #1. If Ayn Rand designed your email app

Here’s a quote from Rand: “The question isn’t who is going to let me; it’s who is going to stop me.”

(She invented Objectivism (Wikipedia), a philosophy based around the concept of man as a heroic being.)

And so back in 2011, this was the idea:

At the moment, my email client defaults to doing nothing, and I must intervene to create action (i.e. write a reply).

But if I had an Objectivist email app, it would automatically respond to all emails with stock enabling and forceful replies after a period of (say) 15 minutes, and I would have to intervene if I wanted it to not do that.

So… you could actually build that nowadays? With AI agents able to peep at your email plus some extra code.
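Just for fun, here’s a minimal sketch of what that extra code might look like. Everything in it is invented: draft_randian_reply stands in for an LLM call, send for a real mail client, and the 15-minute grace period is the whole point.

    import time
    from dataclasses import dataclass

    GRACE_PERIOD_SECONDS = 15 * 60  # the human gets 15 minutes to intervene

    @dataclass
    class Email:
        sender: str
        subject: str
        received_at: float  # unix timestamp
        replied: bool = False
        vetoed: bool = False  # set True if the human steps in

    def draft_randian_reply(email: Email) -> str:
        # Stand-in for an LLM call; any agent framework's text-generation step would do
        return f"Re: {email.subject}. Yes. Do it now. Who is going to stop us?"

    def run(inbox: list[Email], send) -> None:
        """Default to action: reply automatically unless the human intervenes in time."""
        while True:
            now = time.time()
            for email in inbox:
                overdue = (now - email.received_at) > GRACE_PERIOD_SECONDS
                if overdue and not email.replied and not email.vetoed:
                    send(email.sender, draft_randian_reply(email))
                    email.replied = True
            time.sleep(60)  # poll once a minute

The interesting bit is the veto flag: the human’s job flips from creating action to occasionally preventing it.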

I’m not saying that it’s a good idea necessarily! Just that you wouldn’t have been able to build it before, and now you can.

And secretly, yes, I want it.

btw I have a special RSS feed to surface blog posts on this day and it’s fun and often serendipitous to re-discover old posts like this.

Old idea #2. Self-driving companies should be default alive

When I wrote about self-driving companies it felt so speculative.

I reckon I could write an operations manual for a micro agency pretty easily. We had a “choose your own adventure” style operations manual at BERG in the form of linked checklists. Like, “is it a Friday? Do bank reconciliation. Is it the last Tuesday of the month? Then process holidays, process employee changes, run payroll,” and so on. It built up over time.

But what if the “agency in a book” could be software? What if the checklist was actually a set of forms, and the forms actually filed the paperwork?

Software-defined companies!

Which is an appealing idea! Imagine one person could run a big company to do… whatever… with all the drudgery handled by software automation. Doable today! With a bunch of work, sure. But like, just work, not a fundamental technology breakthrough. “Robotic process automation,” they call it in the enterprise world.
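To make that concrete, here’s a sketch of one page of the “agency in a book” as code. The task names are just the BERG checklist examples from above, and printing a reminder stands in for the automation (or form) that would actually do the work.

    from datetime import date, timedelta

    def is_last_tuesday(d: date) -> bool:
        return d.weekday() == 1 and (d + timedelta(days=7)).month != d.month

    # Date-triggered rules: (does this rule fire today?, tasks to do if so)
    RULES = [
        (lambda d: d.weekday() == 4, ["bank reconciliation"]),  # every Friday
        (is_last_tuesday, ["process holidays", "process employee changes", "run payroll"]),
    ]

    def tasks_for(today: date) -> list[str]:
        return [task for fires_today, tasks in RULES if fires_today(today) for task in tasks]

    if __name__ == "__main__":
        for task in tasks_for(date.today()):
            print("TODO:", task)  # swap the print for code that files the actual paperwork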

But current work in company automation is focused on performing tasks, e.g. Asana AI teammates that jump into your ticketing systems and help out. Like, these are great labour saving devices. But such a company is not self-driving because it’s unaware of its bigger goals. Less self-driving and more automatic gear-shift.

The way I see it: companies are default dead. I’m using Paul Graham’s framing re startups. When he sees a startup, he asks:

Assuming their expenses remain constant and their revenue growth is what it has been over the last several months, do they make it to profitability on the money they have left? Or to put it more dramatically, by default do they live or die?

If I start a company in the UK today, and then do nothing, it’ll get automatically struck off the register as soon as I miss the next annual confirmation statement filing. (I assume there’s something similar in other countries.) Default dead.

So the minimum viable self-driving company is one that files its annual statement. Now it is default alive.

I mean, it won’t be doing very much. But it’ll live indefinitely.

Then everything else is embellishment and feedback loops around that core: give it a bank account and the ability to receive money, and now it needs some automation to file and pay taxes. To not die. Give it a text form so you can ask it to spend money: it would refuse to do so if it didn’t anticipate having that money to spend.
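Here’s a sketch of that core, with made-up numbers and a hypothetical filing call. The one non-negotiable job is the confirmation statement; the spend gate is just a crude cash-flow extrapolation.

    from datetime import date, timedelta

    FILING_BUFFER_DAYS = 14  # file comfortably ahead of the deadline, not on it

    def heartbeat(today: date, statement_due: date, file_confirmation_statement) -> None:
        """The minimum viable self-driving company: never miss the filing that keeps it alive."""
        if statement_due - today <= timedelta(days=FILING_BUFFER_DAYS):
            file_confirmation_statement()  # hypothetical call out to a filing service

    def approve_spend(amount: float, balance: float, monthly_income: float,
                      monthly_burn: float, horizon_months: int = 6) -> bool:
        """Refuse any spend that the extrapolated cash position can't absorb."""
        projected = balance - amount + horizon_months * (monthly_income - monthly_burn)
        return projected > 0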

And so on, you build from the ground up.


The implementation of both the email app and the self-driving company would use AI agents. (AI agents perform actions towards a goal and run semi-autonomously. I wrote a paper about AI agents, linked here. They’re easy to build.)

But focusing on the implementation… that’s a distraction.

What’s really going on is that the software is working to achieve homeostasis, and I’m using that in the cybernetics sense: it’s using feedback to maintain a steady state, like a biological system:

  • My unread email, left alone, grows indefinitely. The Randian email app keeps my inbox under control.
  • A company is default dead. The self-driving company becomes default alive by anticipating (and steering clear of) compliance issues or running out of money.

i.e. can we use homeostasis as a systems architecture principle?


BUT!

How can software even be aware of potential halting states, and respond sensibly to perturbations?

I know that thinking about “homeostatic software” seems pedantic compared to, you know, actually building it. But I think it illuminates some useful points, and takes us in new directions.

For instance:

OpenAI recently released their software framework for building AI agents. They have a breakdown of the major components, and I like it:

  • Models – the decision making to take multiple steps towards a goal, and a natural language UI
  • Tools – how the software interfaces with the world
  • Knowledge and memory – databases and the ability to search them
  • Guardrails – checks and balances on behaviour
  • Orchestration – developer tools

This is actually a really great summary of AI agent platforms. It’s a fairly agreed-upon architecture today.

But you know what’s missing from a homeostatic software perspective? Being able to extrapolate into the future.

A self-driving company will need a module that runs regularly and asks: what’s coming up? If I continue just as I’m going today, am I dead or alive? Oh I’ll run out of money, I’d better reduce spending. Oh I’ll get shut down for not meeting compliance, I’d better file a tax return.

AI is really good at extrapolation – that’s one of the qualities that has always struck me.

So agents need an “extrapolation” module (i.e. how might my context change) in addition to “memory” and so on.
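To gesture at what I mean, here’s a sketch of an extrapolation module. The interface and the state fields are invented, and the placeholder arithmetic marks where a real version would hand the current context to a model and parse a structured answer.

    from dataclasses import dataclass

    @dataclass
    class Forecast:
        horizon_days: int
        alive: bool                   # do we survive the horizon if nothing changes?
        expected_events: list[str]
        suggested_actions: list[str]

    class Extrapolator:
        """Runs regularly and asks: if I continue just as I'm going today, what happens?"""

        def project(self, state: dict, horizon_days: int = 90) -> Forecast:
            cash_at_horizon = state["balance"] + (horizon_days / 30) * (state["income"] - state["burn"])
            due = [d for d in state["deadlines"] if d["days_away"] <= horizon_days]
            alive = cash_at_horizon > 0 and all(d["handled"] for d in due)
            actions = []
            if cash_at_horizon <= 0:
                actions.append("reduce spending")
            actions += [f"file {d['name']}" for d in due if not d["handled"]]
            return Forecast(horizon_days, alive, [d["name"] for d in due], actions)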

And once you have extrapolation, it gives more resolution to the existing modules:

  • Memory: what should an agent store to memory, i.e. “learn”? Well, remember anything that increases extrapolation accuracy.
  • Decision making: what should “surprise” an agent, and be brought to the user’s attention, versus what should be handled automatically? Well, any event that violates the previous extrapolation.
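Both rules might be as simple as comparing the latest extrapolation with what actually happened. A sketch, with invented names:

    def is_surprising(event: str, expected_events: list[str]) -> bool:
        """Escalate to the human only when reality diverges from the last extrapolation."""
        return event not in expected_events

    def worth_remembering(forecast_error_before: float, forecast_error_after: float) -> bool:
        """'Learn' a fact only if keeping it in memory made the extrapolation more accurate."""
        return forecast_error_after < forecast_error_before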

So the homeostatic perspective changes how I’m thinking about software architecture.


It also strikes me that this “bottom-up” approach to self-driving companies suggests new infrastructure to build.

Such as:

A web server is internet infrastructure that continuously serves a website, without falling over by running out of memory or whatever.

A “corporation server” would be legal/financial/coordination infrastructure that runs a company without halting due to compliance issues or running out of money. What’s the operating system or runtime for a corporation?

i.e. there should be a software platform to run companies without human intervention.
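Squinting, the shape of that platform might be a runtime loop: like a web server’s request loop, except the “requests” are obligations and the handlers are filings and payments. Every name in this sketch is invented.

    from dataclasses import dataclass
    from datetime import date
    from typing import Callable

    @dataclass
    class Obligation:
        name: str                      # e.g. "confirmation statement", "VAT return"
        due: date
        discharge: Callable[[], None]  # the automation that files, pays, or renews

    class CorporationServer:
        """Keep one company default alive: never let an obligation lapse."""

        def __init__(self, obligations: list[Obligation]):
            self.obligations = obligations

        def tick(self, today: date) -> None:
            # The analogue of a web server's request loop, run daily by a scheduler
            for obligation in self.obligations:
                if obligation.due <= today:
                    obligation.discharge()

Wire tick() up to a daily scheduler and the company, in this very minimal sense, runs itself.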

And then you could bolt all kinds of different companies on top, and that’s where the human part would come in:

Self-driving startups, sure. That’s what iterate.com is doing. AI can write software now, with a little bit of guidance, and run ads, and take money. Humans decide direction. So what’s the company server infra that they, and others, could use?

But more enticingly, to me, self-driving co-operatives (2020)? The corporation server doesn’t need to be used exclusively for capitalist ends.

I know the co-op world is working on ideas like this. I’d love to know the state of the art.

Because, historically, what happens so often in business is that management commoditises labour. What if instead labour commoditised management?

