Alternative epistemic agents for restaurant menus etc

19.44, Tuesday 24 May 2022

The dessert menu at Le Relais de Venise has red and blue underlines, which maybe represent an IRL user interface for epistemic agents, and I kinda feel like (a) this is a prototype for smart glasses, and (b) I would like this everywhere (but better).

If you haven’t eaten at Le Relais de Venise, it’s useful to know that there is only one option. You get steak, fries, salad, and their signature parsley sauce. When you finish your plate, you get the identical thing again. No other dishes available. It opened in 1959 in Paris and it has since instanced in a handful of other cities.

The dessert menu, by contrast, is lengthy. Here it is on my Instagram.

  • A dozen options, e.g. La Glace au Chocolat, are listed plainly
  • Interspersed, three more have bold red underlines, e.g. Les Profiteroles au Chocolat
  • Also interspersed, another three have bold blue underlines, e.g. Le ‘Cheesecake’

The key: the underlines mark the most popular items. The top sellers with chocolate get red; the top sellers without chocolate get blue.

I had to ask the waiter to learn this fact, which is, by the by, a neat micro-interaction that engages you in the dessert consideration funnel.

I like this! Here is Amazon-e-commerce-style social recommendation and social proof embedded as print in the dessert menu. It’s PageRank for pudding.

BUT: other types of recommendation algorithm are available.


In Reasonable People #26, Tom Stafford begins by reviewing Knowledge in a Social World (1999) by Alvin Goldman – then riffs on software, social media, and epistemic agents: autonomous agents, computational entities that cooperate with a user in the service of information-gathering tasks. (Goldman’s words.)

It used to be, says Stafford, that there were many tools to explore and develop knowledge.

For example (quoting Goldman again), a tool for searching the web:

The Scatter/Gather system can analyze those pages and divide them into clusters based on their similarity to one another. Aunt Alice can scan each cluster and select those that appear most relevant. If she decides she likes a cluster of 293 texts summarized by “bulb,” “soil,” and “gardener,” she can run them through Scatter/Gather again, rescattering them into more specific clusters.

Super neat!
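
(For flavour, here’s a minimal sketch of the scatter/gather idea in Python. It is not the original PARC system; the library choices, the clustering method, and the summary-by-top-terms trick are my own assumptions. Cluster the pile, label each cluster with a few summary words, let Aunt Alice pick one and re-scatter it.)

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    def scatter(docs, k=3, n_terms=3):
        """Cluster docs into k groups; summarise each with its top TF-IDF terms."""
        vectorizer = TfidfVectorizer(stop_words="english")
        X = vectorizer.fit_transform(docs)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        terms = vectorizer.get_feature_names_out()
        clusters = []
        for label in range(k):
            members = [doc for doc, l in zip(docs, labels) if l == label]
            centroid = X[labels == label].mean(axis=0).A1
            summary = [terms[i] for i in centroid.argsort()[::-1][:n_terms]]
            clusters.append((summary, members))
        return clusters

    # Gather: Aunt Alice picks the cluster summarised by "bulb", "soil", "gardener",
    # then re-scatters just those documents into more specific clusters.
    # summary, chosen = clusters[1]
    # finer = scatter(chosen, k=3)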

But this variety of tools has vanished. The agents have folded into the applications.

On the modern internet, except when we search, we hardly think of ourselves as using epistemic agents at all. …

Delegation of tasks on our knowledge quests hasn’t gone away. Instead, epistemic agents are now deeply encapsulated in the sites and apps we use. Companies design and deploy the epistemic agents and we buy their services, based on them “just working” - in other words, accurately guessing what will make us happy. So Spotify makes a mix which is a pleasing blend of songs I already know and like and new songs which I have a good chance of liking. Amazon suggests products I might like to buy in combination with my current purchase.

AND:

Along with this encapsulation, it seems like one epistemic agent ate all the others - recommendation. Whether it is new music, concurrent purchases, or which take-away is the best in my area, most epistemic tasks can be looked at as recommendations.

In particular, says Stafford: SOCIAL recommendation won.

And he asks: is it possible any longer to imagine other epistemic agents?

The recommendation algorithms in these platforms are great at showing me more of what I like, but are there any which try and identify gaps in my experience and surprise me? The algorithms are great for promoting affiliation, suggesting people I might know, but are there any which deliberately try and open new vistas in my social network, rather than merely complete triadic closure?

(Triadic closure is a concept from network theory: when two nodes share a mutual connection, they tend to become directly connected too, so a network cluster gets more densely interlinked instead of enlarging and bridging to other clusters.)
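
(A toy sketch of the difference, assuming networkx and an entirely made-up little graph: the familiar recommender suggests friends-of-friends, which is triadic closure; an anti-closure agent would suggest people in clusters you currently can’t reach at all.)

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("me", "ana"), ("me", "ben"), ("ana", "ben"),   # my dense little cluster
        ("ana", "cal"),                                 # cal is a friend-of-a-friend
        ("dee", "eli"), ("eli", "fay"),                 # a cluster I have no path into
    ])

    def closure_suggestions(g, node):
        """Friends-of-friends: suggesting these people completes triangles."""
        friends = set(g[node])
        return {fof for f in friends for fof in g[f]} - friends - {node}

    def bridging_suggestions(g, node):
        """People I cannot reach at all: suggesting these opens new vistas."""
        reachable = nx.node_connected_component(g, node)
        return set(g.nodes) - reachable

    print(closure_suggestions(G, "me"))   # {'cal'} -- the usual "people you may know"
    print(bridging_suggestions(G, "me"))  # {'dee', 'eli', 'fay'} -- the anti-closure agent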

It’s provocative!


It is healthy to name the algorithm (as previously discussed; 2020). The algorithmic newsfeed in Facebook is embedded, so it feels “natural,” but imagine if there were some kind of truth-in-advertising legislation.

What if it were law to name the algorithm according to its reward function?

  • Little Miss Reverse-Chronological
  • Little Miss I-Make-You-Click
  • Mr I-Reinforce-Your-Prejudices

And you would get to choose.
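
(As a sketch, with entirely hypothetical post fields: each of those names is just a different reward function over the same posts, and “choosing” means picking which one sorts your feed.)

    def reverse_chronological(post, user):       # Little Miss Reverse-Chronological
        return post["posted_at"]

    def i_make_you_click(post, user):            # Little Miss I-Make-You-Click
        return post["predicted_click_probability"]

    def i_reinforce_your_prejudices(post, user): # Mr I-Reinforce-Your-Prejudices
        return post["agreement_score"][user["id"]]

    RANKERS = {
        "Little Miss Reverse-Chronological": reverse_chronological,
        "Little Miss I-Make-You-Click": i_make_you_click,
        "Mr I-Reinforce-Your-Prejudices": i_reinforce_your_prejudices,
    }

    def build_feed(posts, user, chosen_name):
        """The user chooses an algorithm by name; the feed is ranked by that reward."""
        reward = RANKERS[chosen_name]
        return sorted(posts, key=lambda post: reward(post, user), reverse=True)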

But those are still forms of social recommendation.


Stafford’s provocation makes me imagine epistemic agents which are anti-social, or anti-recommenders, or anti making-you-happy-as-a-reward-function, or all of the above…

  • YouTube suggestions titled: None of your friends have seen this yet
  • A subtitle on my takeaway app: You had that appetiser last time. Why not try something else?
  • Epistemic agents that push understanding the problem space, shying away from providing a solution: Here are the tradeoffs for the various lawnmowers you could buy. Explore to see which of your preferences make the biggest difference
  • Spotify gives you an automatically generated playlist: Try Anything Once, Twice If You Like It.
  • You’ve got a spare hour so your calendar starts suggesting things that you haven’t put on your to-do list but really you ought to do. Not type 1 fun, not even type 2 fun, actually not fun at all but you’ll have a sense of relief when you’ve finally filed your expenses.

That kind of thing.
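
(They all share one shape. A toy sketch, with made-up data structures: instead of scoring items by predicted enjoyment or social proof, score them by how unfamiliar they are to you and the people you know.)

    def anti_recommend(items, my_history, friends_histories, top_n=3):
        """Score items by unfamiliarity: penalise anything my friends or I have already had."""
        def novelty(item):
            seen_by_friends = sum(item in history for history in friends_histories.values())
            seen_by_me = my_history.count(item)
            return -(seen_by_friends + 3 * seen_by_me)   # weights are arbitrary
        return sorted(items, key=novelty, reverse=True)[:top_n]

    # "None of your friends have seen this yet" and "You had that appetiser last time,
    # why not try something else?" are both this score with different data plugged in.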


Let’s assume we’ll all have augmented-reality, networked smart glasses in a year or two.

So perhaps we can expand the Le Relais de Venise approach in two directions.

What if my future smart glasses showed me the top three in ANY category, using that same visual language?

…walk into a book store, see the current top fiction books with a red halo; the current top non-fiction with a blue one. (Bonus points: “popular” according to my chosen demographics and influencers.)

What if the Le Relais dessert menu offered alternate epistemic agents?

…look at a menu, run Scatter/Gather on any list. The words swim and reorganise into categories; I pick one, focus, they re-categorise. Maybe not so useful for the sweet course. Could be handy to learn about wine.


You may ask why I wasn’t focusing on dessert instead of spinning off about agentive software UI and algorithmic hegemonies and taking notes for later. Yes, I ask myself that too. Focus on the cheesecake, Matt.


Many years ago I got obsessed with habit-breaking days. We live our lives in self-reinforcing networks of habits: you walk down the street so you see the Pret coffee shop so you go in and you see the snack you always get and… BUT: walk down the other side of the street and your eye is caught by an old spot that you can only see from that angle which takes you to a place where you try something else and you sit in and do your emails before your commute so you get a seat on the train aaaaand… you’ve got a new routine.

What if you discovered a secret toggle, deep in the Settings of your phone, and it was labelled “Routine” – and one morning you tapped “Turn off until tomorrow.”

Then your Citymapper ranks a route at the top which is almost as quick, but you never take it. Your Priority Inbox makes sure it shows you emails from people you typically don’t read. Your alarms are all late; you get breaking news notifications from publications you don’t read. A klaxon goes off if you get the same darn sandwich from the same darn place for lunch. Your phone rings but actually it has spontaneously placed an outbound call and is just letting you know. It has called your father. He doesn’t have a blue underline in your contacts, which your phone knows. “Hello,” he says, picking up, “What a surprise! I was just thinking about you.”

