
What if the simplest games to capture cooperation problems always involve 3 players?

  • Writer: Adam Timlett
  • Nov 3
  • 8 min read
The title image shows a three-way game involving the idea of triadic closure, a concept from graph theory demonstrating that interesting interactions can be three-way, not simply two-way.

In the standard teaching of game theory, we model the simplest possible games of cooperation as games with only two players. One-player games do exist, but in them a single player makes all the decisions, so they are not taken to model cooperation.


However, it seems to be assumed that two-player games are the minimal model we can use to study the properties of cooperation and coordination problems.


In my 2025 book On the Origin of Risk, and also on my Turing Meta blog and in various talks, I have argued that classic two-player games may not be the simplest correct place to start when modelling cooperation. Instead, it may make sense to add complexity by considering what I call 'organisation games', in which we still have two players but multiple games, such that the simplest game is a meta-game in which all players benefit equally, but each has their own choices to make that contribute to the collective value of the payoffs as the game itself changes.


For an analysis of the implications of this alternative formulation, see my YouTube video here: https://youtu.be/47acmgZGhWQ


In the video I argue that different kinds of games in classical game theory look like meta-games in which everyone gets the same payoff, but each player has a different number of individual options for satisfying the collective. The 'temperature' of the collective game rises, and games look more zero sum, when there are fewer options for each player to satisfy the collective.
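As a rough illustration of this framing (my own sketch, not taken from the talk), the snippet below treats an 'organisation game' as one collective payoff shared by all players, with each player holding some number of options that still satisfy the collective; the fewer options a player has, the more the game 'heats up'. The player names, option counts and the particular temperature measure are all assumptions made for illustration.

```python
# Sketch of an 'organisation game': one collective payoff shared by all
# players, but each player has a different number of options that still
# satisfy the collective. All numbers here are illustrative assumptions.

def temperature(options_per_player):
    """Crude 'temperature' of the game: hotter when players have fewer
    acceptable options, i.e. less wiggle room to satisfy the collective."""
    return sum(1.0 / n for n in options_per_player)

# A 'cool' organisation: every player has several ways to contribute.
cool = {"alice": 4, "bob": 5, "carol": 3}
# A 'hot' organisation: players are boxed in to one or two acceptable moves.
hot = {"alice": 1, "bob": 1, "carol": 2}

for name, game in [("cool", cool), ("hot", hot)]:
    t = temperature(game.values())
    print(f"{name} game: options={game} temperature={t:.2f}")
```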


I think this is a useful way to frame problems of effective cooperation and coordination in an organisation, where the organisation benefits from everyone cooperating and can take systemic decisions to provide employees with enough options to give them the 'wiggle room' to cooperate.


However, when I proposed this analysis to a client and in talks to technology teams, I noticed that the participants' zero-sum perspective was very clear. People just didn't see it as reasonable to treat the true state of cooperation and coordination as one in which the payoff is collective rather than individual. I valued it as a simplification, but to them it just didn't ring true.


Separately to this, I have become interested, from a climate perspective, in the limits to growth and in how economic theory should respond to the fact that we live on a finite planet. Having engaged with that research, I would now say there is a different sort of minimally complex cooperation and coordination game that I hadn't given enough thought to: one in which there are three players, and one player is normally regarded as either an 'externality' or 'the commons', such that the returns or payoff to players is ultimately determined by their interaction with the environment 'player'.


While thinking about this a couple of weeks ago, I had the dramatic realisation that if we argue that every cooperation or coordination game is minimally a three-way interaction between at least two players and the environment, then game theory looks different. Rather than confining the study of the environment to the so-called 'tragedy of the commons', every game minimally involves an environmental interaction which we should account for, in which the environment returns a sometimes delayed payoff to the participants, their offspring or future generations.


What if this is the minimal complexity with which we can actually assess any coordination or cooperation problem?


In that case, things that we normally assume are inverted. It is commonly assumed that if only people saw more cooperation and coordination problems as 'positive sum', then life would be a lot easier. That is probably how I saw it when I gave my talk on organisation games. I viewed people who argued, or assumed, that office politics in companies means we are in a prisoner's dilemma or just a zero-sum game when trying to work with other teams, or even our boss, as limited or 'backwards'. But if we instead assume that the minimum complexity of any game is three-way, including the returns from the environment, then any positive-sum game in which two individuals cooperate to get a higher payoff can lead, or is likely to lead, to additional expending of resources in a finite environment, which returns a negative value to both players or their succeeding generations.
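To make the inversion concrete, here is a minimal numerical sketch with my own invented payoffs, not figures from the book or talk: a standard two-player game looks positive sum until the environment's (possibly delayed) return is added back to the total.

```python
# Minimal sketch: a '2-player positive sum' game re-scored as a 3-way game
# in which the environment returns a delayed cost shared by both players.
# All payoff numbers are illustrative assumptions.

two_player_payoffs = {
    ("cooperate", "cooperate"): (3, 3),   # looks positive sum
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),
}

# Assumed environmental cost of each joint outcome, returned (with a lag)
# to the players: higher joint productivity draws down more resources.
environmental_cost = {
    ("cooperate", "cooperate"): 4,
    ("cooperate", "defect"):    2,
    ("defect",    "cooperate"): 2,
    ("defect",    "defect"):    1,
}

for outcome, (p1, p2) in two_player_payoffs.items():
    total_2p = p1 + p2
    total_3way = total_2p - environmental_cost[outcome]
    print(f"{outcome}: 2-player total={total_2p}, "
          f"3-way total after environmental return={total_3way}")
```

Under these assumed numbers the jointly 'best' two-player outcome is exactly the one whose environmental return cancels most of the apparent gain.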


The view from biology

In Nature, higher productivity is possible if, for example, plants can be made to produce higher yields, e.g. through fertilizers, breeding or even genetic engineering. However, natural organisms don't seem to 'want' to be highly productive in the way we humans would like, and it is an open question why this should be.


Working with biologists who study the bioengineering of organisms, including bacteria and plants, has given me an insight into their intuitions as scientists, which is that the plants or bacteria we would like to bioengineer have in fact evolved to be averse to being highly productive of any particular biomolecule.

 

Why should this be? I believe, and my friend who is well placed in a biology department would expect, that such a strategy would be risky and so not selected for by evolution. What I would argue, then, is that the higher efficiency always implied by higher coordination and cooperation leads to growth or increases in productivity, and this in turn implies a higher throughput of nutrients and energy, just as it does for the plants and organisms that we try to push into a high-productivity regime by supplying high levels of fertilizer, and so on.

 

Essentially, we are trying to make these organisms behave like high-performance engines or machines that we can run faster and for longer. As a result, achieving higher productivity comes with a corresponding jump in the resources we supply, just like a high-performance engine guzzling more fuel. In the natural world, rather than the safety of a tended field, such a regime becomes risky: if that higher performance is selected for by evolution, it becomes a necessity, and as a result other systems and options are shut down. Further, you are using resources that might run out in a natural environment, and you are shifting to a regime where you depend on a high throughput of resources just to reproduce successfully.

 

All of this screams risk that you want to avoid, not embrace. Hence the organism actively resists being pushed into this regime, e.g. by proving very hard to bioengineer for high productivity of some desired product.


Back to game theory

To go back to game theory: in Nature it can still prove difficult, as scientists, to do the analysis needed to understand the problem. There are papers and reviews on this type of problem which shed a lot of light on the experiments and thinking that biologists, e.g. microbiologists, can borrow from economics to understand how behaviour evolves. However, although the economic analysis adds rigour, it also tends to provoke more questions than it answers. A notable review article is: West, Stuart A., et al. "Social evolution theory for microorganisms." Nature Reviews Microbiology 4.8 (2006): 597-607.


One highlight of this review is the conundrum of why selfish behaviours don't outcompete social behaviours in microorganisms in Nature. Clearly, there are potential evolutionary pressures at work if the social behaviours benefit the whole population, and we can study that through the lens of group selection, for example, or alternatively through kinship analysis. However, game theory just doesn't fit very well: it suggests that we should look for mechanisms that punish cheating microorganisms.

 

It is far more natural, in my view, simply to argue that this is a three-way game, not a two-way game. So, at a minimum, we include the lagged return, or payoff, that microorganisms receive from the health of their shared environment. A resource generated by social behaviour, such as bacteria producing a chemical called a siderophore to free up iron from the environment, which they can then use to generate energy, is an example of a public good. It contributes to a healthy environment in a way that is shared by all future generations of the microorganism in the population. By simply arguing that this is a three-way game in which the environment's payoff is also shared by everyone, including the loss of value when siderophore is lost, the 'punishment' of 'cheating' is built into the equation. It means it is in fact inevitable that if there is some way to evolve the avoidance of 'selfish' action, such as not producing your fair share of siderophore, this will be selected through the three-way interaction between the environment and the populations of organisms that contribute to that environment.
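A rough way to see the built-in 'punishment' is a toy simulation along these lines. The parameters, the environment-health dynamics and the lag length are my own assumptions, not taken from West et al.: producers pay a cost to make the shared resource, the environment's health tracks that pooled resource, and after a lag the environment pays everyone back, producers and cheats alike.

```python
# Toy sketch of a 3-way public-goods game: producers pay a cost to make a
# shared resource (e.g. siderophore), the environment's 'health' tracks that
# shared resource, and after a lag the environment pays everyone back.
# Parameters and dynamics are illustrative assumptions only.

PRODUCTION_COST = 0.1      # cost per producer per step
ENV_RETURN = 0.3           # per-capita return per unit of (lagged) environment health
LAG = 5                    # steps before the environment's payoff arrives

def simulate(producer_fraction, steps=50):
    env_history = [0.0] * LAG           # queue of past environment health values
    payoff = 0.0                        # per-capita cumulative payoff
    for _ in range(steps):
        env_now = producer_fraction     # health ~ share of cells contributing
        env_lagged = env_history.pop(0) # the environment pays back with a lag
        env_history.append(env_now)
        # everyone shares the lagged environmental return; producing incurs a cost
        payoff += ENV_RETURN * env_lagged - PRODUCTION_COST * producer_fraction
    return payoff

for frac in (0.0, 0.5, 1.0):
    print(f"producer fraction {frac:.1f}: per-capita payoff {simulate(frac):.2f}")
```

Under these assumed numbers, the per-capita payoff rises with the fraction of producers, because the environment's shared, lagged return outweighs the production cost.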

 

What matters in the details is the lag between participants engaging in activities that affect the environmental player and the payoff that the environmental player returns to all participants.
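One way to write this down, in my own notation and offered only as a sketch: each participant's total return is its immediate game payoff plus the environment's shared return, which depends on everyone's past actions and arrives after a lag.

```latex
% Sketch notation, not the author's formulation: \pi_i is player i's
% immediate payoff, E(\cdot) the environment's return shared by everyone,
% and \tau the lag before that return arrives.
U_i(t) = \pi_i\big(a_i(t), a_{-i}(t)\big) + E\big(a_1(t-\tau), \dots, a_n(t-\tau)\big)
```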


By arguing that, in some sense, all games are three-way, we can now argue that positive-sum games in which two players benefit by boosting their productivity through more efficient cooperation must also affect a third player, the environment, if they obtain that return by boosting productivity. So, if I work out how to run a factory more efficiently by organising it so that two workers arrange their roster such that while one sleeps the other works, and vice versa, I can raise the productivity of the factory. Let's say they can work remotely, one in Australia and the other in Europe. It seems to be a win-win, a positive-sum game.

 

But if all games are minimally three-way, then the boost in the factory's productivity affects the third player, which has a delayed effect on the players by impacting their payoffs in the future. It isn't simply a positive-sum game at all; there is no longer such a simple idea of a positive-sum game. All games are, in a sense, zero sum in reality, because boosting productivity implies exploiting more finite resources.

 

And because of the lag in the environmental payoff or punishment, we cannot scientifically frame a game in which we fix the known number of players. Real games, I argue, generally involve unknown numbers of players, determined by the effects of the initial players' decisions. This would be a more scientific way to study and use ideas from game theory, and using this logic I would then argue that looking at fixed-player dynamics is simply not scientific: it neglects effects essential to the frame of reference by which we define the cause and effect of the systems we study. The true number of players depends completely on the lag effect on the third player, the environment. The speed of use of resources also matters, because resources can be replenished at one level of productivity but not at another, higher speed of consumption.
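The point about the speed of consumption can be made with a standard logistic-renewal sketch; the regrowth and harvest numbers below are assumptions chosen for illustration, not empirical values. The same resource stock survives a moderate rate of extraction indefinitely but collapses once extraction outruns regrowth.

```python
# Sketch: a renewable resource with logistic regrowth, harvested at a fixed
# rate. At a moderate harvest rate the stock settles to a steady state; at a
# higher rate it collapses. Growth and harvest numbers are assumptions.

def final_stock(harvest_rate, growth=0.2, capacity=100.0, steps=400):
    stock = capacity
    for _ in range(steps):
        regrowth = growth * stock * (1 - stock / capacity)
        stock = max(stock + regrowth - harvest_rate, 0.0)
    return stock

for rate in (2.0, 6.0):
    print(f"harvest rate {rate}: stock after 400 steps = {final_stock(rate):.1f}")
```

With these assumed parameters the maximum sustainable yield is 5.0 per step, so a harvest of 2.0 settles near a healthy equilibrium while a harvest of 6.0 drives the stock to zero.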


This, then, is the argument that there are limits to growth because we live on a finite planet, and it goes to the heart of economics, in the microeconomics of game theory. Rather than neglecting the effects of the third player as negligible in so-called 'positive sum' games in which we only posit two players, I argue that these are just games with a significant lag before all the players interact. This is a frame of reference that we define as a minimum in order to be scientific about the true interplay of cause and effect. It looks, from the case of bioengineering, as though Nature has 'learnt' the lesson of the true frame of reference for coordination and cooperation, but we human beings may need to (significantly) tweak our teaching of microeconomics if we want to become more successful as scientists studying it.




Bibliography


West, Stuart A., et al. "Social evolution theory for microorganisms." Nature Reviews Microbiology 4.8 (2006): 597-607.


Vatn, Arild. "On limits." Beyond Uneconomic Growth. Edward Elgar Publishing, 2016. 83-105.


MacLellan, Matthew. "The tragedy of limitless growth: Re-interpreting the tragedy of the commons for a century of climate change." Environmental Humanities 7.1 (2016): 41-58.


Roman, Sabin. Dynamic and game theoretic modelling of societal growth, structure and collapse. Diss. University of Southampton, 2018.

