Adam Timlett
A mathematics of unpacking and indifference functionality
Updated: Dec 28, 2019
Introduction: The problem of a theory of cooperation
In this blog post I will be 'setting out my stall', so to speak. In particular, I will try to explain my central contention, which is that:
Neither economics nor biology has developed a successful theory of cooperation, and in order to develop a successful theory we must look again at how nature solves the problem.
The problem of building something complex under uncertainty can be re-stated as the problem of creating ‘chains’ of coordinating behaviour of sufficient complexity under entropy.
If we choose to think about cooperation problems as the problem of synthesising complex 'coordination chains' of information, then successful strategies for this already exist in the IT sector, and these can also help us to understand nature.
While the 'agency problem' is important, the fundamental enemy of complex coordination chains is, in fact, entropy, and the agency problem can be viewed as a special case of this.
Certain basic strategies must exist to defeat entropy and allow sufficiently complex products to be created, both in nature and in human society.
The mathematics of these anti-entropic structures and products can also be found in the IT sector, and will likely be pervasive in nature. This is, I believe, the mathematics of 'unpacking'.
Computer theory as a coordination problem
For human society, a computer is the ultimate system able to coordinate a long, complex chain of internal events to synthesise complex functions. As was said many years ago of the construction of the humble pencil, none of the agents involved in a sufficiently complicated manufacturing process knows much, or perhaps anything, about the other components they coordinate with. Computer theory takes this simple idea to its limit. Alan Turing, in his theory of computation, along with Alonzo Church, set out a mathematical theory showing how each component in a set of programs is itself broken down into coordinating instructions (components of a program) which know nothing about why or how they link to the other parts. Because of this they can be made ludicrously simple in operation despite the complexity of the whole product. The Church-Turing thesis can be read as stating that any function, regardless of complexity, may be coordinated using this 'divide-into-small-enough-parts' strategy.
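As a toy illustration of the 'divide-into-small-enough-parts' strategy (my own sketch in Python, not anything from Turing or Church), the snippet below builds a 'complex' function purely by wiring together trivial components, none of which knows anything about the pipeline it sits in:

    # Each step is deliberately trivial and knows nothing about the chain it belongs to.
    def double(x):
        return 2 * x

    def increment(x):
        return x + 1

    def square(x):
        return x * x

    def compose(*steps):
        """Coordinate simple components into one more complex function."""
        def chained(x):
            for step in steps:
                x = step(x)
            return x
        return chained

    # The 'complex' behaviour lives entirely in the coordination chain, not in any part.
    f = compose(double, increment, square)
    print(f(3))  # ((3 * 2) + 1) squared = 49

The intelligence, such as it is, lives entirely in the wiring of the chain, not in any individual part.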
The problem of coordination under uncertainty and opacity
When we take a close look at the development of products in the computer industry, we see that the central challenge of coordination is more fundamental than cooperative versus uncooperative agents: it is about building chains long enough to deliver sufficiently utile complexity that are also resilient under entropy (uncertainty), so that they still 'hang together', i.e. remain sufficiently coordinated and viable to give the payoffs that only coordination chains of sufficient complexity are capable of. What I mean by this is that the attempt to coordinate, even under ideal conditions where no agents are inclined to 'cheat' the others, is still very difficult under conditions of uncertainty, i.e. when:
Information about the situation is changing;
The behaviour to coordinate with is, in some sense, a 'moving target';
Agents are operating under 'opacity' and so can see only their local environment; or
Agents have only potentially unreliable information about what lies outside their local domain.
All these factors are more fundamental to coordination problems than the problem of 'duplicitous' agents (e.g. freeloaders) so intensively studied by game theorists, both in nature and in economics. That is not to say that duplicitous or freeloading agents (the 'agency problem') are not a problem - they are a big problem - but opacity and uncertainty are more fundamental, because even if we imagine no duplicity or cheating at all, we would still encounter significant coordination problems under local opacity and uncertainty whenever the chains to be built are of significant complexity relative to their local components.
Convergent evolution of the solution to coordination problems
My hypothesis is that the methods of solving the challenges of coordination that arise in rapid, innovative software development are subject to a kind of 'convergent evolution' with the coordination problems already solved by natural biological systems. This is because the horizon of uncertainty becomes very similar to that found in nature when the coordination chains must be sufficiently long relative to each agent. It doesn't matter that we are more sophisticated agents/designers than individual cells or genes; what we try to design is correspondingly so complex that our view becomes limited relative to the whole. Therefore, if we choose to study organisation in the IT industry and organisation in nature 'in parallel', we should not be too surprised to see the emergence of some significant and non-trivial theory which unites the two. In the next section I will present another way to look at this and then present a more detailed hypothesis: that the theory which describes successful anti-entropic strategies is the mathematics of 'unpacking'.
Cooperation as a result of 1st order properties
One other useful way to frame the question is to say that game theory scenarios like the prisoner's dilemma imply that the decision of each agent to cooperate is the end of the story of that coordination problem. They treat coordination as a '2nd order problem' (one step removed from the details of the problem). For game theorists, cooperation is often a problem of rational choice for each agent, e.g. to cheat or to be honest. In nature, and especially in micro-biology, where coordinating elements are many, coordination can instead most often be seen as a '1st order problem' of maintaining specific coordinating behaviour under opacity and uncertainty. Just deciding to cooperate doesn't mean you will be successful; success depends, crucially, on the details: the information each party has available to coordinate sufficiently for the whole chain of coordination to pay off. It is these details that also present a problem to the IT/computer industry when trying to coordinate different systems, sub-systems, etc. in order to build what is often called a 'stack' of systems that all integrate together.
Entropy is the true enemy of cooperation
My hypothesis about cooperation, then, is that more important than the decision of whether it is rational to forgo some immediate benefit for a larger pay-off towards some shared goal is whether you can overcome the enemy of entropy. A strategy for cooperation, to succeed, must first beat the odds stacked against it by entropy: the odds that cooperating will cost you something that makes it not worth the risk, e.g. the odds of mis-connection, a bad initial connection, wrong information, opacity, or moving targets that cause coordination failure. As coordination chains get longer and more complex they become more subject to entropic effects which destroy the chains and make coordination less likely, regardless of the intentions of the participating agents. Now, I want to talk a little about the options for a mathematics of 'anti-entropic' strategies.
Choosing a mathematics of entropisation using geometric probability rather than statistical mechanics
Anti-entropisation is analysable using statistical thermodynamics, but statistical methods of counting chance events simplify the properties of the units to be coordinated (or dis-coordinated), e.g. into perfectly round billiard balls. This simplifying step makes such techniques difficult to use where we have dissimilar objects to be coordinated, such as in the computer industry or in biology. Note also that statistical mechanics, which contrasts the odds of disorder versus order occurring, assumes in its original applications no resource input to organise, and I believe this doesn't map well to purposive systems where risk is being actively managed by agents, even though non-equilibrium statistical mechanics is a 'thing'. Further, in these risk-oriented scenarios order is not simply the converse of disorder; rather, one agent's order (control of local risk) usually comes at the expense of another's. Hence order is entirely subjective, depending on coordination relative to our object of interest: e.g. if I over-prioritise my tasks ('over-order' them relative to other priorities so that other people cannot optimally order theirs), or if I am a moving target (by insisting on keeping my options open and refusing to commit to a position, thereby becoming a source of uncertainty). In these ways the systems we are interested in are not like statistical-mechanical systems.
Geometric probability
I propose instead that one can take a geometric probability approach to the problem of coordination, and this may still have something in common with the approach taken by the mathematical community working on interfaces (physical interfaces/colloids/boundary conditions). It is also broadly the same approach to uncertainty taken by Nassim Taleb; some remarks he makes in his book 'Antifragile' insist that we look at the properties of objects to understand their 'resilience' to entropy. In any case, the history of geometric probability is a long one, beginning with Buffon's Needle in the 18th century.
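For readers who have not met it, here is a minimal Monte Carlo sketch (in Python) of Buffon's Needle: the chance that a randomly dropped needle crosses one of a set of parallel lines is fixed entirely by the geometry of the needle and the line spacing, which is the style of reasoning I want to borrow.

    import math
    import random

    def buffon_needle(trials=200_000, needle_len=1.0, line_gap=2.0):
        """Estimate the chance a dropped needle crosses a line (needle_len <= line_gap)."""
        crossings = 0
        for _ in range(trials):
            # Distance from the needle's centre to the nearest line, and the needle's angle.
            centre = random.uniform(0.0, line_gap / 2.0)
            angle = random.uniform(0.0, math.pi / 2.0)
            if centre <= (needle_len / 2.0) * math.sin(angle):
                crossings += 1
        return crossings / trials

    estimate = buffon_needle()
    exact = (2 * 1.0) / (math.pi * 2.0)   # 2L / (pi * d), for L <= d
    print(f"simulated {estimate:.4f} vs exact {exact:.4f}")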
Incidence of positive and negative entropisation as odds of coordination
Therefore, to study anti-entropic organisation we must look at the objects being coordinated and at the problem of estimating the coverage of an abstract area: how exposed is the object to 'positive entropisation'? In the simplest case we can think of coordination as controlled by the simple area within which an event counts as coordinating. For example, suppose photons arrive over an area to provide power. If we require all the power to be provided within a very small area (e.g. a small solar panel), then a coordinating beam of light must be very focused to land on the small area it coordinates with, e.g. we require that 100 photons all land within a 1 mm square. If, however, we can expand the area from which to draw power to a 10 mm square, there are many more patterns of light incidence that count as 'coordinating' with our solar panel for the same power.
To analyse the odds that coordination will occur we simply have to calculate the odds that information (e.g. photons) will arrive in a certain pattern over some area which can be defined as a likelihood of some spatiotemporal distribution.
Using such a simple model allows us to introduce complexity only in terms of how we define the area. Qualitative differences in information are just more dimensions of incidence. For example, we can consider the 'wavelength' of the photons in our picture of a distribution: plants require for photosynthesis not simply the incidence of enough photons in a 1 mm square area, but also that the light falls within the right range of wavelengths. By adding dimensions to the notion of the coordinating area, we can introduce significant complexity to describe dissimilar objects that are being coordinated.
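A minimal simulation sketch of this coverage idea, with all the numbers invented for illustration: photons arrive uniformly over a 10 mm x 10 mm region, 'coordination' means at least 20 usable photons land on the panel, and the wavelength band is an optional extra dimension of the coordinating area.

    import random

    REGION = 10.0          # photons arrive uniformly over a 10 mm x 10 mm region (assumed)
    N_PHOTONS = 100
    THRESHOLD = 20         # photons needed for the panel to 'coordinate' (assumed)
    BAND = (400.0, 700.0)  # usable wavelength band in nm (assumed)

    def trial(panel_side, use_wavelength):
        usable = 0
        for _ in range(N_PHOTONS):
            x, y = random.uniform(0, REGION), random.uniform(0, REGION)
            on_panel = x <= panel_side and y <= panel_side
            wavelength = random.uniform(300.0, 900.0)
            right_band = (not use_wavelength) or (BAND[0] <= wavelength <= BAND[1])
            if on_panel and right_band:
                usable += 1
        return usable >= THRESHOLD

    def odds(panel_side, use_wavelength=False, trials=20_000):
        return sum(trial(panel_side, use_wavelength) for _ in range(trials)) / trials

    print("1 mm panel:              ", odds(1.0))
    print("5 mm panel:              ", odds(5.0))
    print("5 mm panel + wavelength: ", odds(5.0, use_wavelength=True))

Enlarging the panel enlarges the region of incidence that counts as coordinating and raises the odds; adding the wavelength dimension tightens what counts as coordinating and lowers them again.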
Abstracting incidence to information
'Positive entropisation' can be defined as the incidence of useful information on the agent/object which requires that information in order to coordinate. The probability of incidence follows a principle of coverage: just as a solar panel has more chance of coordinating with a photon if the panel covers a larger area, so a coordinating object has more chance of coordinating with some other object if the information needed can arrive in any one of many temporal/spatial epochs and still be actionable to create a positive pay-off (positive entropisation).
Example of application to risk: Hedging
Hedging can essentially be understood in this framework too, in terms of the same principle of coverage of some abstract area. If two dimensions are inversely correlated in terms of the probability of incidence of relevant information, then an 'investment strategy' which covers both inversely correlated dimensions can, when the information entropises in one of them, reduce the variance in the pay-off; i.e. the variance of entropisation (positive or negative) is lower due to the structure of the financial instrument.
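A minimal numerical sketch of that claim, using a made-up two-asset model in which a single shock drives the two pay-offs in opposite directions: splitting the 'investment' across both inversely correlated dimensions leaves the mean pay-off roughly unchanged but sharply reduces its variance.

    import random
    import statistics

    def correlated_pair():
        """One common shock pushes the two pay-offs in opposite directions (assumed model)."""
        shock = random.gauss(0.0, 1.0)
        asset_a = 1.0 + shock + random.gauss(0.0, 0.2)
        asset_b = 1.0 - shock + random.gauss(0.0, 0.2)
        return asset_a, asset_b

    unhedged, hedged = [], []
    for _ in range(50_000):
        a, b = correlated_pair()
        unhedged.append(a)                  # everything in asset A
        hedged.append(0.5 * a + 0.5 * b)    # investment spread across both dimensions

    print("mean:     ", statistics.mean(unhedged), statistics.mean(hedged))
    print("variance: ", statistics.variance(unhedged), statistics.variance(hedged))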
A mathematics of unpacking
To understand the anti-entropic strategies of coordination chains we need to consider the means by which the agents and products coordinating can be 'unpacked'. This refers to the process by which we add functionality to some product component or agent so that its area of positive entropisation increases. Unpacking problems can be contrasted with the mathematics of packing problems, where we start with a bounded area and try to fit objects into that bounded area as efficiently as possible, although the two are closely related. Efficient unpacking is the means by which we add functionality that increases the area under which entropisation can be positive for the product, so that it is more likely to retain or acquire coordination with other objects.
Unpacking across utile dimensions
One way we can unpack is by spreading utility across utility dimensions. I have already mentioned hedging, which could be viewed as an unpacking process whereby we take our initial investment in some dimension, identify an inversely correlated dimension, and move some of our investment across from the first dimension into the inversely correlated one. That is one case of unpacking across utility dimensions.
Compression example
However, consider the following example. In preparation for sending some of my favourite music to a friend, I choose to compress the audio file by converting it from a CD track into an MP3 (a lossy encoding), which I can then email to my friend. I thereby save space on both my and my friend's hard disks, at the expense of some of the quality of the audio, and reduce the memory cost (and footprint size) of the file. This particular preparation sounds like 'packing', with a loss of utility in 1st order terms. However, when such packing is done, the compression process is also, in utile terms, an unpacking process. I have unpacked some of the utility of the music file and transferred it to the utility of saving some space on my hard drive - a change in the distribution of utility across dimensions. Specifically, if my email server has a limit on file sizes, this unpacking of utility by lossy compression helps me to coordinate with my friend and with the email server itself. This means that a 1st order process of packing information into a smaller space can also be viewed, from the utility-dimensions perspective, as a 2nd order unpacking process. Hence the ambiguity of packing and unpacking: we are pretty much always doing both.
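A toy piece of bookkeeping for this example, with entirely invented utility numbers and an assumed email size limit, just to show the utility moving between dimensions and which dimension the email server actually coordinates on:

    # Hypothetical utility bookkeeping for the MP3 example (numbers are illustrative only).
    EMAIL_LIMIT_MB = 25   # assumed server limit

    cd_rip = {"audio_quality": 1.00, "size_mb": 600}
    mp3    = {"audio_quality": 0.85, "size_mb": 8}

    def can_email(track):
        # The server is indifferent to audio quality; it coordinates only on size.
        return track["size_mb"] <= EMAIL_LIMIT_MB

    for name, track in [("CD rip", cd_rip), ("MP3", mp3)]:
        print(name, "| emailable:", can_email(track),
              "| audio quality retained:", track["audio_quality"])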
The coordination utility of an indifference curve
I will now explain how adding the utility of an indifference curve to some product has the effect of unpacking the coordination of that object with certain other objects. This idea of the utility of an indifference curve may seem, at first sight, a contradiction in terms. In economics, indifference curves simply emerge from being offered a choice of products/features. In the IT industry, indifference curves are often the result of purchases, i.e. they are often secondary functions of components we choose to buy. As an example, suppose that when deciding whether to purchase graph database platform A or B, you want to know whether each can be used with either of two query languages, Cypher and Gremlin, or works with only one of them. We can say that if database A works with both and is purchased, we become indifferent, in the pay-off that it gives us, to our eventual choice of query language to combine with this prior choice. By purchasing product A we have also purchased a superior indifference curve with respect to our future choice of graph query language - database A comes with an indifference function.
Why indifference functions are valuable
If we are in a situation where the query language that we will want to use is subject to uncertainty, the indifference to the choice of query language that we purchase along with A becomes very important. We can now think of purchased indifference functions - these secondary functions of products - as means of utile unpacking. In the example, investing in a database product with an indifference function on the choice of query language means that we don't have to delay our choice of database until information about the optimal choice of query language entropises. We can even define our temporal 'epochs' in terms of the watershed choices we have to make: we can say that in the epoch prior to our choice of database query language, Gremlin or Cypher, certain actions are 'actionable'. As a result, we can now 'unpack' the selection, purchase and installation of graph database A into this prior epoch, out of the epoch following our choice of database query language. This means that we have unpacked the coordination chain of purchasing and using a graph database in time, and the purchase of A defines the epochs we use to determine the chances that enough optimal choices and actions will be made in the time afforded by purchasing graph database A rather than B. Of course, we don't know in advance when the epoch after our choice of language L will begin; it depends on when the information needed to make the final choice of L entropises.
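A small sketch of why that purchased indifference can be worth paying for, with invented pay-offs: B is assumed to be the slightly better product if Cypher turns out to be the right language, but A's indifference function dominates once the chance of needing Gremlin is large enough.

    # Illustrative pay-offs (assumed): a database that supports the language you end up
    # needing pays off well; one that does not forces a costly workaround.
    PAYOFF = {
        ("A", "Cypher"): 10, ("A", "Gremlin"): 10,   # A carries the indifference function
        ("B", "Cypher"): 12, ("B", "Gremlin"): 2,    # B is better, but Cypher-only
    }

    def expected_payoff(database, p_gremlin):
        p_cypher = 1.0 - p_gremlin
        return (p_cypher * PAYOFF[(database, "Cypher")]
                + p_gremlin * PAYOFF[(database, "Gremlin")])

    for p in (0.1, 0.3, 0.5):
        print(f"P(need Gremlin)={p}: A={expected_payoff('A', p):.1f}  B={expected_payoff('B', p):.1f}")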
A diagram to illustrate this can compare pay-offs under the choice of database A or B for different precipitation times of the final choice of language L, either Gremlin (G) or Cypher (C). It can also be considered a cartoon of the precipitation distribution of choice L, i.e. like a scatter plot.

This allows us to compare the indifference curves over choices of database features under different conditions of entropy. We see the trade-off between investing in primary functions versus secondary functions of indifference to choices in various dimensions. Secondary functions (indifference functions) are more valuable where entropy is high. Figure 2 and Figure 3 show the respective indifference curves.


Unpacking in terms of epochs
In the prior example I considered different periods in time which become occupied with the process of coordinating some related objects, like installing the database. Unpacking in terms of the time used to coordinate can mean increasing the time available to do something. Just as I can increase the incidence of photons on the solar panel by making a larger panel, I can increase the incidence of useful information by giving myself more time to coordinate some task. Hence temporal unpacking also increases the odds that other events will be well coordinated. Rather than thinking of time periods in absolute terms (like seconds), I will be using the term 'epochs' to characterise the time periods which are available for coordination to occur. The temporal size of these epochs informs the geometric probability of positive entropisation, just as spatial sizes do. Figure 4 shows these scenarios, and can be considered a cartoon of the distribution pattern of precipitation of choice L, i.e. like a scatter plot.
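A minimal sketch of temporal coverage under an assumed model: the decisive piece of information arrives at a uniformly random time within some horizon, and it only counts if it lands inside the epoch during which it is still actionable. The longer that epoch, the better the odds, exactly in parallel with the larger solar panel.

    import random

    def odds_of_positive_entropisation(epoch_length, horizon=10.0, trials=100_000):
        """Assumed model: the needed information arrives uniformly at random within the
        horizon, and only counts if it lands inside the actionable epoch."""
        hits = sum(random.uniform(0.0, horizon) <= epoch_length for _ in range(trials))
        return hits / trials

    for epoch in (1.0, 3.0, 8.0):
        print("epoch length", epoch, "-> odds", round(odds_of_positive_entropisation(epoch), 2))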

Summary: Coordination chains are unpacked by choice epochs and by dimensions of indifference to those epochs
To compute the chances of successfully building some coordination chain, we compute the chances of positive entropisation of the necessary coordinating elements. We unpack utility into epochs defined by a series of choices, which we also need to measure in terms of the temporal intervals those choice epochs instantiate and the extent to which we are indifferent to their instantiation. For instance, the time interval during which I can gather the money, information and resources to select, purchase and install a graph database is a certain time period of a certain length. Most of those actions must be taken in the 'post-choice-of-database' epoch. If, however, I decided to purchase database B, that post-database interval must then also be bounded by coming before the eventual final choice of graph query language is entropised, and this is likely to make the time interval shorter. But by purchasing graph database A, this epoch is no longer bounded by that choice and so is likely to be a longer time interval. By geometric probability this changes the odds of information entropising positively and reduces the dimensions of information which can entropise to form a negative pay-off.
Bullet-points: Unpacking and choice epochs created by investment in product utility
Under entropy, the odds of coordination success are the product of the sizes of the spatiotemporal areas within which each piece of final information can be positively entropised, together with the indifference to the dimensions of this entropisation (a sketch of this multiplicative structure follows these bullet points).
The time intervals for positive entropisation are bounded by choice epochs which themselves are determined by product choices and investments.
By purchasing products whose makers have invested in indifference functions as secondary functions, there are fewer bounds on these choice epochs, so the time interval for positive entropisation is likely to be longer. This increases the odds, by geometric probability, of successful coordination and also reduces the amount of entropising information that can have a negative pay-off.
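A minimal sketch of that multiplicative structure, with invented per-link odds: the chain pays off only if every link positively entropises, so lengthening the actionable epoch of any one link (by buying an indifference function) lifts the odds of the whole chain.

    from math import prod

    def chain_odds(link_probabilities):
        """A coordination chain pays off only if every link positively entropises."""
        return prod(link_probabilities)

    # Illustrative per-link odds of positive entropisation (assumed numbers).
    without_indifference = [0.9, 0.6, 0.8, 0.7]
    with_indifference    = [0.9, 0.9, 0.8, 0.7]   # one link's epoch lengthened, e.g. by buying A

    print("chain odds without indifference function:", round(chain_odds(without_indifference), 3))
    print("chain odds with indifference function:   ", round(chain_odds(with_indifference), 3))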
Defining choice epochs by quality level of choice
We can also consider the availability of choice epochs where we define the epoch in question by the quality level of the choice. Let's consider epochs in terms of quality levels: Epoch A is a low-quality choice era, Epoch B is a mid-quality choice era, and Epoch C is a high-quality choice era. Each epoch refers to the same choice dimensions but is differentiated according to the likelihood that a certain amount of information of a certain quality entropises during that epoch. How indifferent is an agent to choices made during these different epochs?
Agility is the dynamics of quality level choice epochs
In the earlier example I discussed a graph database A that was 'agnostic' about which of two query languages it uses, Cypher or Gremlin, and how this 'unpacked' the coordination chain. The important thing to understand, though, is that these are only static examples of available utile indifference to singular choices, e.g. the choice of L. In the case of a graph database that is indeed query agnostic, it could still be the case that, as a customer, you buy the graph database and start developing in Cypher. You might then write rather a lot of Cypher code and only realise at that later point that you in fact have to switch to Gremlin. You might then incur a significant cost, because you now have to re-write all that Cypher code in Gremlin, i.e. significant re-work is required. Recovering utility after entropisation like this is therefore something you also want to be able to do, to increase the chances of a successful coordination chain. Of course, it is not as expensive as it could be if you already have graph database A, since it is agnostic (as described in the prior example). On the other hand, in the choice epoch after the choice to start writing code you still seem to have wasted a lot of money; if further information entropises during this epoch it forces a costly switch in language. This type of coordination problem has been extensively addressed in the IT sector by a huge investment in what is termed 'agility': making cheap transitions available between the same set of choices, or 'choice recycling'.
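A rough sketch of that re-work arithmetic, with all costs invented for illustration: compare the expected total cost when a late, forced switch of language means a full re-write against the case where an agile set-up (for example, a hypothetical conversion step or a code base kept cheap to refactor) makes the switch inexpensive.

    # Illustrative cost model (all numbers assumed). A 'switch' is the late entropisation
    # of information forcing a change of query language mid-build.
    CODE_ALREADY_WRITTEN = 100     # units of coordination effort invested before the switch

    def expected_cost(p_switch, switch_cost):
        return CODE_ALREADY_WRITTEN + p_switch * switch_cost

    rewrite_cost = 80              # re-writing the Cypher code in Gremlin by hand
    agile_cost = 10                # with a cheap, agile transition between the same choices

    for p in (0.2, 0.5, 0.8):
        print(f"P(forced switch)={p}: rigid={expected_cost(p, rewrite_cost):.0f}"
              f"  agile={expected_cost(p, agile_cost):.0f}")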
Purchasing agility
What is so important to understand is that we can purchase systems which have agility built in to them as a secondary function, just as we can purchase static, singular-choice indifference functions. For example, your word processor is just such an investment in agility compared with a traditional typewriter. A word processor means you can throw away some of your now useless words as entropisation occurs, without using whitener or screwing up pages and hurling them into the wastepaper bin. All you waste is a bit of time. The 'undo' button is the button for cheaply throwing away some information produced when you eventually realise it is not quite right, while still retaining most of the work you already did that is OK. This can be understood, mathematically, as a means of moving backwards and forwards along the options of your written product with respect to prior choices, regardless of choice type, limited only by the number of moves and the time to make them. It allows you to move within this indifference plane of now-similar states of the quality of your prior work in an agile (low-cost) way, which is highly efficient under entropy.
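The undo button can be pictured as exactly this kind of cheap movement among nearby states of prior work. A minimal data-structure sketch of the idea, as an illustration rather than any particular word processor's actual implementation:

    class UndoableDocument:
        """Cheap movement back and forth among prior states of the work (illustrative)."""
        def __init__(self):
            self.states = [""]     # history of document states
            self.position = 0      # where we currently sit in that history

        def edit(self, new_text):
            # A new edit discards any 'redo' branch and appends a new state.
            self.states = self.states[: self.position + 1] + [new_text]
            self.position += 1

        def undo(self):
            self.position = max(0, self.position - 1)
            return self.states[self.position]

        def redo(self):
            self.position = min(len(self.states) - 1, self.position + 1)
            return self.states[self.position]

    doc = UndoableDocument()
    doc.edit("Draft one.")
    doc.edit("Draft one, plus a paragraph that turns out to be wrong.")
    print(doc.undo())   # cheaply throw away the bad paragraph, keep the rest
    print(doc.redo())   # or step forward again, at almost no cost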
Agility as dynamic indifference functions
The ability to cheaply switch from one coordinate to another on a set of options as you make other choices is often worth a lot. This is because it is a far more powerful strategy in the fight against entropy than 'static' indifference to singular choices. Agility makes it cheaper to begin building a coordination chain and to re-build it as you go, as plans change, when information that you thought was reliable turns out to be unreliable. In other words, the switch is largely conservative in its function, recovering most of the utility of prior coordination effort despite entropisation occurring continually mid-build, as tends to happen in IT. We can say that, in contrast to the prior unpacking example, where part of the coordination process is unpacked prior to the critical epoch, with the purchase of agility utile unpacking occurs after the original notional critical epoch. You thought you had all the information, but it turns out you did not.
Agility in terms of epochs of choice quality level
Let's say that I make a poor choice of indentation for my text and start writing. This is an epoch of low quality level in relation to that dimension of formatting. The cheaper the revision of my text, the less it matters that I work during this epoch. In other words, even if I 'load up' that poor-quality epoch with a lot of choices, I am still indifferent to making those word choices in this poor-quality epoch rather than in a higher-quality epoch with respect to paragraph formatting. In exactly the same way, I can be indifferent to the choice of database language if my database is agile in this dimension. Let's say it has a built-in tool for converting code written in Gremlin into Cypher. In that case, I can be indifferent to the epoch in which I make my choices. I can write lots of code in an epoch defined by a sub-optimal choice of language without affecting my overall coordination chain. This ability to build significantly during sub-average or average-quality choice epochs therefore works both after and before the notional critical epoch. What is termed 'agility' is mathematically a powerful anti-entropic strategy. Consequently, nature is likely to be under significant selection pressure to evolve secondary indifference functions specifically to allow further unpacking of coordination chains in this way.
Future posts: The ubiquity of an investment in 'Agility' in biology
In my next blog I will be describing how some of these strategies likely work, in nature, and relate to existing research. I will also be describing the indirect evidence of nature's investment in indifference functions such as agility to produce longer and more complex coordination chains even under opacity of local agents. Most of this evidence will be based on the likelihood and evidence of genes which have invested in the secondary indifference functions. I will describe evidence of this investment and predict what these investments will look like.
I will be arguing that the strategy of investing in indifference functions such as agility is likely to be essential to understanding the biology of coordination via signal transduction systems, and also to ensuring stability in complex coordination, e.g. in bacterial ecologies. As part of this, in the next blog I will also be providing specific examples of strategies which incorporate traditional ideas about cooperation, such as dealing with the agency problem, but using anti-entropic strategies.
In addition, I will show that agile behaviour in biological signal transduction systems allows these systems to innovate (mutate) with robustness.