
Options beyond Growth and Failure

  • Writer: Adam Timlett
  • 6 days ago
  • 25 min read

In The 2nd Invisible Hand I argued that Nature may be exploiting options beyond growth to control problems such as pollution and the exhaustion of resources through over-exploitation. One interesting example may be the courtship of flamingos: I argue that its complexity is a cryptographic effect for the flamingos themselves, forcing them to navigate a complex process that limits their ability to reproduce efficiently, and so protects their populations from over-exploiting resources.


So far, this idea has led me to explore two kinds of meta-modular effect, where we step outside of one model, such as a learning gradient, and either shape our own learning gradient or otherwise exploit what the model does ‘outside’ of its main effect.


So, one of these options for controlling one’s own learning gradient or ‘fitness function’ is the idea of encoding gradients to reduce, or ‘self-limit’, the options for shorter-term adaptive mutations to over-exploit resources. We make such options harder to reach, even by random mutations.


For example, an organism can encode genes in such a way that an optional feature, like the ‘propeller’ or flagellum that bacteria use for mobility in finding food, is made ‘essential’ by being encoded at the division point of the cell.


A random mutation that disables the flagellum can have a short-term advantage when food is plentiful: you don’t waste energy moving around when you don’t need to, and so you can out-reproduce rivals. But encoding the flagellar genes at the place where the cell divides means that a mutation which breaks the flagellum also affects the bacterium’s ability to reproduce by cell division at all.


Therefore, I argue that Nature seems to have found a way to transmit a signal of downsides to short-term advantage in reproduction, by doing things like encoding ‘optional’ genes on ‘essential’ genes and processes.


This protects species from evolving in ways that Nature has already selected out because they are harmful in the long term, e.g., due to the environmental effects of over-exploitation, or ‘over-efficiency’. One such case of over-efficiency is the disabling of the flagellum when food is plentiful.


Applicability to Economics


Despite the simplicity of organisms like bacteria, the idea of controlling gradients to avoid such ‘adaptation traps’ is also directly applicable to how we design regulations, etc.

It actually means that the typical solutions humans look for at a local level, such as threats of punishment for cheating, are just one small set of options: Nature has likely explored many more ways to affect these adaptation gradients that are, in some ways, simpler than our current ideas.


The cryptographic effect of optimisation


In the 2nd article, Optimisation’s Shadow is Cryptographic, I argued that there is another duality of optimisation that we may have overlooked: the cryptographic effect of optimisation when viewed from another perspective, e.g. from the standpoint of another agent or objective.


Related to its cousin negentropy, which describes the duality of order and disorder, this is something that, as humans, we often seek to overcome when we reprioritise our optimisations to minimise downsides to the original optimisation. It means that the learning or adaptation gradient for some other objective or agent increases as a direct effect of us optimising to lower the gradient for our main objective. We can usually control this effect when we need to, but it leads to more complexity.


For example, we often overcome such steeper gradients that we’ve created by adding more objectives to our optimisation criteria to minimise unwanted side effects. An example is sorting a table. By adding more indexes to a database than just one, we can negate some of the side effects of relying on a single index, e.g. by adding another that sorts by an orthogonal column to the original. The side effects of our optimisation for sorting data then shift onto things we care less about. One such side effect is that it now takes longer to write data to the database, because we need to update all these indexes whenever we add new data. Reading data got faster at the expense of writing data.
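As a minimal sketch of this trade-off in JavaScript (the data and helper names are invented for illustration): two sorted ‘indexes’ over the same records make reads on either column easy, but every write must now update both.

// Invented toy 'table' of records.
const rows = [
  { id: 3, name: 'carol' },
  { id: 1, name: 'alice' },
  { id: 2, name: 'bob' },
];

// Two indexes over the same data: one per column.
const byId = [...rows].sort((a, b) => a.id - b.id);
const byName = [...rows].sort((a, b) => a.name.localeCompare(b.name));

// Helper: insert a row into a sorted index, keeping it sorted.
function insertSorted(index, row, comesAfter) {
  const i = index.findIndex(r => comesAfter(r, row));
  index.splice(i === -1 ? index.length : i, 0, row);
}

// A write now has to update BOTH indexes: reading by either column
// got faster at the expense of writing.
function insert(row) {
  rows.push(row);
  insertSorted(byId, row, (r, x) => r.id > x.id);
  insertSorted(byName, row, (r, x) => r.name > x.name);
}

insert({ id: 4, name: 'dave' });
console.log(byId.map(r => r.id));     // [1, 2, 3, 4]
console.log(byName.map(r => r.name)); // ['alice', 'bob', 'carol', 'dave']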


In this 2nd article I argued that Nature will also be exploiting cryptographic side effects, rather than trying to overcome them or mitigate them. This is because Nature will use optimisation and the dual cryptographic effect to simultaneously both optimise and control resource use by others by making it harder for them to access resources.


It is as if by sorting a table one way to make it easier to find certain values, we also get utility from making it harder for someone to search for different values not suited to the current sort arrangement.


This cryptographic value is another significant option we see for controlling access to resources in ways that are not sustainable, by stepping outside of our original model. Because such cryptographic effects of optimisation are ubiquitous, they are a cheap way of controlling access to resources.


A concrete example of this in economics would be if we optimised supply chains to work best over local distances. This would make it easier to obtain and make local products, but possibly harder to make products that require non-local supply. This optimisation in itself would have a gradient, or mild encryption effect for non-local access without any regulations, such as trade tariffs, being necessary.


In order to get this desired dual effect, we may optimise for local access slightly differently than if we just didn’t care about non-local access. Hence, the solutions we need don’t just come about by accident; we can enhance them by focused effort. We just need to better understand the advantages of specific optimisations for us that also make utility more encrypted for other agents.


It is inconceivable to me that Nature is not exploiting this duality of optimisation to manage things like excessive resource use and pollution, or to manage the risk of resource access by other organisms, by rogue genes, maybe even by cancerous cells. Even as organisms seemingly optimise for easier access to resources for themselves, I predict they also encrypt resources for others.


Examples of this may be found in areas such as developmental biology, where certain regulatory pathways make it more difficult for genes to affect the odds of the sex of offspring in order to prevent genes found only on females from biasing the odds in favour of females versus males.


It is also inconceivable to me that we won’t find utility in this strategy for our own economy when facing issues such as climate risk and the need to modify supply chains to make our economy more sustainable, e.g. by making unsustainable activity more inefficient and undesirable.

Detail of painting by www.jamesrobertwhiteart.com

A 3rd major method of adaptation in this 3rd article

Now in this 3rd article I want to flesh out the full stack of adaptation options that we get when we step outside of our models. The first two examples:


1) Evolving our own steeper learning gradients in directions where we don’t want to adapt.


2) Exploiting the cryptographic effect, or the raised gradients that occur outside our own optimisation model.


These are two major ‘meta-modular’ effects that I predict that Nature is bound to be exploiting, and which we already have indirect evidence for. They are very similar: both involve using raised gradients that make adaptation harder, either for cases where we don’t want to adapt in that direction, or where we want to make it harder for other agents to adapt in a way that gives them access to resources that we want to control, even as we make things easier for ourselves.


However, there are more effects than these to consider, which also involve stepping outside of our optimisation or our initially ‘given’ learning gradients. Together, these other forms of model ‘escape’ form a complete ‘adaptation stack’ of strategies that are defined at a meta-modular level. That means they are not at the same level as the main model we are using to optimise: they result from defining relationships between the main model and at least one other model.


It’s important to realise that the idea of a meta-model is not rhetoric but a formalisable construct. It has a definite meaning: once we define any primary model, we can define relationships to other models. In the same way, when I hit the ‘escape’ key on my computer keyboard, control passes from the current program, or model, to another program. They are not the same program. Yes, the meta-model is also a model; there is another program that takes control, and there is a program that passes control from one to the other. But if we suddenly switch the subject to these models then we are moving the goalposts. The important thing is that we start with a model, and then define the meta-modular system, the other models that relate to it, around it.


The meta-modular relationships can be defined logically for any formal model that we choose, regardless of the mathematics used to define the formal model we start with. This is a consequence of computer theory. In this respect, such meta-modular systems are of the same ilk as ideas such as category theory. Escape characters in regular expressions or strings, escape keys, ‘end task’ options for programs that are looping: all of this is everyday computer theory in action. Such things are distinct, as meta-modular processes, from models themselves. And this distinction between levels and models can be formalised, i.e., within computer theory.
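A minimal sketch of this distinction, assuming nothing beyond ordinary JavaScript: the generator below is the ‘model’, and the supervising loop is the meta-model. The ‘escape’ condition lives entirely outside the model, which knows nothing about it.

// The 'model': a program that, left to itself, would run forever.
function* model() {
  let x = 0;
  while (true) {
    yield x; // hand a value (and control) back to whoever runs us
    x += 1;
  }
}

// The meta-model: it runs the model, but the 'escape' condition
// is defined out here, invisible to the model itself.
function supervise(program, maxSteps) {
  const run = program();
  for (let step = 0; step < maxSteps; step++) {
    console.log('model produced', run.next().value);
  }
  run.return(); // 'escape': terminate the model from outside it
  console.log('control passed back to the meta-model');
}

supervise(model, 3); // prints 0, 1, 2, then escapes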


(When you fail to understand the difference between a model and a meta-model you can’t actually design or understand programming very well. However, studying algorithms doesn’t give you a good grasp of the difference between models and meta-models. Data scientists don’t necessarily understand the difference, and assume everything is just another fancy algorithm. This is not the case, as I will explain).


My argument is that mainstream economics ignores many parts of the ‘full adaptation stack’ that Nature will have evolved to exploit, partly because it lacks a meta-modular theory to describe the relationships between models, which would allow us to describe the differences in the choices available. But by studying Natural systems, while also developing the meta-modular maths that comes along with this full adaptive stack, we can start to answer questions like: “What does a plan B look like once the economy is not focused only on growth?”


Obviously, due to climate change, we urgently need answers to this type of question, and it is my strong belief that it’s to Nature that we need to look, while using a good grasp of the mathematics of meta-modular systems to be able to see with clarity what Nature is actually ‘trying’ to tell us about options for growth and adaptation.


So far I’ve looked at negative growth constraints, so we can choose what to adapt to and what not to adapt to. This is important for getting out of growth ‘traps’. But the 3rd example is about positive adaptation and what options we have for it, which mainstream economists perhaps don’t characterise or explore very well.


The Adaptive Stack of Mainstream Optimisation and Capitalism


So, let’s see what the ‘adaptive stack’ of mainstream optimisation, and by extension all of capitalism, looks like. And let’s then consider what it could look like once we fill in the gaps in this model, using meta-models, in terms of missing positive adaptation options.


I would argue that we can see this best by introducing the concept of an Adaptation Game.


Adaptation Games


The idea of an adaptation game is that:


1) We have different options available to us when we want to adapt.


2) We are part of a system, which has other agents with diverging interests in what strategy we use to adapt, so that:


3) What suits one agent as a level of adaptation may not suit the others, and vice versa.


4) Specifically, there are different levels and extremes of adaptation, and some of these levels suit the lower-level agents, and some of these levels of adaptation suit only the higher-level collective, or higher-level agents in some system.


5) Some levels or extremes of adaptation don’t suit the lower-level agents because they involve the extinction of most or all of the agents concerned. This means that the adaptation has a liquidation step, or a liquid phase.


6) Effectively, these different levels and extremes of adaptation require a meta-modular theory to properly define them.


a. We have some levels of adaptation that happen within a given model, and we have some levels of adaptation that only happen outside the given model we start with.


b. Defining the model in a meta-model gives us the shells for these different extremes of adaptation. In other words, we can only coherently define the different strategies using meta-modular maths within the framework of computer theory.


c. Computer theory is about the essence of organisation. Adaptation is about changes to organisation. That’s why the mathematics of adaptation games and extremes of adaptation is ‘written’ in computer theory.


The revenue generating computer program example


For example, let’s say that I have a program running on a computer to perform some task, which is supposed to have value to me as the owner of the computer.


· The person who owns the program I am running is paid a subscription by me, and in return the program is supposed to earn me money over and above the subscription and the costs of running the program on my computer.


And let’s say that this program is adaptive: it learns as I give it data, and I evaluate its success in terms of the revenue it earns for me, the owner of the computer.

This is no different from having a company with a department that performs some revenue-generating function, and which continually tries to adapt its service to earn more revenue for the company.


· The owner of the company is like the owner of the computer.


· The owner of the program is like the department in the company that is supposed to earn revenue for the company.


· But this also means that the department manager earns a wage, which is a cost to the company.


· This wage is just like the subscription fee that the computer owner pays to use the program running on the computer for the benefit of the computer owner.


If the computer program is not earning the revenue that justifies the subscription fee, then there are two basic options for me, as the owner of the computer or the Head of the company.


Adapt or be Liquidated


1) Adapt the program so that it hopefully earns more money by being better adapted to the task.


a. Note that the program is already adapting as it is a learning program, so this is the default option. The lowest level of adaptation is to let the program continue to learn.


2) Liquidate the program. ‘Escape’ the program and cancel the subscription fee. Use the money, computer power and memory, to purchase and run a different program that hopefully will perform better.


a. This is the largest level of adaptation. We can use the resources to do something very different, so the adaptation step is much larger.


b. This option works for the computer owner but represents ‘extinction’ for that particular instance of this program. It’s good for the computer owner, but not the program owner. In the company analogy it would be the dissolution of that company department. Maybe the people who work there would be used in the new department or elsewhere, maybe they would have to find themselves a job with an entirely new company.


We can represent this in game theory as an adaptation game.


· We can see that the department head wants to adapt at a lower level, but the company head doesn’t care as much about which level of adaptation is necessary. Their interests therefore don’t exactly align.


· The risk of dissolving the department acts as a threat, which means it is in the interests of the department head to adapt sufficiently, even if this is expensive, because the alternative is a more extreme level of adaptation in which the job of department head may itself be dissolved.


Liquidation is inefficient


This more extreme level of adaptation involves a ‘liquid phase’, in which we recover the resources used in the program or department function and re-deploy them. For that reason, it’s not very efficient.


In physics, we know that every time we convert energy from one form to another there is an unavoidable loss of energy, most of which goes into heat.


This is part of the laws of thermodynamics. They limit things like efficient storage and recovery of energy.


The same is true of the liquid phase of any extreme adaptation process, like dissolving a department or deleting a computer program, except that in this case we not only lose physical resources in the cost of liquidation, we also lose information. The cost of information loss is already well known in economics.


The cost of losing information


A classic example of the cost of losing information as we liquidate an asset is the depreciation of around 30-40% when you simply drive a new car off the dealer’s lot, onto the street, and then immediately try to sell it to someone, to liquidate the asset. The loss of the car’s value reflects the loss of information available to the next customer about whether the car is actually functioning and roadworthy.


In the case of deleting a computer program or dissolving a department, we also lose much of the information investment we had in that system, so the process is lossy. And migrating that information can also be expensive and lossy, just as converting energy from one form to another is expensive and lossy.


I would argue that this adaptation game and these two basic levels of adaptation describe the basic ideas of capitalism. They show how we seek to generate sufficient adaptation in an organisation or a market by having these basic options for the level of adaptation we can use. Let me explain how.


The moral hazard thesis and the adaptation game


Specifically, we see in the idea of moral hazard that, in order to incentivise success and hard work (i.e., sufficient adaptation at the lower level), it is important that if you fail to generate revenue, or fail to avoid being a drain on the finances of the company you work for, you know you will pay a price (i.e. that the higher-level agent will move to the liquid-phase adaptation level).


For example, if banks after 2008 were considered ‘too big to fail’, this removed the threat of personal consequences for bank employees who failed to protect the investments of the bank’s customers and shareholders, creating moral hazard.


The fact that the US government did not consider liquidating certain banks a viable option in 2008 is why many economists argued that, once in this scenario, we could no longer prevent excessive risk-taking by bank employees in the future. With no credible threat of punishment, they argued, there was also no incentive to do things differently next time.


The extreme option, the moral hazard of adaptation via a liquidation phase, therefore seems essential for functioning capitalism: it motivates sufficient hard work and adaptation at the more efficient, lower adaptation level.


Why Nature isn’t so stark as capitalism


However, why would we expect that everything in Nature, in terms of adaptation, is as ‘black & white’ as these two stark options: ‘adapt or die’? And what about the fact that many organisms also face pressure to adapt, but are not subject to moral hazard in the same way?


I have shown that, properly understood, an adaptation game is meta-modular. This means it corresponds exactly to the dilemma of whether to escape and delete an adaptive computer program that is not sufficiently useful. If we choose to escape the program, we can then reallocate the resources outside of the original model. But this is less likely to be an efficient transfer of resources than the default option of allowing the original model to adapt, provided that adaptation works. (It is currently an open research question whether a sufficiently intelligent adaptive computer program can also be incentivised to adapt more successfully by the mere threat of being deleted.)


Leaving aside moral hazard, we see that this is a meta-modular question, where the payoffs are determined by the efficiencies and costs created by each adaptation process. This means we should analyse any other options using meta-modular mathematics, and not any other form of mathematics, because only meta-modular maths can express, in informational and physical terms, the true costs of escaping and liquidising the information in one model in favour of instantiating another. It also demonstrates conclusively that there is a 3rd option available to agents that fail to adapt sufficiently.


A 3rd option is available when defined in terms of meta-modularity


Let’s now consider another option available due to the meta-modular nature of adaptation games: the case where a back-up option is available to the program owner or Department Head, whereby if the original computer program is not earning sufficient returns, we can pivot to a new use of most of the same information. Rather than liquidating the original program, we consider a pivot to a new function. This type of pivoting involves formalising the change in type and model while conserving much of the information from the prior program.


It is therefore a 3rd option.


Due partly to the influence of the moral hazard thesis, but mostly because we haven’t recognised that this is a meta-modular game, the 3rd adaptation option has probably been systematically overlooked in mainstream economics.


Using meta-modular mathematics, however, we can formalise notions such as ‘repurposing’ and ‘reusing’ that are popular in non-mainstream economics, such as climate economics.


Repairing and adapting things has been argued to be an essential strategy, qualitatively different from mainstream growth-oriented economics. However, we can now give these formerly intuitive notions a formal clarity that they may have lacked to date.


There is therefore, a 3rd option in which we adapt in a more drastic fashion than the default option, but less drastically than the higher-level adaptation that requires a liquid phase before reallocating resources.


The 3rd pivot option requires despecialisation


The 3rd option requires despecialisation, defined as the ability to reallocate bits between models, rather than reallocating resources only after liquidation.


In the computer program example, it would involve not deleting the whole program after escaping it, but rather escaping first, then taking the same bits that the original program used, which still reside in memory after escape, and reallocating them, many in their current form, to an entirely new program or ‘model’.


This back-up utilisation option means we would potentially design for it from the start, and that having the back-up reduces the risk, to the lower-level agent, of a fully specialised program that either succeeds or is liquidated.


We can represent this as different games, showing how the game changes as the risks shift to make one option, rather than another, favourable. We can also contrast the lack of more than two options when we specialise with the options available under despecialisation.


To illustrate a simple example of reallocating bits between models as part of a program, I want to present the possibility of a logic change to a program as a pivot, of the type seen in the programming language JavaScript. In JavaScript we can test for the equality of a value, yielding a true/false output.


e.g. 5 === 5


This evaluates to true in JavaScript, because the value 5 is indeed equal to 5. (The assignment operator, meaning ‘make x = y’, e.g. make the number of apples = 5, uses one equals sign, while comparison uses two or three.)


But if we don’t use three ‘=’ signs as the comparison operator, JavaScript will interpret the type in a flexible way, and so the logic evaluation can change depending on how strictly we tell JavaScript to interpret the data, e.g. whether 5 can be numeric data or has to be text (text is typically delimited by quotes, so: '5').


5 == '5'


In JavaScript this less strict comparison also evaluates to true: because one data type is treated as flexible, JavaScript coerces the text '5' and the number 5 to the same type before comparing, so the test is biased towards evaluating as true.


This represents a concrete and formalisable micro-example of a pivot to reallocate bits. In this case, some bits are reallocated from the ‘numeric’ type to the ‘text’ type, versus the original logic that used three equals signs to test for strict equality instead of two. This adaptation by the programmer may, in principle, make the program revenue-generating. In reality, more complex pivots can be defined informally (though in theory they are just as formalisable), as in the sketch below.
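Here is a minimal runnable sketch of this micro-pivot; the function names and stock data are invented for illustration. The same lookup logic is reallocated from purely numeric input to text input (say, from a web form) by relaxing strict equality to loose equality.

// Invented stock data and function names, for illustration only.
const stockLevels = [
  { sku: 5, count: 12 },
  { sku: 7, count: 3 },
];

// Original, specialised logic: expects sku as a number.
function findStockStrict(sku) {
  return stockLevels.find(item => item.sku === sku); // 5 === '5' is false
}

// The micro-pivot: the same logic reused for text input
// by relaxing === to ==.
function findStockLoose(sku) {
  return stockLevels.find(item => item.sku == sku); // 5 == '5' is true
}

console.log(findStockStrict('5')); // undefined: the old model rejects text
console.log(findStockLoose('5'));  // { sku: 5, count: 12 }: the pivoted model accepts it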


Lip-reading: a real-world example of despecialisation


As a concrete, complex real-world use case, consider a lip-reading algorithm project. This might take videos of moving lips as inputs, and output the written text interpreted as being said by those lips.

But if it doesn’t work very well, the lip-reading program can be adapted to a back-up option that reverses the mapping of inputs to outputs: supplying text inputs would now produce a realistic animation of lips.


This reuses many of the bits that perform the mapping, but has potential for revenue generation in the computer-generated imagery market rather than the automated lip-reading market. It’s an entirely different use, but it reuses the same bits that map between moving lips and text.
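A minimal sketch of the underlying move, with an invented toy mapping standing in for a real lip-reading model: the ‘bits’ that encode the forward mapping are reused, unchanged, to serve the reverse one.

// Invented stand-in for a learned model: a forward mapping
// from lip shapes to text.
const lipToText = new Map([
  ['shape-01', 'hello'],
  ['shape-02', 'world'],
]);

// The pivot: reuse the SAME entries, reversed, so that text now
// drives lip animation instead of lips driving text.
const textToLip = new Map(
  [...lipToText.entries()].map(([lip, text]) => [text, lip])
);

console.log(lipToText.get('shape-01')); // 'hello'    (original use)
console.log(textToLip.get('hello'));    // 'shape-01' (pivoted use, same bits)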

Going back to the micro-example, let’s look at the different games for the JavaScript example and how the rational strategy can change when we add the 3rd option of micro-pivoting. In the tables below:


· The individual, or Department Head, agent has the single strategy shown in the column.

· The higher-level, or Company Head, agent has the ‘Liquidate’ or ‘Do Nothing’ options, which run across the top row.

· The combination of strategies is chosen by combining each player’s choices, though in the first instance only one player, the Company Head, has two strategies available.

· The payoffs to the players are written in brackets: the Individual player’s payoff is the first value in each square, and the Company Head’s is the second.

· Therefore, the Company Head simply chooses the strategy most preferable to herself, which is ‘Do Nothing’ in the first case, yielding a payoff of ‘8’, the best option for the Company Head in that scenario.


Agent specialised and game favours specialisation

[Payoff table image]

Game payoffs now change to favour liquidation of the program assuming no option to modify the program.

[Payoff table image]

Type Pivot example when game disfavours the primary task

[Payoff table image]
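Since the payoff tables above are images, here is a minimal sketch of the same three-game structure in code. The option names and all payoff numbers except the ‘8’ quoted above are assumptions for illustration only.

// Payoffs are [Individual, Company Head]; illustrative numbers.
const games = {
  favoursSpecialisation: { doNothing: [6, 8], liquidate: [0, 5] },
  favoursLiquidation:    { doNothing: [6, 2], liquidate: [0, 5] },
  withTypePivot:         { doNothing: [6, 2], liquidate: [0, 5], typePivot: [5, 6] },
};

// The Company Head simply picks whichever option pays her best.
function companyHeadChoice(game) {
  return Object.entries(game).sort(([, a], [, b]) => b[1] - a[1])[0][0];
}

for (const [name, game] of Object.entries(games)) {
  console.log(name, '->', companyHeadChoice(game));
}
// favoursSpecialisation -> doNothing  (her payoff of 8 beats liquidating)
// favoursLiquidation    -> liquidate  (lower-level adaptation no longer pays)
// withTypePivot         -> typePivot  (the 3rd option changes the rational choice)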

Hence, we see that it is the utility of specialisation, although a core mainstream economic concept, that we need to modify in order to accommodate a 3rd, positive option in adaptation games.


This 3rd option, created by despecialisation, is only formally definable using computer theory, together with the notion of uncertainty about which algorithm we can use to do the new work after escape while reusing prior work, rather than liquidating it.


Pivoting is routine in software: it’s called software architecture


Computational architectures are the semi-formalised, concrete products that exploit this general ability to pivot. We design architectures of programs, such as data warehouses or complex applications, so that we can modify them easily when the original program needs to be ‘escaped’ and modified. This is the practice of software architecture. It is a highly skilled profession whose details are still being worked out. Contemporary examples of its philosophy and practice include Barry O’Reilly’s Residuality Theory (in which O’Reilly identifies the key issue of using models, and the need to ‘escape’ models, even when designing software architecture).


When we discover that the existing software needs to be adapted by the programmer to do a different task, we need options, or we face liquidation. Ideally these options are foreseen in the architectural design of the software, so that we can cheaply and usefully pivot to a new design. This is not the lowest-level adaptation step. We might already have a learning algorithm, for example, but still need to modify it in a way that the program cannot adapt to by itself.


Similarly, even though software architects also work in departments, this doesn’t mean we can’t distinguish these levels of adaptation. These differences in levels are real, but they are relative: they are defined relative to the primary model, or lowest level of adaptation, regardless of how sophisticated that level actually is.


If the primary model is the function of a software development business, then a change to that business model, or to how the department works or is organised, is the ‘pivot’, not the department’s business-as-usual activity of software development. Such levels are consistently defined only once we define the primary method of adaptation at the lowest level; they are then defined relationally. If that lower level is software development, then ‘escape’ in this case is not doing the same type of software development, but something else, like switching to software operations management or to a different type of software process entirely, while maintaining the data used previously.

Fortunately, even though these levels of adaptation may seem vague when described informally, they are fully formalisable, just as they are in computer theory.


The fundamental nature of the escape and pivot option


Mathematically, this is a different step, and a larger change, than the adaptation within the learning algorithm, i.e., whilst it is running, as part of the algorithm. It is true that we can create an architecture in which an algorithm carries out experiments on computational learning and potentially implements changes to rewrite itself; such programs are getting closer. But should such an algorithm be developed, we can still step outside of it and define a change to it that is unavailable within the algorithm itself. If we then incorporate that type of adaptive functionality into the new algorithm, the same logic of escape applies to the new iteration: there is a further change that is still inaccessible to it and that requires escape from that model to implement.


The key is to understand that computer theory is what applies here, not the study of algorithms per se. So, we can always define what is outside of the current model, no matter how flexible or sophisticated the model is. In the same way we can always define a computer program running on a computer, regardless of how sophisticated the computer program is. Once we make a change, or incorporate a new feature in the program, the same logic of a specific ‘escape’ option applies to the new program, but perhaps it becomes obsolete in the old program.


This is just to say that we can always press the ‘escape’ key for a computer program, no matter how sophisticated that program is, and we can always define something outside of the options of the program we escaped. This is proven by the Halting Theorem, due to Alan Turing, which is a cornerstone of computer theory. Whether an existing program also has the controls to press escape for other programs that it manages or interacts with is irrelevant to the truth that there is always a change outside the model, inaccessible to the current model. What we do once we escape the model is up to us. We can choose to liquidate, or we can choose to pivot, but some options will always become available through escape that were unavailable before we hit it.

As economists, for now, we only use the ‘escape’ key on a failing model to implement a liquid phase, and this is very lossy.


In fact, we don’t trust agents that try to avoid such liquidation by changing the rules of the game, which, in some ways, is all that a pivot is.


For example, a financial-product ratings agency can change the ratings it gives to investment products based on the risk that the bank requesting the ratings will go to a rival agency if it doesn’t like the output of the original rating ‘algorithm’. We see this as a bad source of ‘rule-change possibility’.


In other words, corruption is also about such changes outside the model, made to retain some value of the model for certain parties. With corruption, however, we normally think of the value retained as the selfish conservation of ‘bits’ at the expense of the program’s utility to others. The people who benefit merely from the program continuing to run will do anything to prevent it being stopped. Any changes they are allowed to make to the program are therefore more likely to cause a loss of real value to the company owner. If the original algorithm says the output is ‘x’, and this isn’t favourable to the people who run the algorithm, we must stop them from changing the output to ‘y’ just to satisfy themselves at the expense of the owner of the company, or the supposed beneficiary.


In the same way, scientists are prone to p-hacking: adjusting their analysis until they get the result they want. It has been proposed and recommended that scientists register their planned analysis in advance, in public, precisely so that they can’t change the analysis after collecting the data in order to get the significant result they’re looking for.


But it is neither ‘bad’ nor ‘good’ in itself to be able to change the rules of the game, that is, the current algorithm. More generally, rather than simply a source of corruption, it is a source of adaptation that can be useful or deleterious depending on the details of how it is used.


Again, this debate around corruption and incentives illustrates the ontological stack of mainstream economics, in which people who take steps to protect their jobs when at risk of failing are sometimes thought of as deleterious to the company as a whole.

Protecting a prior investment in work and function is potentially deleterious, but also potentially advantageous, especially because liquidation is very lossy. Yet mainstream economics seems mainly to pathologise this level of adaptation option, or at least has focused mainly on its deleterious use cases (such as the ‘ratings shopping’ or ‘p-hacking’ examples above). So, as a corrective to this bias in mainstream economics against pivots, let’s look again at Natural systems.


Pivots in Nature


One example of such pivoting in Nature is the squirrel’s strategy for winter. As every child knows, squirrels bury nuts for later retrieval in the depths of winter, when food is scarce. However, far more nuts are buried than are retrieved; either the squirrels are forgetful or they are overly industrious.


As a result of this, many new trees are planted by squirrels.


We can think of the emergence of a renewed habitat for the squirrel as a fall-back option if the squirrel doesn’t get that particular nut, whether through forgetfulness or due to a surfeit of nuts.


Such pivoting then happens by default, due to the method of storage. The principle is that rather than liquidising the unused nuts, the food ‘bits’ are reallocated to a new use that indirectly benefits future generations of squirrels: they become tree ‘bits’. It’s a whimsical example, but that belies the power such simple examples hold for economics to learn from Natural systems.


At the heart of the example is despecialisation. The function of burying nuts for winter is probably compromised in its utility by being despecialised: either the squirrel buries more nuts than necessary, spending longer burying nuts it doesn’t need, or it does need the nuts and wastes time because it forgets where they are. Either way, the specialist function looks less efficient. But because this inefficiency directly enhances the back-up option’s utility, it is a type of pivot or bet-hedge that ultimately benefits future generations of squirrels, which likely share genes with the squirrels that originally buried the nuts. In other words, it may not be accidental at all that Nature has produced this behavioural pattern.


Despecialisation looks inefficient to a specialiser, but from an adaptation game perspective it can be very efficient compared to liquidation, which is the option the specialiser doesn’t want to contemplate. When we think of the centrality of the idea of specialisation for efficiency, and our incoherence when we try to talk about the disutility of specialisation, then formalising the value of despecialisation, and its link to meta-modularity, seems to me of great value.


One aspect is that we could avoid too much liquidation and the problem of deciding what to do with those liquid resources.


It also affects the politics of who gets to decide.


Rather than assuming that non-local, central, or higher-level people know best what to do with the liquidity, we might assume that lower-level agents with back-up options, given the tools and choices, are more epistemically reliable users of resources. The supply of resources for changes then remains more locked up in existing processes.


The pivot also creates a gradient that encrypts those resources, so that changes retain their local value at the expense of non-local value. The nut is buried in this forest for the descendants of those squirrels, not for others thousands of miles away, supposedly for the collective benefit of all.




Summary


· I’ve introduced the idea of adaptation games and argued that they can only be properly understood using computer theory, which has a direct and deep link to risk and adaptation once we zoom out to ‘meta-modular’ mathematics.


· I’ve argued that mainstream economics overlooks, or simply pathologises certain kinds of change that are ‘missing’ from the stack of good options for adaptation.


· I’ve tried to explain that computer theory is not a theory of algorithms, and that no matter how flexible an algorithm is, there is always a definable step outside of it; and that there are at least two such steps: one is a pivot, which can be formalised, and the other involves a liquid phase.


· While the pivot option conserves and reallocates bits without a liquid phase, the liquid phase is more lossy. It therefore makes sense that Nature will also exploit these pivots, which is something I will explore further in forthcoming articles here, along with the varied ways that escape can be used to affect adaptation games and offer strategies beyond lower-level adaptation and the liquidation option.


In future articles I will also relate this adaptation game analysis to how we think growth works along with adaptation, and show that there are other strategies for adaptation which don’t depend so much on growth, growth traps and liquidity or liquid phases.


Timestamped PDF for download

