A hypothesis on the difference between 'organism' and 'machine'
- Adam Timlett

- Oct 30
- 5 min read
As a result of the research described in my book "On the Origin of Risk", I am developing a new theory of organisms based on the distinctive way they manage risk compared to typical human-designed solutions.
Central to this effort, I hypothesise a key difference between the concepts of 'mechanism' and 'organism' which can be defined scientifically, that is, mathematically, via a certain concept of risk.
Below is a simple thought experiment that contains the essence of the idea by which we may scientifically define mechanisms and organisms. The intention is for this thought experiment to convey the core ideas of my research in an easily accessible way.
Following the book "Flatland", we can imagine a mathematical world in which imaginary mathematical 'entities' live, consuming nutrients by performing mathematical functions on objects they come across.
The Thought Experiment
In one case there is an entity which consumes things in the form of tables. Any time it comes across a table, it gets energy from it by first counting the table's rows. Once the table is consumed, it moves on until it encounters another.
In another case there is an entity which also consumes things in the form of tables; however, it is more choosy. It only consumes tables that are square, i.e. having an equal number of rows and columns. If it encounters a non-square table, it rejects it as unpalatable.
These two entities sound pretty similar, but they contrast in a single property: how they manage risk.
The first entity can 'eat' any table, no matter its shape. But if the table is 'perturbed' by being pivoted 90 degrees while the entity is counting the rows, so that it suddenly switches from counting the nth row to the nth column, then it will often get the total number of rows wrong, and as a result it will die. This entity is not robust to such changes while it is 'eating'; they are fatal to it. This fragility, and the need for stability while functioning, is why we should call this entity a 'mechanism'. It works like a computational function designed by human beings: it needs machine-minding, and it needs guarantees that nothing changes in the way it is coupled to the environment while it functions, or it will 'error'.
The second entity has a different property: if the same thing happens while it is counting the rows of a square table, i.e. the table is rotated 90 degrees mid-count so that it goes from the nth row to the nth column, it will still get the total number of 'rows' right, and so it will 'survive'. This entity displays 'meta-stability': we can switch the type of thing being counted from row to column mid-count, and the entity is robust to the change. This meta-stability is, I believe, why we should think of this mathematical entity as being like a biological organism.
Figure 1: The effect of rotating a rectangular table 90 degrees mid-count of rows.

Figure 2: The effect of rotating a square table 90 degrees mid-count of rows.

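The two entities can be sketched in a few lines of code. This is a hypothetical simulation, not part of the original thought experiment as stated: the function name, the table sizes, and the modelling of rotation as a mid-count swap of the row and column dimensions are my own illustrative assumptions.

```python
def perceived_row_count(n_rows, n_cols, rotate_after):
    """Rows an entity reports if the table is rotated 90 degrees
    (rows become columns) after `rotate_after` rows have been counted."""
    if rotate_after >= n_rows:
        # The entity finished counting before the perturbation arrived.
        return n_rows
    # Mid-count, the row dimension silently becomes the column dimension,
    # so the entity keeps counting toward the new total (or stops at once
    # if it has already counted past the new end).
    return max(rotate_after, n_cols)

# The 'mechanism' eats a 3x5 table and is perturbed after 2 rows:
# it reports 5 rows instead of 3, errors, and 'dies'.
print(perceived_row_count(3, 5, rotate_after=2))   # -> 5, but the true count is 3

# The choosy 'organism' only ever eats square tables, so the same
# perturbation leaves its count intact: it 'survives'.
print(perceived_row_count(4, 4, rotate_after=2))   # -> 4, the true count
```

For any square table the rotation is invisible to the counter, which is exactly the meta-stability described above.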
Discussion of the Thought Experiment
I will now discuss why the meta-stability of an organism versus the stability of a mechanism is important.
For readers unfamiliar with this type of research, it is typical of a branch of theoretical biology pioneered in the 1980s by people like Chris Langton of the Santa Fe Institute, who founded the field of 'Artificial Life'.
There is also an entire book based on one set of computer simulations of life: "The Case Against Reality: Why Evolution Hid the Truth from Our Eyes" by Donald D. Hoffman.
While I don't agree with Hoffman's analysis of the evolution of perception, I think his method of developing theoretical biology using computer science is a valuable tool for theory development in biology.
Another key resource, which you can probably find online, is Wissner-Gross's experiments. These suggest that keeping options open, another key feature of such meta-stability, leads to seemingly intelligent behaviour and development, and this has already been applied to biological systems.
The idea behind this thought experiment is that whereas mechanisms conserve models and types of data, such as rows and columns, organisms conserve bits, often at the expense of models. Organisms move bits between types in a way that gives high plasticity, but also a type of robustness that is probably essential for coupling to an open environment and functioning without 'breaking' all the time.
This also strongly implies that organisms are atypical computational functions that have to be choosy about their interactions with their environments, i.e. about what data to 'generalise' across, such as consuming not all tables but only certain types. This choosiness often makes organisms appear inefficient: rejecting lots of information or interactions is expensive.
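The organism's choosiness can be pictured as a filter applied before any consumption. This is a minimal sketch under my own assumptions (tables represented as lists of rows, and the function name is hypothetical):

```python
def try_to_eat(table):
    """A choosy 'organism' rejects any table that is not square,
    because only square tables are safe to count under rotation."""
    n_rows = len(table)
    if any(len(row) != n_rows for row in table):
        return None          # unpalatable: reject the interaction entirely
    return n_rows            # rotation-invariant, so safe to consume

square = [[1, 2], [3, 4]]
rectangular = [[1, 2, 3], [4, 5, 6]]
print(try_to_eat(square))        # -> 2
print(try_to_eat(rectangular))   # -> None (rejected, at the cost of lost energy)
```

The rejected rectangular table is the 'inefficiency': energy is forgone up front in exchange for never erroring mid-count.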
Machines are more general types of function, but because they are not choosy about their data in this specific way, they are also, paradoxically, more fragile and dependent on stability.
Both machines and organisms exploit regularities in their environment, but in open environments organisms will be more viable without 'machine minders'. For an intuitive example, think of the difficulty autonomous vehicles have in open worlds.
Another key resource for background is Dave 'Pragmatic' Thomas's talk "Agile is Dead" from GOTO 2015.
He describes agility as being about how you do things, not what you do, and talks about it as surviving perturbations in the environment as you work. Dave uses the analogy of a PID controller. A PID controller is a fine mechanism, but it can only be stable, not meta-stable.
This idea of meta-stability as a key property of organisms is inspired by Dave's talk, rooted as it is in Agile, the idea of 'pivoting' mid-flow, and the notion of robustness combined with plasticity.
Another example of this pivoting is typing the date 28.02.2025.
If I start typing the wrong part of the date into a form, I can switch from typing the first number of the day to the second number of the year without throwing away any bits. When I correct myself in this way, without discarding data, I am acting more like an organism. But to do this I exploit redundancies in information across types.
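One way to picture this, as a hypothetical sketch (the prefix-reuse framing is my own reading of the example, not the author's formalism): when date fields share digits, a keystroke made into the 'wrong' field can be reinterpreted under the new field's type rather than deleted.

```python
day, month, year = "28", "02", "2025"

typed = "2"                        # typed while intending the day field
# Mid-entry pivot: reinterpret the same keystroke as the start of the year.
# Because the digit is redundant across the two types, no bits are discarded.
assert day.startswith(typed)       # valid under the old type ('day')
assert year.startswith(typed)      # still valid under the new type ('year')
print("pivot conserves the typed bits:", typed)
```

The pivot succeeds only because of redundancy across types; if the day and year shared no leading digits, the keystroke would have to be thrown away, as a mechanism would do.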
There is actually a lot of circumstantial evidence in empirical biology that may support these ideas, much of which I discuss in my book, though not in these precise terms. One example is pleiotropy: genes that have many different effects in different contexts. For a machine, this is not really how we would design things; or if we try, it doesn't work very well.
A Note
Examples never explain a principle exactly. Although the thought experiment seems to be just about symmetries, in fact, as the date example above shows, it is a generalisation of this idea to redundancy in bits across types. That is why I framed it as the data type changing from row to column. This means that in reality, a high number of bits are conserved as we switch types mid-flow. With the square example this leads to robustness but no new information; in general, though, it can also lead to a high degree of plasticity, defined as the ability to integrate novel information while switching types yet conserving existing information. There does not have to be exact symmetry: new information can be added via this type of pivoting.




