Author's Note: the math referenced here about the Dow Jones or whatever is really fudgy; this was written 4.5 years ago. Hopefully the idea comes across right all the same.
“When is a piece of matter said to be alive?” Erwin Schrödinger asked. He skipped past the usual suggestions—growth, feeding, reproduction—and answered as simply as possible: “When it goes on ‘doing something,’ moving, exchanging material with its environment, and so forth, for a much longer period than we would expect an inanimate piece of matter to ‘keep going’ under similar circumstances.” Ordinarily, a piece of matter comes to a standstill; a box of gas reaches a uniform temperature; a chemical system “fades away into a dead, inert lump of matter”—one way or another, the second law is obeyed and maximum entropy is reached. Living things manage to remain unstable. Norbert Wiener pursued this thought in Cybernetics: enzymes, he wrote, may be “metastable” Maxwell’s demons—meaning not quite stable, or precariously stable. “The stable state of an enzyme is to be deconditioned,” he noted, “and the stable state of a living organism is to be dead.”
Gleick, James (2011-03-01). The Information: A History, a Theory, a Flood (pp. 282-283). Random House, Inc. Kindle Edition.
In the study of complex systems, whether they be organisms, economies, or entire ecosystems, this insight that organisms strive not toward equilibrium but away from it would seem to have been duly noted. Unfortunately, I do not think it has been: unable to do away with the notion of “stability”, we fall back into the basin of attraction that keeps us relying on the idea that these systems are in fact “stable” in some unspecified way, while masking the paradox with the very ambiguity of terms such as “metastable” or, more recently, “allostasis”:
The original conception of homeostasis was grounded in two ideas. First, there is a single optimal level, number, amount for any given measure in the body. But that can’t be true— after all, the ideal blood pressure when you’re sleeping is likely to be different than when you’re ski jumping. What’s ideal under basal conditions is different than during stress, something central to allostatic thinking. (The field uses this Zen-ish sound bite about how allostasis is about “constancy through change.” I’m not completely sure I understand what that means, but it always elicits meaningful and reinforcing nods when I toss it out in a lecture.)
Sapolsky, Robert M. (2004-09-15). Why Zebras Don’t Get Ulcers: The Acclaimed Guide to Stress, Stress-Related Diseases, and Coping – Now Revised and Updated (p. 9). Holt Paperbacks. Kindle Edition.
By doing this, we end up throwing out the baby without even getting rid of the bathwater: whereas the idea of homeostasis unambiguously describes the integrity of a system in terms of adaptations that keep certain variables within a particular range, the pseudo-concept of “allostasis” specifies no such process by which to gauge an organism’s integrity. The problem, I believe, stems from a subtle framing issue apparent in Sapolsky’s definition, and indeed in the word itself: the assumption that life should be fundamentally defined, in any way, shape, or form, as constancy or stasis.
This connotation isn’t a mere matter of semantics; it’s a relic of the paradigm of cybernetics, founded by the aforementioned Norbert Wiener, and of its flagship concept, feedback. As I elaborated in an earlier post, cybernetics is predicated on the idea that a system maintains a state of equilibrium through negative feedback, in which a deviation from equilibrium leads to a subsequent adaptation that brings the system back. To use an example from neoclassical economics, a discrepancy between the supply of and demand for a certain good will eventually be reconciled by a change in price that brings the two back into alignment. Positive feedback, by contrast, amplifies deviations, carrying the system further and further from equilibrium, a phenomenon you can see in bank runs, where the fear that banks will not have enough cash to satisfy withdrawals triggers a stampede of withdrawals that makes that very outcome more likely. In many cases, this description is accurate: if your body heats up too much, you sweat; and if you keep oversleeping for work, hopefully you take some corrective action like setting two alarms.
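The difference between the two regimes can be sketched in a few lines of code. This is only a toy model; the thermostat gain and the bank run's "fear multiplier" are made-up numbers of my own, and the point is the direction of the dynamics, not the values:

```python
def thermostat_step(temp, setpoint=20.0, gain=0.5):
    """Negative feedback: the correction opposes the deviation."""
    return temp - gain * (temp - setpoint)

def bank_run_step(withdrawn, fear=1.8):
    """Positive feedback: each wave of withdrawals amplifies the next,
    capped at the bank's total deposits (normalized to 1.0)."""
    return min(1.0, withdrawn * fear)

temp = 35.0
for _ in range(20):
    temp = thermostat_step(temp)
# negative feedback has pulled temp back to the 20-degree setpoint

run = 0.01  # one percent of depositors get nervous
for _ in range(20):
    run = bank_run_step(run)
# positive feedback has saturated at 1.0: everyone has withdrawn
```

The same small deviation that the thermostat damps away, the bank run amplifies until it slams into a hard constraint; which of these two regimes dominates in real systems is the crux of what follows.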
But this logic is rarely as airtight as we’d like to believe: most phenomena come in the form of virtuous or vicious cycles whose downstream effects prove downright unpredictable. To take one example, “price signals” rarely work the way economists expect them to, instead exhibiting massive fluctuations with consequences that are rarely, if ever, foreseeable. Rather than following the simple rules of negative feedback, phenomena such as price squeezes and liquidity traps, not to mention the effects of new innovations, technologies, and political realities, contribute to a dynamic in which positive, not negative, feedback is the norm.
From a point of view that prizes equilibrium, this kind of instability is disastrous. During the Great Depression, even those economists who insisted on a laissez-faire course of action did so on the basis that this was the only way the market could eventually “correct” itself in the long run. Keynes, by contrast, despite his mostly unfair association with epistemically overconfident “neo-Keynesians”, had a much better grasp of the dynamics at hand, remarking that “in the long run we are all dead.” He meant to lampoon the hand-wavy notion of “the long run” by pointing out that if we wait long enough, nothing matters any longer; but buried in the quip is the deeper insight that only in death does everything balance out, echoing Schrödinger’s observation that equilibrium is death.
To understand this idea of equilibrium, one need only think about what it would really mean to have a market at equilibrium, insofar as such a thing can be conceived at all. It would mean that there is no surplus anywhere with which to experiment: no new businesses, no innovations, no new technologies, no new job sectors; in short, no development. Instead, we would be left with the very stasis that negative feedback works to maintain. What’s missing is any notion that systems do not merely stand still, but grow (and eventually atrophy). This, not stasis, is the central feature of complex systems, with disequilibrium serving not as an error to be corrected but as an opportunity to be exploited; that is, these systems survive not by correcting disequilibrium but by using it to beget more disequilibrium. Put simply, a system is more developed, more alive, the further it is from equilibrium.
Feedback still plays a role in these systems, but that role is peripheral and descriptive: negative feedback signifies the ways in which a system maintains enough constraints to keep from dissipating, ironically, into a state of equilibrium, while positive feedback illuminates the fundamentally compounding nature of growth and differentiation. The primary metaphor of cybernetics is the thermostat, which uses clear, deterministic rules to compensate for any change in its surrounding environment. I’ll be using a different central analogy: ecosystems, and the opportunistic mechanism of co-evolution borrowed from Jane Jacobs’ The Nature of Economies, as a way of understanding systems that not only thrive on, but define themselves through, compounding disorder.
We generally tend to think of evolution in terms of “adaptations”: camouflage to hide from predators, musculature to overpower prey, greater intelligence to compensate for weaker physical defenses. There is doubtless some validity to seeing things this way, but it ultimately puts the cart before the horse. A phenotype that fails to dominate a given niche will be eliminated from that niche, but this is only the subtractive side of the equation. It ignores the fact that before the elimination there was an opportunity, and that when an opportunity is exploited, it usually results not in fewer opportunities but in more, whether those come in the form of a plant that makes a novel nectar, a mammal that fertilizes the soil in a way amenable to new species, or any other such change that creates yet another niche. It’s not just that the system manages in this way to stay out of equilibrium; it gets further away by acquiring new options, resulting in a richer possibility space than before.
This process of increasing optionality is what allows a system to travel ever further from equilibrium and thus stay “alive”. Just as in an ecosystem, Jacobs notes, this logic applies to economies: new companies, technologies, organizational structures, infrastructure, and so on create new economic possibilities by supporting things that could not have existed before. With each new option, the system has more possibilities to realize, which means, mathematically, that these possibilities expand in a compounding fashion. This phase of compound interest is a state of growth, and it can be seen not just in ecosystems and economies but in the compounding fashion with which humans develop between infancy and young adulthood.
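A toy model makes the compounding explicit. The 5% “yield per option” below is an arbitrary number of my own; the only claim being illustrated is that when new options arrive in proportion to existing ones, both the total and the per-step gains grow:

```python
def optionality_growth(options=1.0, yield_per_option=0.05, steps=30):
    """Each realized option opens fractionally more options, so the
    option count compounds like interest on a growing principal."""
    history = [options]
    for _ in range(steps):
        options += options * yield_per_option  # new niches per existing niche
        history.append(options)
    return history

trajectory = optionality_growth()
# the total more than quadruples over 30 steps, and each step's
# absolute gain is larger than the one before it
```

Running the same recurrence with a negative yield gives the mirror image: the compounding atrophy of a system that has begun losing options.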
On the flip side, should the system lose possibilities for growth, it will find itself in a compounding state of atrophy, as there will be fewer paths available to open up more paths. Sooner or later, the system will hit an inflection point at which it switches from a phase of growth to one of atrophy. There will doubtless be many oscillating episodes of growth and decline in between, a fractal pattern that I will elaborate on in the next section, but whether we’re talking about microorganisms or globe-spanning civilizations, a plunge into atrophy is inevitable. There is a surprising rigor to the cyclical and organismic theories of Toynbee and Spengler.
All this being said, the inseparable ideas of equilibrium and negative feedback retain an importance of their own. While life may be defined as a relentless departure from equilibrium, it also requires sufficient constraints to ensure that it does not dissipate into a permanent state of equilibrium. Each of our bodies has an epidermis and a more or less unchanging body temperature; civilizations have governments; markets have, at the very least, fundamental protections of private property; and ecosystems, though they may seem to be an actual instance of “anarchy”, are themselves shaped by constraints such as geography and physics, to name a couple of readily apparent ones.
Although it’s hard to specify the exact nature of these constraints, a helpful visualization for understanding their importance is the Koch snowflake. Its area is finite and bounded, so it can be inscribed in a circle, yet its perimeter is infinite, allowing for infinite differentiation along its edge.
Although this snowflake’s complexity is limited by its precise self-similarity, other fractals that are not strictly self-similar accomplish the same counter-intuitive feat. Note, however, that in order to have an infinite perimeter, and by extension to differentiate ad infinitum, the snowflake must obey strict constraints on the area it can take up: if we were to fill in the entire circle, we would destroy all of the complexity. Negative feedback is important in that it defines a similar kind of constraint for complex systems.
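The snowflake’s two properties can be checked numerically. The sketch below iterates the standard construction, starting from an equilateral triangle of side 1, with each step replacing every side by four sides one-third as long; the perimeter grows by a factor of 4/3 per step without bound, while the area converges to 8/5 of the original triangle:

```python
import math

def koch_snowflake(iterations):
    """Perimeter and area of the Koch snowflake after each iteration,
    starting from an equilateral triangle with side length 1."""
    sides, length = 3, 1.0
    area = math.sqrt(3) / 4  # area of the starting triangle
    history = []
    for _ in range(iterations + 1):
        history.append((sides * length, area))
        new_triangles = sides       # each side sprouts one new triangle...
        length /= 3                 # ...one-third the size
        sides *= 4
        area += new_triangles * (math.sqrt(3) / 4) * length ** 2
    return history

history = koch_snowflake(20)
perimeters = [p for p, _ in history]
areas = [a for _, a in history]
# perimeters keeps growing (3, 4, 16/3, ...) while areas has already
# converged to within rounding error of (8/5) * sqrt(3)/4
```

Unbounded differentiation inside a hard spatial budget, which is exactly the combination the analogy to complex systems turns on.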
Real-life systems, however, are not created all at once but grow through time, and there are many ways in which a part could choke off optionality by occupying the wrong places. In such cases, the system must sometimes take one step back to take two forward, a phenomenon known in biology as natural selection and in economics as creative destruction, both of which are vital to the continued thriving of these systems. Even organisms follow this logic once one takes into account processes such as cellular autophagy and muscular catabolism, not to mention the constraints on cell growth whose loss defines one of the most prevalent diseases in the world: cancer.
If we were to imagine this kind of generative process of creation and destruction by growing a generic fractal, we would see these additions and subtractions happening in a log-periodic succession of scales (see below). The same pattern of oscillations comprises the growth and atrophy of a complex system, with tiny vibrations and giant leaps existing side by side. This happens because every complex system consists of parts within parts, each of which has its own finite life cycle of growth and atrophy.
Without this fractal layering of life cycles, the larger system would itself become compromised by a lack of optionality, with fewer and fewer spaces remaining for new parts to grow. Just as importantly, that system is itself part of a larger one, and must run through its own life cycle if it is not to compromise the system of which it is a part. As Greg Linster best put it, to the tragicomic howls of disgruntled transhumanists: without death, there cannot be life.