Enactments and Kernels

The canonical idea of science, and by and large its foundational assumption with the possible exception of quantum physics, is that knowledge comes first and foremost from observations that can in some way ultimately be traced back to our senses, and that by comparing and contrasting these observations in nuanced and rigorous ways we can uncover how reality works and by extension what it truly is independently of any of our biases or limitations. This notion goes all the way back to Plato, who described the idea with the following metaphor: we are inside a cave, facing a wall on which a light is shone and shadows can be seen. We cannot directly see the objects casting the shadows, but by observing the patterns in the shadows and applying our intellectual faculties to their various shapes and movements, we can get an idea of what those objects are.

This parable, while canonical, nonetheless was never a rigid dogma: Descartes asked how one could know for sure their life wasn't an illusion cast by a demon (retold today as the "brain in a vat" hypothesis), and Hume complained that we only observe events and superimpose a pattern that fits them, but can never actually know the underlying causality and by extension have no reason to believe we know anything about what will happen next. These qualms, despite revealing fundamental issues that science simply glossed over, nonetheless bought into the same underlying axiomatization in which "observations" serve as the irreducible building blocks on which knowledge is built. Other thinkers such as Spinoza and Kant challenged these assumptions with ontological frameworks that acknowledged and attempted to satisfactorily account for the inseparability of one's observations and the ideas that frame these observations, but Plato's legacy was ultimately amplified by and adapted to the Enlightenment project of finding the unequivocal truth about the universe through the miracle of induction: the belief that one can ultimately corroborate how things truly work from the collective weight of past observations.

And why should it have gone any other way? This mythology has proven fruitful beyond any shadow of a doubt, with all kinds of universal laws that have unceasingly demonstrated their reliability and applicability without skipping a beat. Some of them may prove to be a bit more complicated in the long run, such as Newton's laws giving way to relativity, but their unquestionable efficacy reduces questions of "how do we really know?" to little more than angst. In pretty much every other domain, we haven't seen quite this same degree of certainty, but common sense by and large tells us there's an objective world outside of us that we can understand through observation, and to idly kvetch about this is a waste of time and energy. Anyone who uses "subjectivity" as a bludgeon to categorically deny the law of gravity is free to jump off a building and tell me how it went.

The logical and philosophical shortcomings of these ideas brought up by various philosophers nonetheless show that there are some troubling limitations, and while the fiction of Plato's cave has proven itself extremely effective on countless occasions, our methods have increasingly faltered in the face of more complex problems and in many cases have demonstrated an utter lack of vocabulary to address difficult questions that cannot be separated from notions that were previously dismissed as "subjective". The only way to avoid the lose-lose choice between dogmatic faith in induction and debilitating, arbitrary skepticism is to carefully deconstruct the very concept of observation in order to find more fundamental principles.

Observation Is Not Passive

One of the earliest obstacles to robotic vision was figuring out how to get robots to reliably recognize objects. This might seem trivial, but it's far from it: image recognition algorithms for 2D images are still notoriously bad at properly identifying what the images are about. Some of this is due to the complexity of how we name and conceptualize things beyond our immediate sensory impressions, but even basic obstacle avoidance in robots proved not as trivial as people imagined. The reason is that a single two-dimensional snapshot of something often provides insufficient information for knowing what it is. Consider the example of a table:

[insert the table image here]

From a stationary point of view, there are many different things that the two-dimensional grid of light hitting the eye could corroborate. The only way to get an idea of what it is is to narrow down the possibilities. Luckily, there's an easy way to do this: change your point of view and see which new observations don't match your hypothesis. By moving your body or even just your eyeballs, you can quickly figure out what the actual object is.

This is the source of optical illusions; two-dimensional images lack the constraints that would allow us to rule out what they would "be" in three dimensions, thus allowing for multiple contradictory interpretations. A paucity of information forces interpretation, and as such we have to actively decide what something is. We often think of interpretation as a passive exercise going on between our ears, but this is mere icing on the cake: every time someone or something has to act on incomplete information, it is interpreting.

Because of this, our sensory apparatus is not naively separable from our locomotion or our cognition. Nor is there any reason why any given act of sensing necessarily involves what one would consider a "representation", let alone a "truthful" one: if some sea creature is mechanically compelled to move away because some neuron fires as a result of some simple light-detector being blocked, then so long as this keeps it alive long enough to reproduce, there's no reason why it or its descendants must "see" the world at all beyond the simple causality of this reflex. But while there may be nothing that we would ever think of as a representation, there is an interpretation insofar as the creature does something in response to a stimulus.

Does this mean that there is no representation, let alone any way to objectively observe something, but only Skinnerian stimulus-response? Not exactly: to go back to the example of the table, we can infer a consistent object because we see what does and doesn't change as our own interactions with it vary. The feedback that doesn't change is the invariance upon which we develop a consistent idea, a representation, of that table. There's nothing fake about this representation; it's just that rather than existing anywhere in the chain of events, it instead exists as a symmetry across these interactions.
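
To make this a little more concrete, here is a minimal sketch in Python (the toy scene, viewpoints, and helper names are all invented for illustration, not drawn from any actual vision system) of the eliminate-and-converge logic just described: each new vantage point rules out the candidate interpretations whose predicted appearance fails to match what is actually seen, and whatever survives every viewpoint is the invariant shared across those interactions.

```python
# Toy illustration: candidate interpretations of a scene are whittled down by
# checking their predictions against observations taken from new viewpoints.
# Whatever survives every viewpoint is the invariant across those interactions.

# Each hypothesis predicts what it would look like from a given viewpoint.
HYPOTHESES = {
    "table":    {"front": "rectangle", "side": "rectangle",  "top": "slab_with_legs"},
    "painting": {"front": "rectangle", "side": "thin_strip", "top": "thin_strip"},
    "box":      {"front": "rectangle", "side": "rectangle",  "top": "solid_slab"},
}

def observe(scene, viewpoint):
    """Stand-in for a sensor reading: what the scene actually looks like from here."""
    return scene[viewpoint]

def narrow_down(scene, viewpoints):
    """Keep only the hypotheses whose predictions match every observation so far."""
    candidates = set(HYPOTHESES)
    for vp in viewpoints:
        seen = observe(scene, vp)
        candidates = {h for h in candidates if HYPOTHESES[h][vp] == seen}
    return candidates

# A scene that really is a table, defined by how it appears from each viewpoint.
actual_table = {"front": "rectangle", "side": "rectangle", "top": "slab_with_legs"}

print(narrow_down(actual_table, ["front"]))                 # all three candidates survive a single glance
print(narrow_down(actual_table, ["front", "side", "top"]))  # only {'table'} survives moving around
```

Nothing in this sketch stores a picture of "the table" anywhere; the representation is just whatever remains invariant across the whole family of viewpoint-changing actions.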

But is this representation objectively true? Is it not the case that any representation we see might be inextricable from the prejudices that helped our evolutionary ancestors survive? This is jumping the gun: prejudice is not some lens that distorts an otherwise faithful representation; it's a property inherent to action. To revisit our simple sea creature, there's no right or wrong answer about what the stimulus ostensibly stands for: anything could have blocked its light, and the sea creature survives by acting in a way that from our point of view "assumes the worst"; but it's not as if it's failing to be "objective", it just doesn't have any other information to act on! Representation, being the invariance one finds across responses, is in fact a recognition of prejudice by virtue of demonstrating the existence of something independently of the baggage of any one interaction. A coat rack from a certain distance and angle might look like a person, but once you look at it from other angles, the prejudice of that particular vantage point vanishes; you might reflexively see the pattern again from that angle, but all of your behavior will inevitably be based on the coat rack being just that.

That's not to say that finding such invariances on one scale means one has found what would qualify as "objective" at some larger scale: objectivity on the scale of, say, a group of people would be made up of relationships that hold across the entire group; money, whether printed or made of gold, most definitely has objective value insofar as, regardless of what any one person thinks, it would continue to work exactly the same way with the exact same accessibility. It's not even enough to say "if everyone stopped believing in it"--everyone would have to stop *using it* for it to stop working, and that doesn't happen without extreme circumstances because people have lives to live. It's not just that things are objective despite being constructions; things are objective because objectivity is constituted by the stable composition of action.

Similarly, the representation of the table requires what Deleuze calls an active synthesis--one only perceives this consistent phenomenon by actively changing one's point of view in a sufficient variety of ways. Insofar as these enactments reliably converge on some stable pattern regardless of where they may have started from, one actively constructs the invariance that constitutes this pattern. By pattern, I don't mean a pattern of stimuli, but a pattern of action; as in the coat rack example, you can still see optical illusions even if you know that they're illusions; what matters is that for all intents and purposes your behavior proceeds as if the coat rack is just a coat rack. A representation, existing coextensively with some such behavioral invariance, is a kernel of sensorimotor enactments.

Enactment All The Way Down

Already the whole business of "observation" has been complicated by the fact that a mental representation of an object is not some phenomenon either thrown onto our senses or sitting between our ears like some ghost in the machine but a kernel comprised of invariances that persist across a family of actions. One can no longer speak of passively seeing a table, as any such representation is something synthesized from the actions through which one's point of view varies. The active nature of observation becomes even more starkly apparent when one considers the preconditions necessary to engage in scientific research: any experiment worth its salt must in some way "control" the environment in which it's conducted; this is not a question of any specific methods but a basic tenet of being able to reasonably define what one is looking at.

Nor could scientific observation merely be a matter of the information delivered by one's senses, as something like an electron is simply not visible to the naked eye or even a microscope; one must instead rely on the readings of instruments in order to "see" what is happening. The ability of such devices to reliably signify something not directly graspable is possible only through a technological and institutional gestalt that contextualizes them through particular invariances that persist across practices taking place within this larger framework. It's therefore not faith (at least in any naive sense of the term) that justifies scientific theory, nor even any kind of immediate instrumental vindication (applications rarely have any "clean" correspondence to fundamental ideas), but the active maintenance of yet another kernel, made this time not of raw sensorimotor interactions but of a specific body of scientific practices.

One might nonetheless think of the earlier example of the Precambrian sea creature as an example of an observation that is truly passive, insofar as there is a stimulus foisted upon the organism that elicits a response, but since there is not any kind of invariance one could call a representation, this at best can be thought of as an affection akin to a ball rolling when pushed. Of course, an organism, no matter how simple it may seem to us, is not an inanimate object: from birth to death it continually travels along a trajectory of growth and decay, so it is never inert in the same way as a ball at rest; it's always in flux, and its lifecycle therefore must be seen as a continuum of activity, at no point punctuated by any state of total rest.

The creature's reaction to the potential presence of a predator is therefore not merely a mechanistic response to some perturbation of an otherwise passive body, but feedback that happens in the context of some larger enactment in which it's circumscribed, no different from how looking at an object from a new angle is itself feedback resulting from changing one's vantage point. As the effectively "blind" response of the sea creature demonstrates, feedback need not signify anything external or serve any "purpose" (perhaps the creature gets eaten anyway, or maybe the response was superfluous to its survival) but only change the course of action, and as such it has some kind of "meaning" insofar as it has some kind of effect on the enactment in progress.

Such feedback further articulates the enactment: if I start making myself a sandwich and look in the freezer only to see that I have no bread left, the task must branch off into the sub-task of going out and buying bread. Each such branch, itself a continuous trajectory, could hypothetically branch off further, making it something that cannot naively be reduced to an algorithm (though under some special conditions it can be). Even something as simple as swinging my arm can be interrupted by standing too close to a wall; one can therefore never simply assume some simple atomic change of state (although such things are, again, possible under special conditions). Nor does this need to happen in a purely linear manner: although time physically only moves forward, how one chooses to act is, as noted before, an interpretation of some kind, and so any such action will potentially entail things about the entire enactment. This is not time-travel but a question of one's actions contextualizing the past in ways such that its significance really does change; by adding a new part, a change in the nature of the whole is implied, and this is true not only in a spatial sense but also a temporal one.

An enactment is therefore defined from the outside in as well as from beginning to end, as a path inscribed in a neighborhood of formally defined ends, such ends themselves defined by their relationship to some other enactment, such as the end of getting bread existing by virtue of the immediate task at hand of making a sandwich. Of course, this same "sub-task" is wholly an enactment in its own right and can serve any number of other purposes, either because of the bread that was bought or something else "incidental" to it. Any such enactment may have some kind of "identity" beyond its own execution--not the enactment as a whole, but a part of the enactment at the intersection of some multitude of functions--in other words, a kernel derived from some commonality across a family of enactments that addresses some family of purposes.

Enactments are, in other words, what Deleuze calls "repetitions": acts that are a priori unique, not interchangeable with anything else, while the kernels shared between such enactments' identities belong to what Deleuze would call some order of "generality": a set of relationships according to which different enactments are fit into roles in a way that may make them interchangeable with any number of otherwise different enactments. These "identities", by which different individual enactments are interchangeable insofar as they fit their structural criteria, are the formal properties that define the ends by which any such enactment is instantiated, with its progression between said ends a material phenomenon that may yet articulate itself through other formally circumscribed enactments should the path forward not be fully ready-to-hand.

Two enactments could therefore have literally infinite differences and still share a common identity in terms of their ends--two wildly different supermarket trips may all the same help you complete that same struggle of making that damn sandwich, but of course they could in other senses be different in truly consequential ways; you may in one enactment end up with slightly stale bread and in another meet your future spouse, but both will have interchangeable identities with regard to the fact of getting the sandwich made, even if it is itself a different sandwich and a different post-sandwich world in each course of action. There is therefore no single formal "essence" of buying bread, but a relative essence based on how some kernel formally composes with other such kernels. Beyond such kernels, there is a fundamental contingency to every enactment, and this is what makes it something that is both "filled in" from inside out and enacted from past to future; the latter happening as a means to accomplish the former.

Understanding enactments as trajectories that can only be formally composed with regard to specific invariants also clears up the role of causal entailment: a sentence such as "Bob is mad at Kevin for insulting him" is not based on some succession of states in which different forces act on one another, but on a narrative that contextualizes the relationships between events. Reality is effectively infinite in its "attributes"; it cannot be reduced to a succession of states that exist a priori, so causality in the usual sense of the word cannot be understood in an absolute way. It can, however, be distilled through the extraction of invariants and the concomitant construction of some new kind of causal entailment--hence the ability to acquire laws of physics by establishing relationships amongst a limited collection of variables. These laws are in every sense of the word "real" insofar as they can then be used for practical purposes, but when mapped back into the real world there is always a need for engineering material conditions such that the kernel holds in practice, just as one must engineer an experiment in order to make the kernel hold in the laboratory. The same logic holds for the interchangeability of parts in a working machine, the interchangeability of currency in a system of commerce, or the interchangeability of locations in maintaining a reptile's body temperature; the laws never exist ex nihilo but always in the presence of a constructed but nonetheless unequivocally real system of material constraints.

That we reliably trust in scientific empiricism is not some arbitrary leap of faith but the result of gradually building up a systematizing embedding through generations of models, tools, experiments, practices, and interpersonal folk knowledge that together compose some reliably efficacious system of inputs and outputs. The laws of mechanics do not address all the properties of any given object, but instead specifically posit relationships between certain properties in a way such that one can infer one such property from another simply by applying these laws, effectively acting as a forgetful functor that strips experimental data and practices of all kinds of details and perturbations while preserving some kind of structure. So it might be true that Newton's laws are formulated in such a way as to assume a frictionless vacuum, but this is not an issue so long as the scientific practices supporting them converge on this underlying structure the same way that one converges on a table when looking at it from enough sides.
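
As a toy illustration of that "forgetful" move (a sketch under my own assumptions, not a claim about how any formal treatment actually proceeds), one can picture a full experimental record being projected down to just the variables a law relates, with everything else deliberately discarded:

```python
# Sketch: a law only relates a few properties, so applying it acts like a
# projection that forgets every other detail of the experimental record.
from dataclasses import dataclass

@dataclass
class ExperimentalRecord:
    # The messy, concrete details an actual trial drags along with it.
    lab: str
    date: str
    instrument_id: str
    ambient_temp_c: float
    notes: str
    # The only properties the law will actually relate.
    mass_kg: float
    acceleration_m_s2: float

def project_to_law(record: ExperimentalRecord) -> tuple[float, float]:
    """'Forget' everything except the structure Newton's second law cares about."""
    return (record.mass_kg, record.acceleration_m_s2)

def force_newtons(mass_kg: float, acceleration_m_s2: float) -> float:
    """F = m * a, applied only to what survived the projection."""
    return mass_kg * acceleration_m_s2

trial = ExperimentalRecord(
    lab="basement", date="2024-05-01", instrument_id="cart-03",
    ambient_temp_c=21.5, notes="slight breeze from the window",
    mass_kg=2.0, acceleration_m_s2=1.5,
)
print(force_newtons(*project_to_law(trial)))  # 3.0 N, regardless of the forgotten details
```

The forgotten details never stop existing; they are exactly what the surrounding practices have to keep under control for the preserved structure to keep holding.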

In turn, one cannot simply build the universe back up from fundamental laws: the laws are not wrong, they in fact do show things that do not ultimately change on account of a different angle of approach, but they only address those particular things (and it's important to note that there's no reason to assume any such invariance has always been or will always be). The universe cannot be conceptualized as being made purely of atoms, and causality cannot be purely thought of in terms of laws of motion that accumulate in chain reactions; we can, however, derive more power and understanding by mapping these structures back into the world from which we derived them and using them to extrapolate select things. I can, for example, get a good enough idea of how fast a roller coaster will go by deriving the speed from the height and angle of the drop, even if there would be a fuzzy range around it based on wind resistance or the properties of the materials. This selective ability to extrapolate is key to the evolution of a scientific paradigm, as it allows one to assume certain things about what a certain reading on an instrument means and therefore be able to ask questions about what an unexpected reading on such an instrument would signify. One therefore turns the instrument into a source of truth by creating an isomorphism not between the whole of reality and scientific theory, but between experimental assumptions and theoretical laws, effectively forming an adjunction.
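
To make the roller coaster example concrete: in the idealized frictionless model, conservation of energy gives the speed at the bottom of a drop from its height alone, v = sqrt(2gh), and the "fuzzy range" can be sketched by assuming some fraction of that energy is lost to drag and friction. The height and loss figures below are made up purely for illustration.

```python
# Back-of-the-envelope extrapolation: speed at the bottom of a drop from its height.
# Idealized model (energy conservation, no friction): 0.5*m*v^2 = m*g*h  =>  v = sqrt(2*g*h)
import math

G = 9.81  # gravitational acceleration, m/s^2

def drop_speed(height_m: float, energy_retained: float = 1.0) -> float:
    """Speed in m/s after a drop of height_m, keeping a given fraction of the potential energy."""
    return math.sqrt(2 * G * height_m * energy_retained)

height = 60.0                                     # a 60 m drop (invented figure)
ideal = drop_speed(height)                        # ~34.3 m/s in the frictionless idealization
lossy = drop_speed(height, energy_retained=0.85)  # ~31.6 m/s if ~15% is lost to drag and friction
print(f"roughly between {lossy:.1f} and {ideal:.1f} m/s")
```

The law says nothing about the cart, the track material, or the weather, but within the engineered constraints of the ride it extrapolates well enough to be treated as real.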

Machinery and Generality

Of course, not all enactments converge, and some even seem to converge only to be invalidated in the end: financial markets, to take one example, have unpredictable wild swings that are so big they can wipe out decades of "reliable" gains, and the same can be seen in a lot of "natural" phenomena as well. This kind of "invalidation" of kernels in fact inevitably happens with the deterioration of the kernel's substrate: you can't regulate your pH level or utilize the laws of motion once you're dead, and even if financial gains are wiped out, some pocket of stable gains did exist for a time; it just wasn't necessarily exactly the kernel you thought it was.

So these kernels are not necessarily permanent, nor should they be mistaken for what people call "Natural Law" even if there are (applications of) natural laws that are kernels of observations. In that case, however, how is this convergent in any sense if it doesn't monotonically approach a single point indefinitely? The answer is that a kernel is not an absolute entity, but is a kernel by virtue of being invariant in some particular way with respect to a syntax that it serves as an element of, with each manifestation of this kernel having the correct properties to stand in as an instance of such an element. Similarly, its convergence is relative to the topology of such a syntax: that is, with respect to this syntax, regardless of when or how such a syntax is relevant or tenable or how long it remains so, it converges in accordance with said syntax's own notion of time.

Each such syntax can be thought of as a machine, with its elements being the roles that different parts play, and kernels as the ways in which a part can be considered interchangeable with respect to this or that role. Such interchangeable parts in these machines, with respect to their relationship with the machine, are generalities, distinct from the ways in which they may otherwise constitute repetitions insofar as their "essence" qua generality is a purely operational one. To refer back to the ideas on adjunctions earlier (don't worry about this too much, this is something of a technical tangent), one may see such a generality as the lifting of a repetition to a monad that topologically closes it.

Of course, since kernels are kernels of enactments, they're not technically parts of a machine in the sense of material components; instead they're functions that are composed to create the machine, or perhaps more technically, morphisms. Together, their composition creates new functions, and each such machine is really itself just one big function insofar as it turns certain inputs into certain outputs.
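
The point about composition can be pictured with a trivial sketch (the stages below are my own invented example, not any particular machine discussed in the text): each kernel is treated as a function, the machine is just their composite, and any part with the same input/output behavior is interchangeable with respect to its role without changing the machine's overall function.

```python
# Sketch: kernels as composable functions ("morphisms"); the machine is their composite.
from functools import reduce
from typing import Callable

def compose(*stages: Callable) -> Callable:
    """Chain stages left to right into one big input -> output function."""
    return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)

# Invented stages of a toy "measurement machine".
def calibrate(reading: float) -> float:
    return reading - 0.3            # remove a known instrument offset

def to_model_units(reading: float) -> float:
    return reading * 2.0            # map the reading into the theory's units

def classify(value: float) -> str:
    return "anomaly" if value > 10 else "expected"

machine = compose(calibrate, to_model_units, classify)
print(machine(4.9))  # 'expected' -- the machine is itself just one big function

# A different part filling the same role leaves the composite's behavior unchanged.
recalibrated = compose(lambda r: r - 0.3, to_model_units, classify)
print(recalibrated(4.9))  # 'expected' again
```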

A scientific paradigm that composes certain instrumental practices with theoretical abstractions is such a machine (with the grander "theories" behind it being themselves yet another kernel), and similarly a pattern in financial markets that gets exploited, complete with all the institutions and technology that make such a pattern and its respective exploitation possible, would be another such machine, meaning that the failure of prediction was not the pattern per se but simply the machine breaking down and its users not entirely noticing.

There is nothing wrong with this: our lives are themselves trajectories with a beginning and an end, and these various kernels are pockets of stability that aid us not only in finding a satisfying resolution to our own story, but in finding kernels common to our collective trajectories that may in turn further the story of one's lineage, one's civilization, one's values, or even the biosphere we all inhabit.

By seeing theory as the distillation and refinement of practice rather than its progenitor, we effectively reverse Plato's arrow: forms are no less real, even if they are constructed. This is not a dismissal of epistemology, but a celebration of just how powerful it really is. We need not angst about the impossibility of induction when the key to knowledge has always been the limitless power of invention.