Prototypes & Syntax

"No experience would count as grounds for revising, for example, that 5 + 7 = 12. Were we to add up 5 things and 7 things and get 13 things, we would recount. Should we still, after repeated recounting, get 13 things we would assume that one of the 12 things had split or that we were seeing double or dreaming or even going mad. The truth is that 5 + 7 = 12 is used to evaluate counting experiences, not the other way around."

—Rebecca Goldstein, Incompleteness

There's No Such Thing as First Principles

In light of the above quote, the basic laws of addition do indeed seem to be first principles: the fact that "believing" in them is essential to being able to do science in the first place would seem to say as much. But if something is a first principle, that would mean it is not in any way up for debate. In the usual sense, this is true: nobody seriously puts 5+7=12 up for debate. You can philosophize about it all you want, but every time you engage in a monetary transaction, measure a piece of wood, make sure you packed enough sandwiches for the family, or do work in the laboratory, you're demonstrating that you do not, in fact, doubt the truth of basic arithmetic.

All that being said, there are places where these rules are not fully taken for granted: amongst theoretical mathematicians, for example, there are times when something like 5+7=12 is not assumed but instead derived from a set of even simpler rules known as Peano Arithmetic. I will not go into detail here, but the axioms of Peano Arithmetic assume only the existence of the number 0 and the ability to generate other numbers by taking a "successor", such that 1 is the successor of 0, 2 is the successor of the successor of 0, and so on. With a few rules defined on successors and the number 0, it's possible to prove all the basic laws of addition and multiplication on whole numbers. So these truths are not necessarily first principles, since one can come up with something prior to them.
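To make the derivation concrete, here is a minimal sketch of Peano-style numbers in Python (the names Nat, Zero, S, add, and the helper functions are mine, chosen for illustration; Peano's actual axioms are stated in formal logic, not code):

```python
# A Peano-style construction: the only primitives are "zero" and "successor";
# addition is then defined by recursion on its first argument.

class Nat:
    pass

class Zero(Nat):
    pass

class S(Nat):  # S(n) is the successor of n
    def __init__(self, pred: Nat):
        self.pred = pred

def add(m: Nat, n: Nat) -> Nat:
    # Peano's defining equations: 0 + n = n, and S(m) + n = S(m + n)
    if isinstance(m, Zero):
        return n
    return S(add(m.pred, n))

def from_int(k: int) -> Nat:
    # Build S(S(...S(0)...)) with k applications of the successor
    return Zero() if k == 0 else S(from_int(k - 1))

def to_int(n: Nat) -> int:
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

assert to_int(add(from_int(5), from_int(7))) == 12  # 5 + 7 = 12, derived
```

Everything the final assertion checks follows from the two equations inside add; nothing about "twelveness" is assumed anywhere.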

One may of course ask what the purpose of this is: is it not true only in a pedantic sense? Consider negative numbers: these do not come from our everyday experience; you cannot see a negative number of cows or rocks. On the other hand, the concept clearly has some import: without negative numbers, it would be impossible to balance financial transactions or do all sorts of similar things, and this even extends into the physical world, where constructions such as imaginary numbers are required for the equations that give us things like telecommunication. People were able to come up with these ideas, despite their having no "self-evident" basis, because the basic rules could be further generalized by positing an even simpler set of assumptions.

But there's no way that the idea of negative numbers could simply be posited ex nihilo without first encountering the limits of the natural numbers. We learn the natural numbers and their operations through a combination of experience and our parents and schoolteachers explaining addition and subtraction to us, telling us what reliably happens when we put things together and take things away. Negative numbers are simply a result of extending the logic of this story: is it possible to take away 4 rocks if there are only 3 to begin with? Maybe not in the physical world, but what if you promise someone you'll give them 4 rocks because you know you'll have one more by the end of the day? Only then do you look for "more prior" principles to expand the possible things you can do.

From Prototypes To Principles

The acts of counting, adding, and subtracting positive numbers become the prototype for the concept of negative numbers. The concept only exists because the question "what happens when you subtract more than you begin with?" cries out for an answer: first quietly, while it's still only philosophical, then louder and louder as the question takes on increasingly material and practical motivations.

But couldn't a framework with negative numbers come out of nowhere anyway? Surely there is nothing stopping one from creating these symbolic rules, and nothing stopping the rules from being consistent regardless of what led up to their creation. Yes, this is absolutely true, but the problem is that symbolic rules alone do not mean anything. I could create any number of arbitrary systems of propositions with this or that rule of inference, but they can only have meaning if they actually *stand* for something, and not just because I said they stand for something but because people can actually find some kind of relevant congruence. In other words, one must demonstrate some kind of semantics.

And if you were to go back to prehistoric times and teach cavemen numbers, would you be able to make them understand what negative numbers are? What would be the basis for accepting the idea of negative 2? You would not be able to show them an example, because there is not yet a relevant situation one could clearly point to; they do not yet have currency or any modern notion of debt (as opposed to ritualistic ideas of debt, which are very old). The only way negative numbers could become meaningful to them would be to first teach them the natural numbers and then let them ask for themselves, "what happens if you take away more than you have?"; a question that would itself remain merely metaphysical until they gradually learned on their own what such a concept would allow them to do.

To put it another way: for negative numbers to exist as an actual general principle, and not just arbitrary manipulation of logical propositions, they have to have something to generalize. Generalizing is also not by any means an arbitrary act: the purpose of a general principle is closure. What do I mean by this? I mean that if you don't have a concept of negative numbers, subtraction is not a closed operation--if you subtract 4 from 3 you simply fall off the edge of the map and can't get back on it. By introducing negative numbers, anything subtracted from anything else keeps you within the syntax of arithmetic. The motivation for closing a system is itself pretty simple: the more you can guarantee that some representation of reality is syntactically closed, the more you're allowed to just mechanically crunch the symbols instead of having to consciously worry about whether what you're doing makes any sense.
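A small sketch of the difference in Python (the function names are mine, for illustration): over the naturals subtraction is a partial operation, while over the integers it is total, and totality is exactly what licenses mechanical symbol-crunching:

```python
# Subtraction over the naturals: partial, so every use needs a validity check.
def sub_natural(a: int, b: int) -> int:
    if b > a:
        raise ValueError(f"{a} - {b} has no natural-number answer")
    return a - b

# Subtraction over the integers: total, i.e. the operation is closed.
def sub_integer(a: int, b: int) -> int:
    return a - b

sub_integer(3, 4)    # -1: the system absorbs the question
# sub_natural(3, 4)  # raises: we have fallen off the edge of the map
```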

This logic does not just apply to an individual trying to save labor, but to systems of all scales, as well as to the production of more advanced knowledge in general. The degree to which a person or an institution or a society or anything else can rely on purely syntactic operations is the degree to which it can behave in a way that embodies the logic of these operations, and therefore the degree to which such behavior can be composed into more sophisticated enactments. It's therefore not just that one could save some effort in figuring out what one owes in a financial transaction: the very existence of financial transactions relies on this kind of operational closure. Financial crises, for that matter, can be seen as ultimately the same thing as bugs in computer programs: failures to compose that stem from some deficit, however small, of operational closure. Physical technology, with all of its moving parts unavoidably coupled, obeys this same logic. So does the validity of experimental data in science, where "observing" an electron is done not through one's senses but through the reading of a scientific instrument, whose output is understood as a syntactic proposition because of a context that composes that instrument with an entire body of practices, technology, and literature operating on rules consistent with the syntax of that proposition.

So where, then, do the natural numbers and their operations come from? Leopold Kronecker said that everything is a human construction except for the natural numbers, which were given to us by God. I have no doubt he was a smarter man than myself, but he's dead wrong about this one: the natural numbers are themselves a construction that comes from the reliability of our natural ability to account for objects. By this I don't mean "it turns out to be the case 100% of the time"--that's just begging the question--I mean that this reliability is embedded in our habitual cognition the same way that computation is embedded in the activities of a personal computer, and as such is a kernel of our species' ancient social and technical practices, no different from the way in which sensory representation is a kernel of stimulus-response mechanisms.

Emergence via Composition

Just as it's impossible for purely formal reasoning to signify anything beyond the arbitrary shuffling of symbols, it is also impossible to grasp anything without some kind of symbolic mediation. This might seem patently false: after all, one does not need to know any formal math to make oneself breakfast. But this is only true if one thinks that all formalism is overt. There is, however, no reason why one needs explicit symbols or conscious volition for a computation to happen: we might understand the behavior of a computer through the code we run on it, but that code is just a projection of the pattern of action by which the computer lights up different parts of the screen according to what runs through it. Similarly, we can give a cashier a dollar and get back change for our purchase without any ambiguity or reflection, because the form of our financial and economic system allows these actions to work with this kind of reliability and simplicity.

Even deep within our own bodies this same logic takes place: we're able to become better typists, guitarists, or weightlifters by creating regularities in our neural behavior that don't simply become one less thing to think about, but themselves constitute a grammar that forms the building blocks of new affordances. Non-pedantic formal reasoning therefore does not "point" to anything in the real world, but instead organizes the world around it and composes kernels out of enactments.

To make myself more clear, consider this specifically formal definition of an affordance: it is simply a potential transformation of a proposition. The reason one can define an affordance formally, though not materially, is that an affordance is formally definable insofar as it naturally composes with some system of relationships. For example, one can formally state what a bike tire does by defining its specific functional relationship to a working bicycle; you may be able to say other things about bike tires, but the meaning of a proposition is strictly limited to its relationships to other propositions, so any formal definition of a bike tire can only include things that define it in terms of some network of propositions describing the bike. That being said, an affordance, formally defined, is not necessarily limited to what can be explicitly identified: even if we never have overt words or mathematical models or diagrams for something, a formal idea of it exists insofar as our behaviors follow such a form.
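As a loose illustration of "a potential transformation of a proposition" (the state representation and names below are my own invention, not a standard formalism):

```python
# An affordance modeled as a transformation of propositions about a system.
# The tire is formally "defined" by what exercising the affordance does to
# the network of propositions describing the whole bike.
from typing import Callable

BikeState = dict[str, bool]              # propositions about the bicycle
Affordance = Callable[[BikeState], BikeState]

def replace_tire(state: BikeState) -> BikeState:
    return {**state, "tire_inflated": True, "rideable": True}

bike: BikeState = {"tire_inflated": False, "rideable": False}
bike = replace_tire(bike)                # the affordance, exercised
```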

With affordances defined in this manner, one big question remains: what of things that are syntactically imaginable but not feasible? To use our favorite example, a system of addition and subtraction where negative numbers are not defined leaves questions that have yet to be answered. In a situation like this, one must invent: on paper, one may invent the negative numbers, and in one's own infant system of material exchange, one may invent the I.O.U. A syntactic possibility with no appropriate closure is therefore itself a different kind of affordance: whereas the type previously defined is purely formal, this type is material. One exploits it by creating something that works in such a way that it composes with the system, and through this composition has its own formal properties defined.

The material nature of the affordance, however, is not simply arbitrary: while material properties that have no effect on whether or not it composes with the system can be written off as accidents, those accidents themselves exist as affordances that enable new behaviors with regard to this enlarged system. This makes the material entity a potential prototype, one which can itself be closed off into a system through composition with some other material entity at some point. Any part of the system definable by syntactic closure is a kernel, a positively definable "essence", and this interplay of materiality and formality is the process by which new kernels "emerge" from enactments through continual co-evolution.

Metaphor

Formalism, being a way of framing phenomena, and constituting the way in which a material process organizes itself, is in many respects how such a process sees the world. Once again, this doesn't just apply to constructing a formal model of something on a computer or a piece of paper: a market, for example, formalizes currency in such a way that any two dollar bills are interchangeable, and any material differences beyond reasonable evidence that they're legal tender are dismissed as accidents (and even then, the system is built around counterfeiting being either intermittent or subtle enough that it doesn't significantly disrupt the circulation of bills). But what about an organism? Human cognition certainly isn't so cut and dried: much of the function of our activity might be formally definable when talking about things like replacing a bike tire or memorizing a phone number, but changing a bike tire still requires some degree of monkeying around not reducible to just the formally defined steps, and our memory is always rooted in context, leading to a need to anchor arbitrary things to something material.

Nonetheless, the process is the same as above: our formal ideas about the world, our syntax, whether tacit or explicit, are grounded in the prototypes that we act in relation to. Whatever parts of replacing a bike tire or memorizing a phone number are irreducibly contingent, they are still defined with regard to some formal nature. The irreducible "monkeying around" that comes with changing a bike tire cannot itself be formally defined, and therefore must be accidental to something essential; it serves as a kind of glue within the confines of the formally defined ends of an enactment.

Viewed from another angle: if we wish to compare two things, we have to decide on the predicates on which they're being compared--whether that predicate is number of legs, fur color, lifespan, etc. A common prototype is how this happens--it is not about two things having the same "class" in some taxonomy, but about conceptualizing some idea that both can be sufficiently related to. Once one has such an object, a prototype, one can effectively "factorize" the problem into a formal semantic space by defining things in terms of how they differ with regard to the prototype, and eliding all things that never differ within it (for example, when comparing horses, one would never have a criterion of "which horses can breathe underwater" or "which run on Windows vs. macOS").
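A toy sketch of this "factorization" (the prototype and its predicates are invented for the example): comparison happens only along the axes the prototype fixes, and everything that never varies against it is absent by construction:

```python
# Comparison "factorized" through a prototype: the prototype's predicates are
# the only comparable axes; everything else is elided by construction.
PROTOTYPE_HORSE = {"legs": 4, "coat": "bay", "lifespan_years": 27}

def compare(a: dict, b: dict) -> dict:
    # "Breathes underwater" can never appear: the prototype has no such axis.
    return {k: (a.get(k), b.get(k))
            for k in PROTOTYPE_HORSE
            if a.get(k) != b.get(k)}

compare({"legs": 4, "coat": "gray"}, {"legs": 4, "coat": "bay"})
# => {'coat': ('gray', 'bay')}
```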

In algebra, there is an operation analogous to this, where one factorizes an algebraic structure by "dividing" it by some identifiable substructure, resulting in a new set of algebraic terms constituting a simpler algebraic system. One may, for example, take a structure representing addition and subtraction amongst the integers, and then "divide" this by a structure representing addition and subtraction amongst all multiples of 4. The result, skipping the details about the exact rules, leaves only four elements: one representing the integers divisible by 4, one representing those that leave a remainder of exactly 1 when divided by 4, and likewise for remainders 2 and 3.
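In standard notation, this is the quotient of the integers by the multiples of 4; a sketch, eliding the verification that the operation below is well defined:

```latex
\[
\mathbb{Z}/4\mathbb{Z} = \{\bar{0},\, \bar{1},\, \bar{2},\, \bar{3}\},
\qquad
\bar{k} = \{\, n \in \mathbb{Z} : n \equiv k \pmod{4} \,\},
\]
\[
\bar{a} + \bar{b} = \overline{a+b},
\qquad \text{e.g.}\quad \bar{3} + \bar{2} = \bar{5} = \bar{1}.
\]
```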

Any syntactic closure by which one organizes one's actions is also an algebra: actions and stimuli mediated as symbols defined with respect to the formal relationships constituting the syntax. Any such system, effectively being a defined semantic space and itself being the result of our interactions with a prototype, is the result of factorizing a given formal understanding of reality according to that prototype. This happens not in an explicit or eager fashion, but is instead lazily evaluated; that is, the factorization is a latent relationship that exists insofar as, whenever an entity acts in a way that can be formally defined with respect to the prototype, that action can also be formally defined as an element of the syntax of this semantic space. Insofar as actions are defined with regard to some prototype, they inhabit this partitioned space.

In the example of evaluating horses, one can see the way in which a prototype allows one to talk about horses with regard to something that defines a horse, rather than in all kinds of ways that don't have much to do with what makes a horse a horse. Of course, there is no one essence of being a "horse"; the prototype is a practical choice that facilitates some kind of grammar that works well enough for whatever purpose. Categorization never depends on taxonomy, but on what allows for an effective logical apparatus. The resulting set of elements may work well enough for categorizing, but this is not a semantic "space" per se: it is just a framework for talking about things. A space is a representation, a persistent object that one can traverse.

For such a partition to truly act as a space, one must therefore be able to maintain some consistent sense of location that does not change depending on which way one is looking. Therefore, the prototype against which the original syntax is defined must yield the same juxtaposition regardless of the angle from which it's approached; in other words, it must be a kernel. By angle, I mean potentially any incidental difference that would transpire from the relevant ways of interacting with the prototype; for example, the observation of a macroscopic mass maintains a certain consistency regardless of how you observe it. By contrast, electrons are sensitive to observation and change depending on how you observe them: you can interact with an electron, but you cannot truly "observe" it, as you cannot consistently represent it.

There is an analogue to this in algebra: depending on what "direction" you decide to approach from when operating on the substructure, you may get a different result; aX may not equal Xa, where X is the substructure in question and a is one of the elements of the larger structure. However, when there is instead symmetry, you get not a mere partition but a quotient, and it produces not only a simpler structure but one that obeys all of the rules of the original larger structure; that is, its form does not in any way differ, although it inevitably shaves off some details. The substructure the original structure gets "divided by" is the kernel, and here one may see the connection to prototypes: if our angle of approach does not affect the nature of the prototype, then it is a kernel in this book's sense of the word, and furthermore, when one conceptualizes the world against said kernel there will be a congruence between this new syntax and the earlier one.
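For reference, a sketch of the standard group-theoretic statement being gestured at (the substructure written X above appears as N here, as is conventional):

```latex
\[
aN = \{\, an : n \in N \,\}, \qquad Na = \{\, na : n \in N \,\}
\]
\[
aN = Na \ \text{ for all } a \in G
\quad\Longrightarrow\quad
G/N \text{ is a group, with } (aN)(bN) = (ab)N
\]
\[
N = \ker\varphi = \{\, g \in G : \varphi(g) = e \,\}
\ \text{ for some homomorphism } \varphi,
\qquad
G/\ker\varphi \cong \operatorname{im}\varphi
\]
```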

When this is accomplished, one is able to reason within a simplified syntax about a more complex syntax through the use of metaphor. It's important to understand that metaphor is rooted in comparing two things with regard to their formal properties: specifically, by evaluating the formal properties of some prototype with regard to the syntax it's situated in. The question of metaphorical validity and efficacy beyond mere formalism belongs to the realm of hermeneutics.