Imagine A Utility Function Stamping On A Human Face Forever

This polemic was originally meant to be something much more abstract in its subject matter and aims, about deep issues both ontological and technical I’ve had with the way people glibly drop the word “value” not just in a moral sense but even in a practical one; a pretense of some catch-all number that can unflinchingly tell you “welp, this thing has more value” so we better do that and not the other thing. Now, as a global pandemic burns through the entire world without any sign of relenting and talking heads call for effectively unconditional “reopening” while making facile comparisons between human life and economic activity that amount to sticking their heads in the sand and demanding everyone else do the same, this essay has suddenly taken on a new importance to me that I didn’t even see when I first told myself I’d finally make time to write it.

But this is by all means still an essay about the deep inextricable fallacies underlying the term “utility” and why it fails not just in the real world but even within the basic mathematical abstractions it was built to interface with, because this facsimile of quantification has become so endemic within our collective discourse that it is impossible to explain why you can’t just mash up human lives, exponential growth, second-order effects, extreme outcomes, the possibility of systemic collapse, GDP numbers, unemployment metrics, and amateur psychology and throw them all in a cooking pot and come out with a nice thick stew that tells you how to optimize society’s results without first exposing the ideological roots of these types of arguments and showing why they’re not in any way backed up by any kind of sound logic but instead built on half-assedly imagining certain boundary conditions that only exist under special circumstances.

Worse, this pseudoscientific midden is further muddled by the dualist spectre of a disembodied “subjectivity” such that, despite the obvious paternalistic undertones of being rigorous and objective in calculating tradeoffs, somehow everything boils down to people’s “subjective” evaluation of all the other myriad factors; all of this leading to the central claim of utilitarianism: that the market magically tells us what people subjectively “value” and we better shut up and listen to it because who are we to say otherwise about what’s good and what’s bad?

And yes, I understand that this is not what every utilitarian necessarily thinks: they might say that due to “externalities” or some other such phenomenon, the market can’t always tell us this, that we have to realign incentives and if we can’t do that we just have to make the best damn calculation we can. I’m not trying to make a straw-man argument or accuse all utilitarians of acting in bad faith (some of my best friends are utilitarians): what I’ll be trying to do over the course of this essay is demonstrate that the very idea of utility takes a concept that doesn’t have any concrete meaning outside of economics, assumes it can be generalized to the realms of ethics and politics, and makes this assumption by failing to understand that the idea of having some metric of value, in which one can assign numbers and organize everything into a total ordering, is not something subjective and ontologically prior to the market but is an objective phenomenon that is constructed by the market. It’s not subjective: people don’t hold orderings of preferences in their head like some kind of homunculus; rather, it is objective insofar that within the context of market activity, one can unequivocally tell you what the worth of something is. In order to explain this further, I’ll need to walk step by step through a brief parable:

A Tale of Two Concepts: Function and Fungibility

Quick question: if I offer you a free hammer or a free screwdriver, which would you take? If you’re going to need both eventually, you’d just take whichever one is more expensive because then you save money buying the other thing. If, on the other hand, you have a hammer and not a screwdriver, and let’s just say hammers are more expensive, you’re probably going to take the screwdriver because even if you could actually get enough of a resale price on the hammer to save money by selling it and then buying the screwdriver, you’re probably not going to spend hours setting up a sale on eBay and delivering it just for a couple of bucks. Now instead let’s say I offer you either a screwdriver or a painting that you and everyone you know think is hideous but has a market price of tens of thousands of dollars; well now the resale would be worth the time and energy even if the resale value is only a fraction of the sticker price. Unless, of course, you’re rich enough that the money is a rounding error and not worth that expenditure of time.

But what if instead of a painting you just got a digital deed of ownership and you could just put the deed up for auction and only have to click a few buttons instead of negotiating and dealing with delivery? Now it might be the case that you’re just so rich that this isn’t worth even a few clicks, but for the vast majority of us this is now a situation where you get something for nothing. Of course there is also uncertainty about how much money you’ll get for it: maybe the sticker price is not indicative at all of what you can sell it for. All it really tells you is what somebody paid for it or what people think people are going to pay for it, not a guarantee of what somebody will pay you for it. On the other hand, if you got a deed representing ten ounces of gold plugged right into your trading account, this is not nearly as much of a problem: trading is so frequent and there are so many buyers and sellers that you can almost always instantaneously and effortlessly sell it for something close to the sticker price.[1]

In this hypothetical scenario it might be a free lunch either way, but now consider the situation where you’re not given it for free but you can buy it at a discount: with the painting, that’s not a guarantee of anything; it’s hard to predict what you can sell it for because each painting is different, a person who might buy one painting won’t necessarily buy another, and a single painting can’t change hands very quickly, so for all you know you might not be able to sell it at all. Gold, by contrast, is a commodity: it’s interchangeable, you can buy and sell just about any quantity you please, and people are buying and selling gold worldwide every minute, so if you have some option that allows you to buy gold for $1,500 per ounce and the sticker price is $1,600 per ounce, not only do you have the opportunity of making a profit by exercising that right, but even if you don’t have the money up front to do the buying, you can just sell the option to someone who has the cash in the bank to buy the gold and sell it for a profit, at a price that’s almost exactly the same as the profit you’d make doing it yourself.
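To make the option arithmetic concrete, here’s a toy sketch in Python (the $1,500 strike and $1,600 sticker price are the numbers from the example above; the ten-ounce quantity and the 2% cut for the option’s buyer are made-up illustration values, and fees, spreads, and price movement are ignored):

```python
# Toy arithmetic for the gold-option example above (hypothetical numbers).
# Assumes a liquid market; ignores fees, spreads, and price movement.

strike = 1_500         # price per ounce the option entitles you to buy at
sticker_price = 1_600  # current market price per ounce
ounces = 10

# Path 1: exercise the option, then immediately resell the gold.
profit_if_exercised = (sticker_price - strike) * ounces

# Path 2: if you lack the up-front cash, sell the option itself. In a
# liquid market its price converges on that same intrinsic value, minus
# a small cut for whoever does the buying and selling (2% is made up):
buyers_cut = 0.02
option_sale_price = profit_if_exercised * (1 - buyers_cut)

print(profit_if_exercised)  # $1,000 of free lunch
print(option_sale_price)    # slightly less, but with no capital required
```

Either way the sticker price is doing real work here: it tells you, almost exactly, what the option in your hands can be converted into.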

The market for gold, in other words, is more liquid. To get a feel for this metaphor, think about actual solids and liquids: with a solid object, you can either fit it into a container or you can’t, whereas with a liquid you can pour part of the liquid into a container; and if you want to fit a solid through some entrance, you better hope it fits, whereas a liquid can always go through a pipe a little bit at a time no matter how narrow. Similarly, if you’re selling an asset in a liquid market, it’s not an all-or-nothing thing: unlike a painting, where you can’t just cut it up into pieces and sell pieces of it, you can sell whatever fraction of your gold you want (up to a point, but you get the idea), and because of this the gold is much more immediately interchangeable with money (and other liquid assets) than something like a painting. It’s also important to note that because of this, the sticker price is a much more accurate indicator of what you can sell the asset for because if someone is willing to pay $1,500 per ounce for up to four ounces of gold and you only have three, you can still sell those three ounces for $4,500, whereas if a factory is looking for some specific contract, they might have no use for your goods unless you can give them everything they need. And because you know much more reliably what you can sell something for, that means you truly can identify whether a given set of trades would unequivocally make you a profit, thus opening the possibility of arbitrage: finding a loophole that gets you a free lunch.

Of course, no market is perfectly liquid: there’s always a limit to how small a piece you can divide things into, so it’s less like turning solids into liquids and more like grinding them up into increasingly fine grains of sand: for example, you can’t buy half a share of a stock; not directly at least, but in many cases there exist special contracts and/or arrangements of assets that one can own, also known as financial derivatives, such that it’s as if you’re trading something like a fraction of a share of a stock or some other exotic beast, and these contracts and juxtapositions further increase the liquidity of the market. The creation and utilization of these derivatives is the expertise of traders: although many people think of traders as people who sit there trying to predict what the price of something will be in the future, their actual bread and butter is seeking out arbitrage by creating, exchanging, and juxtaposing various financial derivatives as a way of effectively buying something for less than its sticker price and then selling it for more than what they had to spend but less than the sticker price; whether the thing being arbitraged is a stock or commodity or itself a derivative (and those derivatives don’t necessarily have simple assets as their underlying subject matter; they could just as well be derivatives of derivatives). The process is in many ways no different than a soap company finding a way to make soap for cheaper and then making a profit by charging less money than its competitors.

And whereas in the soap example there’s a very tangible and obvious real-world benefit (cheaper soap for us), there is another function to it in the financial world that might not be concretely beneficial in daily life but is nonetheless essential for the market itself: it makes the market more liquid, both by making the asset (or derivative) more available to interested buyers and by selling it at a price more people are willing to pay. Even in the most liquid markets, there will be at least a small difference between what you can buy something for and what you could immediately sell it back for (the bid-ask spread), because if the amount people were willing to pay were the same as the amount people were willing to sell for, then the sale would have already happened and that price wouldn’t be available anymore. But the more liquid the market is, the smaller those differences will be, and the more accurately and reliably you’ll be able to ascertain what you could immediately sell something for, which in turn is what gives it value: not because the price is some magical gauge of its contribution to society or of what other people subjectively think of it, but because you would without any ambiguity prefer it to something cheaper, since even if you otherwise liked the cheaper thing better you could just sell the more expensive thing, buy the cheaper thing, and pocket the change. So thanks to traders pushing prices together like this and making it easier to buy and sell things on a dime, you can more and more objectively know what something is worth insofar that the price gives you the information to know if, when, and how it’s possible to get something for nothing.

All of this also gives us a very clear idea of what makes somebody “rational” in the context of the market: it doesn’t matter what your feelings or subjective preferences are, it would not change at all what the correct move to make is because if A is more expensive than B then it follows that anything B can do A can do better. This is markedly different than outside the market where A might have one function and B another function and you can’t just compare them because one function might be more important to you than another for whatever reason. Nor does it matter if nobody in the real world is a perfect calculator of profit maximization: so long as just one person picks up free money whenever it’s lying around, the market behaves as if it were populated by genetically engineered Ferengi-Vulcan-Borg hybrids.

The Perils of Probability: Utility Theory vs. The Kelly Criterion

But like I said earlier, there is no perfectly liquid market: a market in which you can immediately resell an asset for the price you would have had to pay, where there are no sudden fluctuations and you can exchange things at any level of granularity you want, is a mathematical fiction. In the real world, there’s always some degree of uncertainty, some chance that the price changes between when you buy and when you sell, some transaction that doesn’t go through, someone selling to the highest bidder before your order comes through and so on. So when can anything be said to be a free lunch? Surely, if there were a bet where there was a 99% chance of getting a million dollars and a 1% chance of losing $100, just about anyone would take the bet, but what about something less obvious: what would you do if you could take a bet on a fair coin where if it lands heads you win $100,000 but if it lands tails you lose $50,000? On average, you win money, but most people don’t have $50,000 lying around and, even if they did, losing that kind of money would probably be debilitating. So it doesn’t seem all that “rational” to just take the bet.
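The “on average, you win money” claim is just an expected-value calculation; spelling it out for the bet exactly as described above:

```python
# Expected value of the coin-flip bet above:
# fair coin, heads wins $100,000, tails loses $50,000.
p_heads = 0.5
win, loss = 100_000, -50_000

expected_value = p_heads * win + (1 - p_heads) * loss
print(expected_value)  # 25000.0: positive on average, ruinous if tails hits
```

A $25,000 expected gain, and yet the number alone says nothing about whether you can survive the downside, which is exactly the point.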

Even though this whole essay is beating up on utility theory, I will admit that it offers a reasonably intuitive explanation for why it’s rational to pass up the bet: the first $50,000 in your bank account is much more important to you than the next $50,000, so it has higher utility. But instead of falling back on an abstraction, ask yourself why this is the case. Is it some kind of psychological pain? Heck no. Think about what having $10,000 instead of $0 in the bank means: it means you can get groceries whenever you need to instead of waiting for payday, it means that if your car breaks down you have spare cash to get it fixed right away and without going into debt, it means not having your interest rates skyrocket from accumulating credit card bills, it means being able to go to a reputable bank instead of a loan shark. The most important reason why it matters so much is that having no money in the bank is extremely precarious, and because of all the ways you can get wrecked once you lose that cushion, it might become very difficult if not impossible to reverse the damage.

And this doesn’t just apply to extreme cases like this bet: in the long run, losing too much money too fast makes it harder to gain money, and this concept, not utility, is all one needs to understand why having an edge doesn’t necessarily make a bet rational. To get an idea of this, one can play a simple game. You start with a dollar, and a coin will be flipped 10 times. The coin is weighted with a 60% chance of coming up heads on a given flip. You get to choose before the start of the game what percentage of your money will be bet on heads each flip. Now if you choose 100%, you’re pretty much guaranteed to go broke, because if the coin lands tails just once you lose everything and have nothing left to bet with, but you also don’t want to be too conservative and miss out. Without getting into the formal math, there is a formula called the Kelly Criterion that gives you the optimal bet, which in this example would be 20% of your money. Why not more? Because even if you don’t lose all of your money, betting a higher percentage results in having less money available for the next flip to bet with, a phenomenon called volatility drag.[2]
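For an even-money bet like this one (you win or lose exactly what you stake, with probability p = 0.6 of winning), the Kelly fraction reduces to a simple closed form, f* = p − (1 − p) = 0.2, and you can sanity-check that numerically by maximizing the expected logarithm of wealth per flip. A minimal sketch, not a general Kelly implementation (the general formula also handles uneven payoffs):

```python
import math

p = 0.6  # probability of heads (a win)

def expected_log_growth(f):
    """Expected log of wealth per flip when betting fraction f on heads."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Closed-form Kelly fraction for an even-money bet: f* = p - (1 - p).
kelly = 2 * p - 1
print(round(kelly, 10))  # 0.2

# Numerical sanity check: scan candidate fractions from 0% to 99%.
candidates = [i / 100 for i in range(100)]
best = max(candidates, key=expected_log_growth)
print(best)  # 0.2
```

Betting more than 20% still has a positive edge per flip, but the expected log growth drops, which is the volatility drag the footnote refers to.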

One thing that’s quite remarkable about the Kelly Criterion is that it is not sensitive to the order in which things happen: it doesn’t matter in which order the coin flips heads and tails, only the number of heads and tails that come up; this would not be the case if you were betting an absolute amount of money each time instead of a percentage of your money, because too many tails in a row would bankrupt you, depriving you of the ability to make bets in the future. This type of probability that’s sensitive to order is called time probability, distinctly different from the kind where all that matters is the total number of times one outcome comes up as opposed to another, which is known as population probability. The Kelly Criterion, by maximizing the exponential rate of growth as opposed to growth in absolute terms, manages to spin time probability into population probability like straw into gold; and because it works in percentages and not absolute amounts of money, it also means that how much money you’re starting with at any given point is irrelevant.
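The order-insensitivity is easy to verify for yourself: betting a fixed fraction f of your wealth, the result after H heads and T tails is the starting wealth times (1 + f)^H (1 − f)^T no matter how the flips are ordered, whereas betting a fixed dollar amount is path-dependent. A quick sketch using the 20% fraction from the game above (the two flip sequences are arbitrary examples with the same head/tail counts):

```python
f = 0.2  # fraction of current wealth bet on heads each flip

def play(wealth, flips):
    """Run a sequence of flips ('H' = win, 'T' = loss), betting fraction f."""
    for c in flips:
        wealth *= (1 + f) if c == "H" else (1 - f)
    return wealth

a = play(1.0, "HHHHHHTTTT")  # six heads, then four tails
b = play(1.0, "THTHTHTHHH")  # same counts, shuffled order

print(a, b)  # identical up to float rounding: 1.2**6 * 0.8**4
```

Because each flip multiplies wealth by a factor, and multiplication is commutative, only the counts matter; a fixed-dollar bettor gets no such guarantee, since a long enough run of tails leaves nothing to bet with.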

And this allows one to easily define “rational” behavior in the marketplace without recourse to utility theory: whenever there’s an opportunity to make a bet that increases one’s rate of exponential growth, there’s free money on the table. Of course, in the real world, there are limits to how much one can divide up one’s money and bets, but this is yet again just a matter of liquidity: the more liquid a market, the more fine-grained one’s bets can be, and the more opportunity exists to maximize one’s rate of return, once again demonstrating why liquidity is what allows one to speak of value. So even when outcomes are not entirely certain, one can still identify ways in which “something for nothing” still exists, and by extension how things can have “value” not dictated by some bolted-on concept like “utility”. And as always, one can assume that everyone is a perfect maximizer of exponential growth even though this is true of absolutely nobody, because as long as somebody grabs free money when it’s lying around, the market still works as if this were true.[3]

You may be asking at this point why this should be a total replacement for utility theory if there are still other reasons why someone may want to do something like pull out their money and buy a yacht, since now there would be an opportunity cost to staying in the market to grow their returns (as opposed to the deterministic scenario, where one would just take the free money and then buy the yacht). There is, however, a subtle issue with this objection: when I define value, I’m talking about it within the context of the market, so something has value insofar that it allows one to maximize their returns in the market. And at this point you may be asking “but what if they don’t care about maximizing their returns?” Great question; the answer is this: it doesn’t matter, because market rationality is not normative, it is explanatory insofar that it accounts for the collective dynamics of how the market works, and so even if somebody decides they don’t care about maximizing their returns, that opportunity to do so will be grabbed by somebody else, and that will make the whole market work such that any free lunch with respect to increasing one’s compound interest will be taken by somebody; and as I pointed out before, as long as there’s somebody taking free money whenever it’s on the table, the market works no differently than if everybody were a perfect wealth maximizer.

But What About Extreme Events and Unknown Unknowns?

This section is a philosophical aside for people who may understandably have qualms with how the last section treats probabilities as if they’re just handed to you. If you’re satisfied with the preceding section as a debunking of utility functions, you’re free to skip this one. It’s also more speculative anyway.

Now there’s an obvious question to this whole thought experiment involving the Kelly Criterion: we don’t know the true probabilities of anything, so how can we talk about someone being “rational” under any kind of uncertainty if it has to do with maximizing the Kelly Criterion? First things first: if this is the complaint, then utility theory, as opposed to the Kelly Criterion, doesn’t help you at all, because if one doesn’t know the probabilities then one couldn’t maximize expected utility anyway and thus couldn’t act “rational”. And, as I said earlier, the market behaves as if everyone were “rational” even though no one individual actually can be truly rational, let alone at all times. As such, any definition of rationality that applies to an individual can never be something purely mathematical; it will always come down to evaluating the particulars of a situation and having an answer that works well enough even if it can’t be plugged into a mathematical model or sweeping philosophical generalization; rather than being any kind of flawless decision making, rationality in this layman’s sense is much better described by what David Chapman calls Mere Reasonableness[4].

This systemic idea of “rationality” in the marketplace doesn’t even require that anyone be conscious of what they’re doing, all that matters is that somebody is doing it, because that’s all that matters for thinking about what the market is doing. And of course, in real life, nobody “knows” probabilities because they don’t actually exist at all. They certainly don’t exist as frequencies because if that were true it would be impossible to apply probabilistic expectations to the magnitude of events, and they don’t exist as “degrees of belief” in one’s head either because for one thing, names and labels don’t just float around in our skulls like some homunculus, but more importantly probability doesn’t mean anything if it can’t do anything, and it can only do something if you can update your priors in such a way that they converge on some reliable result; so if you say there’s a 2% chance aliens attack the Earth in the next century, it’s just an empty gesture with about as much semantic content as saying the weather tomorrow is going to be partly pointy with an increased chance of Porky Pig.

So in the case of the market, probabilities sometimes exist: specifically, you can infer their “existence” when you can find some pocket of order such that you’re able to make bets that achieve exponential growth. But then there’s the big question: can’t this change at any time, and if a long string of exponential growth were wiped out, doesn’t that mean that these probabilities were bullshit and that there was no long term rate of exponential growth being maximized by anyone?

My unqualified answer to this is that extreme events are not about probability: probability always exists within some ludic situation, and such games emerge from pockets of stability that are mined by those who figure out how to do so. And this goes back to my earlier section about arbitrage creating liquidity: the activity of arbitrage creates liquidity, which in turn creates more opportunities for arbitrage; except now apply this logic to the creation of tractable probabilities and one can see that probability is not something that existed in the ether or between anyone’s ears and was then applied to making money but instead the opposite: a kind of pattern that emerges from playing with currency that was later recognized and formalized into its own axiomatic framework.

So economic rationality is not so much about “the market” as one monolithic thing as it is about its relationship to this or that game that allows one to maximize exponential growth within it for as long as possible. Nor would one be truly infallible even if they knew the “true probabilities” of the “entire market”: you can win all the big bets on tail events you want but if there’s a solar storm that wipes out the electrical grid or a violent revolution that expropriates all your wealth and sends you fleeing, then maybe all of your probabilities were bullshit and you weren’t maximizing anything in the long-term; there’s simply no level at which probability exists independently of radically contingent boundary conditions, not least because the market itself, the cradle of probability, is contingent on stable institutions built up over centuries of accumulated political and technological flesh. The decision of when to leave this or that game before it’s too late or whether to play it at all or whether to try to bet against it in some way is always a question of reading the situation and making a true gamble for which no mathematical formula or any other kind of prescriptive advice can tell you what to do, of the kind of nebulous individual-level “rationality” that has nothing to do with homo economicus. Such incomputable dilemmas are where epistemology ends and art begins.

Not Everything is a Market

And this brings me to being able to clearly articulate the central fallacy of utilitarianism: you can’t quantify value without a market, and not everything is a market. There’s most definitely been a crypto-panpsychism of sorts that reduces everything to economics, and I myself bought into it when I was younger, but exchanging things and negotiating terms does not a market make; a market is something that renders things truly fungible and in doing so allows people to trade and acquire things they need and want with a minimum of friction. The idea one sees in economics textbooks, that currency arose because people previously had to barter for everything, is logically absurd. I appreciate David Graeber’s anthropology on the subject, but a simple application of common sense is sufficient to see the problem: how would anyone get anything done if they had to trade a chicken for a sack of potatoes and then trade that for a bucket of paint and then trade the bucket of paint for a set of candles? Any large-scale practice of regularly exchanging private property requires a mode of economic coordination that can only be achieved through the gradual construction and evolution of some real-world medium that makes it possible for people to reliably understand what comes out when they put something into it.

Without these specific conditions, the allocation of resources is going to be based on some mixture of autarky and personal (or familial) reputation with a minimal amount of any kind of specialization, and this was by and large the case until very recently. Prior to the industrial revolution, “the market” in some sweeping sense of the term was peripheral to the lives of most families and the economies of most towns, and it was only with the ascent of standardized goods with replaceable parts that the physical allocation and exchange of goods could become liquid enough for markets to become an objective indicator of value for those who were part of them.

But even then, not everything marked to market has some kind of objective “value”: in the absence of liquidity, the price of something doesn’t tell you what it’s worth in any actionable way; it’s just some marker of what someone paid once upon a time or what some expert thinks someone might pay for it, none of which is any guarantee. For example, a company might be willing to pay more to a programmer with five years of experience than to one who’s just starting out, but this is not a liquid market like some commodity: what kind of productivity they’ll get out of that worker is inevitably a qualitative question, and no matter how confident they feel that hiring them at a given salary will result in positive cash flow, it’s something that will happen over time and can’t just be instantaneously sold away at a moment’s notice; so ultimately the programmer’s salary has nothing to do with any kind of objective measure of value and is the result of an educated guess made against a backdrop of fundamental uncertainty.

And there’s nothing wrong with that: people and organizations for the most part have a right to decide for themselves whether or not something is worth the risk; the problem is when any potentially harmful effects of deregulation or negligence are dismissed with a simple incantation of “that’s just the market aligning supply and demand”. Supply and demand, despite being an intuitive idealization, does not mean as much as one thinks: we’ve already established that the price of something can only objectively indicate value to the extent that there’s sufficient liquidity, that objective economic value is fundamentally linked to arbitrage, so if demand for something is hot that doesn’t necessarily make the price of it invalid, but to call it an assignment of value would be to contradict the objective idea of value that does not care for either personal preferences or subjective prognostications.

Unless, of course, we’re talking about a liquid market, in which case supply and demand will hew to these rules of value because any demand that is misaligned with the sticker price of the goods would be inevitably exploited and indirectly corrected by the actions of arbitrageurs. All of this translates very plainly to what we see at a glance in the real world: the parts of the market that are liquid are the boring parts where prices barely move at all because any deviation is pretty much immediately ironed out, whereas we see prices go all over the place during market bubbles and crashes, when people are chasing the prospect of future riches or running from the threat of future doom. The actual arbitrage happens at various cleavages between more and less liquid markets, where there’s a sweet spot between order and chaos, where these acts of arbitrage slowly tame new parts of the market and simultaneously open up chaotic new areas of commerce. But I digress.

I bring all this up not because I’m trying to say this or that price is invalid because bubbles are “irrational”, but to explain that markets are tools with a variety of functions, and the specific function of equilibrating supply and demand can only be carried out under the conditions of liquidity that make them capable of objectively assigning value. But even then, this objective value is applicable only with regard to the market itself; it is not a universal definition of value, but a specific one that is objective insofar that it calibrates the market, the same way that, whatever one thinks of whether a computer program is ultimately good or bad, there’s still an objective way to say whether the underlying code works. Whether one should use a well-calibrated market to allocate goods is a question that can only be answered from a vantage point outside the marketplace, but even if the answer is “yes”, it is sophistry to assume that all markets are fundamentally about reallocating resources in order to optimize “value”. Unfortunately, the fiction of utility has obscured our view of what makes a market work, and of the fact that whether a market works is a different question from whether it’s a good idea in the first place.

All Models Are Sociopathic, But Some Are Useful

None of this is to say that less liquid (and consequently more speculative) markets are useless or don’t work at all; they just work differently and have nothing to do with value, and to dismiss them as nothing more than frivolous casinos is just a different side of the same coin. Where neoliberalism fetishizes markets as supernatural computers that can crunch the numbers and tell you the “value” of everything, technocrats build their own solipsistic mental models of how markets, or any other way that people choose to organize themselves, are supposed to work based on presumptions of what other people ought to want, only to find participants doing things that look “irrational” to them. This leads them to presumptuously declare, on account of their own myopic rationalizations, that this or that kind of voluntary exchange shouldn’t be allowed, blissfully ignorant of their own lack of context or ability to anticipate emergent downstream effects.

That is not to say that all commercial activity should just be accepted (we can all think of possible “markets” that are heinous and should never be allowed), just that calling some activity “irrational” just because you don’t understand it or see a use for it completely disregards the fact that people value things insofar as they perform some kind of function. To say that something just has a given number of utility points instead of simply looking at why people value it (or at least understanding that there is something specific they get out of it) is to ignore the territory in favor of some infinitely less nuanced map and develop a deep contempt for the countless extant relationships that define other peoples’ subjectivity. Going back to the example of the hammer and the screwdriver, it’s absurd outside of the context of money to say that one has more “utility” than the other because they serve entirely different functions, and they are apples and oranges so long as there is no fungibility between them. Now at this point I can hear muffled screams of “things have a certain utility based on the situation!”, but this is completely self-defeating: if all “utility” is specific to the situation, then that’s indistinguishable from everything being a matter of function, and it makes no sense to think of utility as some number to be arbitrarily maximized over the long term.

Unless, of course, one means that utility actually amounts to some kind of “happiness” that everything boils down to, and now we’re seriously in the deep end: not only does this assume that “happiness” is some kind of simple quantity, and not only does it falsely assume that we can measure this “happiness” (through either brain scans or surveys or some other kind of thinly veiled phrenology), it also implies that everyone is, deep down inside, some pig rolling in their own shit, er, rational pleasure maximizer.

Of course, this is not what makes people tick: was Achilles or Hector attempting to max out their pleasure receptors when they died in battle? Was Isaac Newton or Rosalind Franklin just trying to live laugh love their best life? No, people contextualize their actions according to what Bernard Williams calls “thick subjectivity”: an affective background against which the significance of one’s actions is thrown into relief.[5] As Charles Sanders Peirce could tell you, the meaning of any sign is dependent on its context (interpretant), and only those two things considered in tandem yield any kind of identifiable “object”. Our own actions and values cannot be understood outside of the story that each of us is living, and we evaluate our own decisions not on some disembodied number but on what they do to help us move that story to some kind of satisfying conclusion.

And by story, I don’t mean some subjective interpretation of events that just floats around in one’s head (whatever that would mean anyway) or gets declared by fiat with idle words, I mean an actual ongoing unfolding of events that provides necessary context for our actions. A hammer means something to me because I need to nail some pieces of wood together to finish making a bookshelf, because there is some kind of ongoing project (another term used by Williams) that I am currently living out. Any rush of dopamine or serotonin or magic feel-good juice I get from accomplishing it is just the internal workings of some navigation system and is not the end goal any more than my GPS saying “you’ve arrived” is the end goal when I drive to my friend’s house.

It’s not merely incidental that you can’t achieve “happiness” by any reasonable definition simply by punching in some cheat codes, or that all the drugs, Netflix and casual sex in the world will eventually just burn out your receptors and leave you even more vulnerable to excruciating existential dread: your brain is built that way because otherwise you’d be completely incapable of any kind of learning, planning, or doing anything other than interminably tickling yourself; you’d just sit there and die, and once you die that’s the end of your story and you’ll find no function in anything.

And even if you’re fine with that, it goes without saying that there are other people who exist besides you. A narrative can be a strictly personal thing, but that’s not by any means always the case: a narrative, being an ongoing sequence of events, is not necessarily confined to a single person, and people end up playing many different roles in many different stories, each one a project that they contribute to in some way, shape or form. To put the cart before the horse and say that the end goal is some kind of emotional state rather than the endeavors that define the meaning of our emotions ultimately amounts to solipsism. This is not something that can be trivially dodged by saying “well I’d be considerate of other people’s happiness too! I’m not a monster!” or “well in the end I wouldn’t make people miserable because then I’d be miserable”, because the only way you can actually be considerate of anyone else’s happiness besides your own is to actually engage with the concrete realities that define other people’s subjectivity, that is, to willingly play a part in narratives that are bigger than you (and that can be something as simple as having general principles and sticking to them even when they’re inconvenient) instead of treating other people’s subjectivity as some black box.

Any concept of ethics that simply axiomatizes our emotions in lieu of considering the world in which they are embodied is one that has absolutely no way of providing any kind of practical or actionable guidance, as it says absolutely nothing about the potential consequences of a situation or about how to understand and respect the subjectivity of others. The only thing it could tell us is that it’s probably worse if ten people die than if one person dies; really deep nuanced stuff here. Add even the slightest perturbation to the mix, like, say, having to push someone onto the trolley tracks to save five people, and the astrologers will run around in circles like headless chickens making the same arguments and counter-arguments over and over again while the world, if for no other reason than sheer necessity, continues making calls on how to deal with ongoing situations instead of waiting for the Turing machine to halt.

And as long as we’re speaking of the trolley problem, this is a perfect example of why ethics is always about the texture of a situation and not about calculating on some abstraction: would you want to live in a society where it’s okay to push someone in front of a train so long as it saves lives? Would you want to be anywhere near the kind of person who reserves the right to be judge, jury and executioner if their calculations tell them it’s a net positive for society? “Well but you see that doesn’t matter, technically speaking it’s the same as if you were the conductor and had to pull a lever--” no, it’s not. It’s the same according to a specific model, but that’s one hundred percent the problem: models are crude approximations of reality, they work (when they work at all) by deliberately omitting 99.9999999...% of the real world and can therefore only be used as special tools for narrowly defined problems and not as almighty oracles.

“But Alex, that’s your model; what if I disagree?” Yes! You are absolutely free to disagree for whatever particular reasons you have. That’s my whole point: every situation has endless particulars to it and there’s no getting around the need to read the situation according to its specifics, and we may very well end up having to explain to each other why we see it the way we do and figure out some way to agree on what to do about it. We inevitably build models, mental or otherwise, in order to simplify and communicate difficult scenarios, but how we construct that model is something everybody has to do and the choices we make when doing so cannot be reduced to an algorithm. Does this mean I’m just suggesting all morality is a matter of knee-jerk intuition or that morality is purely “relative”? No! Just because I’m saying there are no shortcuts around abduction, it does not follow that it’s perfectly valid for people to just decide something’s right or wrong because that’s how they feel or that’s what society says; that’s just another way of ignoring that subjectivity is embedded in our actions and interactions in the real world and not some walled-off movie theater. What I’m saying is that if you attempt to reduce the moral import of everything to some number, you ultimately end up obstructing your view of the world’s nuances with a map of such coarse resolution that it amounts to little more than a facsimile, and in doing so deprive yourself of essentially all of the information necessary not only for practically dealing with the endless complexity of the real world but also for respecting and engaging with the subjectivity of others.

Function All The Way Down

In some ways it might seem like bad but ultimately harmless philosophy to assume some metaphysical backdrop of utility: after all, it’s common sense that in the absence of any other knowledge it’s worse to hear of ten people dying than of one person dying, or that when deciding how to handle a disaster you generally want to do what will save the most lives, and when more complicated scenarios arise people inevitably end up thinking about and debating the particulars anyway, so what’s the harm if someone posits that the goal ultimately is to maximize “utility” but that in the real world we can’t achieve that by using naive first-order reasoning?

The problem is that believing that ethics is about getting the most bang for your buck inevitably invites such reasoning, as I’ve seen recently in the way that people have framed the debate about how to handle the current pandemic burning through the United States. Now, before I continue, there are very important specific points that I acknowledge: if people can’t get some kind of paycheck, whether from a job or elsewhere, they will starve, and if too much of a nation’s economic machinery shuts down it will make it more difficult to do things that would prevent the spread of the disease. I could go on and on, but I’m just trying to intimate that the argument I’m making is not a purely deontological one.

The issue, however, is that we’re being told by many to more or less “weigh” the economic “value” we’re losing from economic shutdowns against the lives we’re saving from enacting shutdowns; an argument similar to ones I’ve heard dismissing the potential severe risks of unrestrained transgenic engineering by saying we have to weigh the risk of catastrophe against the potential “value” these magical crops provide. The most glaring issue with looking at things this way is that it treats certain things as causally independent when they’re clearly not: people getting sick in droves and overwhelming healthcare systems, not to mention traumatizing future generations and destroying faith in our leadership, is not exactly a driver of long-run economic prosperity, and the issue with catastrophic risk is the one I talked about earlier: if you lose everything, then you have no means to win anything back.

At a glance, this looks like all one would need to do is recalibrate the utility calculations to consider some overlooked interactions and then re-run the math, but the very failure to consider these interactions stems from haphazardly slapping quantities of “utility” on things to begin with instead of interrogating their causal and functional connections with everything else. One cannot fix this problem by accounting for “both” the effects that one thing has on another and the inherent utility of a given thing because there isn’t such a thing as inherent utility: all utility is a matter of function.

But what about death? What about pain? Yes, these things are bad, very bad, but one doesn’t need to give them scores to actionably understand this. Death is not a mere “disutility”: being dead means you’re no longer in the game; nothing can have any function, nothing can move your story forward, once you’re dead. As for pain? I don’t need a number to tell me that I shouldn’t do something to cause someone to yelp in pain, or that mass immiseration is a thing to be avoided. Pain is bad because it’s by definition the opposite of what anybody wants, and people value things insofar as they fulfill specific desires. Acknowledging the pain of others comes from respecting their desires and understanding that when somebody says “stop, you’re hurting me” you don’t say “prove it.” Absurd thought experiments such as the “utility monster”, a being that ostensibly feels pain a million times more intensely than the average human being, come from misguidedly thinking that pain is some intangible substance that just sits there inside of someone rather than an inseparable facet of the fundamentally embodied processes of striving that constitute our subjectivity.

Doesn’t this just mean that we still more or less need to find a way to minimize the total pain that the current disaster will cause in the long run? Does it not ultimately come down to a specific kind of utility calculation? There’s no denying that when one is dealing with a problem that affects a whole population of people, numbers are going to come into it, because the problem at hand literally involves multiple people and different numbers of people will suffer depending on what course of action is taken. All that being said, comparing numbers of human casualties, be they fatalities or instances of permanent morbidity, to the economic “value” we could miss out on callously assumes a simple interchangeability between human lives and dollars based on some hand-wavy amount of “utility” or “happiness” people get out of some generalized idea of “economic growth” instead of specifying what people get out of different kinds of economic activity.

Restaurants making money from people dining in might be part of economic growth, but what the hell kind of “value” does a situation where people crowd together while food passes between tons of different hands provide when it could easily kill customers and anyone those customers later come into contact with? Yes, people absolutely need to make ends meet, and there are arguments to be made for jobs programs, but to suggest that we reopen the restaurants instead of finding some other way to get money to those in economic precarity is like suggesting you drop money from a helicopter but the banknotes have infectious pathogens slathered all over them.

This kind of lunacy is exactly what you get when you assert by fiat that you can indiscriminately compare the alleged “value” of two things, and it’s the same kind of logic as thinking that it makes sense to make a bet, no matter how big, so long as the odds are technically in your favor, even if losing it would mean catastrophe. As I showed with the example of uncertainty in the marketplace, the solution to figuring out whether to make such a bet is connecting it to the effect the bet will have on one’s future ability to utilize advantageous bets. Done right, this will tell you the value of any such bet in simple interchangeable terms because each of them unequivocally adds to or removes from a single number that represents not some arbitrary variable conjured up from thin air but the actual relevant consequences of one’s bet.
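To make this concrete with a hedged sketch (the 60/40 coin and even odds are my own illustrative numbers, not from the essay): a bet can have a positive expected value at every stake size and still ruin anyone who repeats it, because what a single gambler actually experiences is the compound growth of one bankroll over time, and the Kelly criterion simply picks the stake that maximizes that one number:

```python
import math

# Hypothetical favorable bet: 60% chance to win, paid at even odds.
# Betting a fraction f of the bankroll each round means:
#   win  -> bankroll *= (1 + f)
#   lose -> bankroll *= (1 - f)
p = 0.6  # probability of winning

def expected_value(f):
    # Single-round expected gain per dollar of bankroll:
    # positive for every f whenever p > 0.5
    return p * f - (1 - p) * f

def log_growth(f):
    # Time-average (compound) growth rate per round of repeated betting
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Kelly criterion for an even-money bet: f* = 2p - 1 (here 0.2)
kelly = 2 * p - 1

for f in (0.1, kelly, 0.5, 0.9):
    print(f"f={f:.1f}  EV per round={expected_value(f):+.3f}  "
          f"compound growth={log_growth(f):+.4f}")
```

Running this shows that at f=0.9 the per-round expected value is still comfortably positive while the compound growth rate is sharply negative: the “value” of the bet is inseparable from its function within the ongoing sequence of bets it belongs to.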

While you will never see such simple and clean logic outside of markets and gambling, the same logic can be loosely applied if you stop hallucinating arbitrary orderings imposed on qualitatively different things. One can, for example, understand that there are certain parts of the economy that need to continue running: if food stops being produced people will starve, if factories don’t make masks there will be less ability to keep people from getting sick, and if people get sick in greater numbers it will not just hurt those people but also our society’s ability to provide for itself. I’m not saying it’s this simple, and I am not plugging any specific economic agenda, only trying to show that economics is always about the interlocking constraints and affordances of systems and how these systems serve our needs, and that these things need to be identified instead of just hand-waved away with some unqualified pretext of “value”. To simply state that “the market” distributes goods according to how people subjectively value things is at best a grave misunderstanding and at worst a paternalistic and coercive apologia for unconditional unbridled consumerism at any and all costs.

And it’s not just individual lives that are on the line: what happens in the coming months will determine not just how many people suffer and die but also the fate of numerous ongoing collective projects that will determine what kind of world we leave for ourselves and future generations. When those in power, whether in politics or big business, decide it’s just time to surrender to the virus and not make any kind of sacrifice, they’re actually just running away from a pack of tigers and pushing people to the ground so the tigers feast on those people instead. If, as a country, or for that matter as a world, we decide that this is acceptable and simply look on in horror as millions of people die what may have been preventable deaths, it will cause potentially irreversible damage to a shared understanding of one another’s humanity that’s been emergently engendered over countless generations and acts of courage and generosity, leaving us all in a state of shock even as we hear the nightly news tell us yet again that the monthly happiness and prosperity metrics have reached record highs.

Further reading

For further elaboration on the workings of the Kelly Criterion and how it connects the theoretical underpinnings of finance, economics, and probability, you can’t do better than Red Blooded Risk: A Secret History of Wall Street by Aaron Brown; and if you’re looking for a more formal and technically rigorous explanation of why maximizing compound growth is a much more theoretically sound alternative to economic utility theory, check out The Ergodicity Problem in Economics by Ole B. Peters. As for understanding the idea of how traders are in the business of creating the market, a difficult but extremely worthwhile read is The Medium of Contingency by Elie Ayache.

As for my statements about understanding things as having function and no variables independent of the concept of function, this came from a hodgepodge of philosophical frameworks, but notably: Jane Jacobs’ The Nature of Economies, the collected essays of Charles Sanders Peirce, Spinoza’s Ethics, and Robert Rosen’s Life Itself.


[1]Of course, in the real world, it’s never so simple: as I write this, the pandemic has already caused some oil contracts to have negative prices. The reason why this happens is that when you own these contracts, you’re technically on the hook to accept delivery, which is costly since you can’t just have Amazon deliver a few hundred barrels of oil to your apartment.

[2]If you’re interested in a basic technical explanation, consider a fair coin being flipped where you bet 20% of your money each time and start with $100. If it flips heads first and then tails (or the other way around) you gain $20 but then lose $24 (or lose $20 and then gain $16). With a fair coin of course, you should just not bet at all; the Kelly Criterion can only help you exploit an edge in your favor.
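For the curious, this arithmetic can be checked in a few lines (a minimal sketch using the footnote’s own numbers: a $100 bankroll and a 20% stake on each flip of a fair coin):

```python
bankroll = 100.0
frac = 0.2  # bet 20% of the current bankroll on each flip

# Heads then tails: gain $20, then lose $24 of the enlarged bankroll
after_win_then_loss = bankroll * (1 + frac) * (1 - frac)

# Tails then heads: lose $20, then gain only $16 on the shrunken bankroll
after_loss_then_win = bankroll * (1 - frac) * (1 + frac)

# Either order multiplies the bankroll by (1 + f)(1 - f) = 1 - f**2 < 1,
# so on a fair coin every win/loss pair grinds the bankroll down to $96
# no matter which outcome arrives first.
print(after_win_then_loss, after_loss_then_win)
```

Multiplication commutes, which is precisely why the order of wins and losses doesn’t matter here and why the damage compounds instead of cancelling out.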

[3]Although I derived most of this conclusion in bits and pieces informally over the years, I have to give credit to Ole Peters for his foundational research on ergodicity in economics, since it was a huge help in snapping this all together in a very simple and clean way. More information on the works by him and other writers that helped engender the ideas in this essay can be found in the “further reading” section.

[4]https://meaningness.com/eggplant/accountability

[5]Elizabeth Povinelli, Economies of Abandonment