In Genuine Praise of So-Called Folly: Why Only Non-Optimization Matters

The title of this article might seem a strange one coming from an economist. This is the third of three planned essays. The first article in this series built a lens for observing life from a tautology; the second drew reasonable scientific inferences from the first. In this article, I seek to persuade you of something harder to prove with logic but eminently more important: that love (broadly defined) is, in fact, the only thing that truly matters.

The line of logic from the last two articles:

-Life faces the dual mandate to maintain itself by getting energy from its environment and preventing its environment from destroying it.

-As a result, all life is essentially a predictive model of its environment, as it needs to be in order to interact with its environment efficiently.

-Everything that will be selected for has costs and benefits, including the very receptors that allow us to perceive the environment.

-The other animals one interacts with (predators, prey, mates, and allies) are also part of the environment.

-For humans, the most important part of our environment is other humans.

-Don’t take current reality as a given. Babies are cute because they are useless and vulnerable; if we did not find them cute, we could not reproduce. Our emotional machinery did not come about randomly or by chance. At a high level, broadly speaking, everything must exist for a reason.

-The cross-culturally normal human feelings about love, friendship, loyalty, and honor, especially in their pure fictional forms, give strong evidence that we have receptors for such things in others.

-Given how exploitable these receptors are, the strong feelings they create are further evidence that these things must, in some regard, be part of our general nature, meaning they were at least part of our evolutionary environment.

-We feel these things most strongly not when they coincide with narrow self-interest but instead when they contradict it. That is, they are obvious examples of non-optimization.

-Such non-optimization can theoretically develop if it allows for increased cooperation. Like everything that survives selection, non-optimization is itself optimal.

Non-optimization is here defined as behavior that goes against the narrow self-interest of the individual.

In brief, non-optimization can come about in complex social species. Specifically, it solves the rational calculation problem with regard to cooperation, a problem in game theory known as time inconsistency. I can say I will still support you even when it is no longer in my best interest, and this would make you more willing to begin cooperating with me. The problem is that this claim is, by itself, not credible. If I am doing a rational calculation on the basis of narrow self-interest, I will always abandon you at your worst, precisely when you would need me the most. If I could convince you and others that I would stick by you through thick and thin, I would make a more attractive ally. But I wouldn’t be able to, as you, as another rational being, would be just as aware of my inevitable betrayal as I am.
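To make the problem concrete, here is a minimal sketch with purely illustrative payoffs (the numbers, names, and two-stage structure are my assumptions, not anything from the series). It walks the backward induction: without commitment, the rational ally abandons in hard times, so the alliance never forms.

```python
# A toy two-stage cooperation game, solved by backward induction.
# All payoffs are illustrative assumptions, chosen only to show the logic.

SUPPORT_COST = 5             # what supporting you costs me when times are tough
ALLIANCE_GAIN = 3            # what the alliance is worth to me in that period
YOUR_GAIN_IF_SUPPORTED = 4   # your payoff from an ally who stays
YOUR_LOSS_IF_ABANDONED = -6  # your payoff from an ally who leaves at the worst time

def my_stage2_choice(committed: bool) -> str:
    """Stage 2: hard times arrive. An emotionally committed ally never runs
    the calculation; a narrowly self-interested one compares payoffs."""
    if committed:
        return "support"
    return "support" if ALLIANCE_GAIN - SUPPORT_COST > 0 else "abandon"

def your_stage1_choice(committed: bool) -> str:
    """Stage 1: you reason backward from what I will do in stage 2."""
    if my_stage2_choice(committed) == "support":
        return "ally" if YOUR_GAIN_IF_SUPPORTED > 0 else "stay away"
    return "ally" if YOUR_LOSS_IF_ABANDONED > 0 else "stay away"

for committed in (False, True):
    print(f"committed={committed}: I would {my_stage2_choice(committed)}, "
          f"so you would {your_stage1_choice(committed)}")
# committed=False: I would abandon, so you would stay away
# committed=True: I would support, so you would ally
```

The commitment does its work not by changing the payoffs of the world but by taking the stage-2 calculation off the table, which is exactly the role the next paragraph assigns to emotion.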

There needs to be a way to credibly commit. For animals, this way is emotion. Feelings are the things that drive animals to action. Rational calculation requires energy, so feelings that create behavioral heuristics are the norm. An animal does not, for instance, think to itself, “Boy, I sure do require additional energy to keep the inevitable entropic decay at bay, at least until I have acted in such a manner as to maximize the future flow of my genes through the generations!” It is hungry, and so it seeks food. This is why animal behavior can (but does not necessarily) become maladaptive when the actual environment is inconsistent with the expected environment. This is not to say that our genes loom over and explain all. For generalist species in particular, a large part of genetic heritage is about how to change given environmental conditions (though this, too, must be found in the genes).

Given the importance of human cooperation, our feelings largely regard others. They allow us to commit believably and stay true even when it no longer makes sense. This is true of the positive emotions that I focus on, but also of emotions such as revenge. As the saying goes, “He who seeks revenge should dig two graves”: revenge usually doesn’t align with our narrow self-interest. It solves the same kind of time inconsistency problem as love, but in this case it makes credible the threat that you will get back at someone who wronged you. This also fosters cooperation.

There is another saying: only when times are tough do you learn who your real friends are. Everyone has an incentive to appear a true friend, but sadly many are not. The question, from a narrow maximization perspective, is why anyone would be a real friend. An economist using game theory might answer with reputation effects, showing that while in any one case there might be a loss, the global gain from reputation will still put the loyal friend ahead. Is that what goes on in the mind of a loyal friend? Does this explain why friends might allow themselves to be destroyed for their friend? A person jumping on a grenade isn’t thinking, “this will surely maximize the expected present discounted value.”
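The economist’s reputation answer can be sketched the same way. In a repeated interaction with discount factor delta, loyalty “pays” whenever the discounted stream of future reputation benefits outweighs the one-time loss; the numbers below are illustrative assumptions, not anything from the article.

```python
# A toy version of the reputation argument: a one-time loss from loyalty
# versus a discounted stream of future benefits. Numbers are illustrative.

def loyalty_pays(one_time_loss: float, per_period_benefit: float, delta: float) -> bool:
    """Geometric sum of future benefits: b*delta + b*delta^2 + ... = b*delta/(1-delta)."""
    future_value = per_period_benefit * delta / (1 - delta)
    return future_value > one_time_loss

print(loyalty_pays(one_time_loss=50, per_period_benefit=5, delta=0.95))  # True
print(loyalty_pays(one_time_loss=50, per_period_benefit=5, delta=0.50))  # False
```

Note that the calculation collapses whenever there is no future left to discount, which is precisely the grenade case: reputation effects cannot explain the ultimate sacrifice.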

It should be noted that, like all commitment devices, it is better from a narrow self-interest perspective to have others think we have one without actually having one. This is why it is right to be skeptical, and why large displays against our narrow self-interest are much better evidence than words alone. This is also why, should you find yourself in the unenviable position of finding out who your true friends are, it certainly won’t be all of them, likely not even most. Signals must be costly to contain information. That isn’t to say that even costly signals cannot be expertly sent by those who lack what is being signaled. No signal can ever be perfect, though, as discussed in the last article, conditions can make it easier or harder to detect false signals. In the modern world, it has become harder.
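The claim that signals must be costly to carry information can also be put in toy form. In a standard separating setup, a signal is informative when it is worth paying for the genuine type but not for the faker; again, the numbers are my illustrative assumptions.

```python
# A toy separating condition for costly signals. Numbers are illustrative.
BENEFIT_OF_BEING_BELIEVED = 10  # value of being treated as a true friend
COST_TO_GENUINE = 4             # a true friend pays less: the sacrifice serves real feeling
COST_TO_FAKER = 12              # the faker pays full price for something they don't value

genuine_sends = BENEFIT_OF_BEING_BELIEVED - COST_TO_GENUINE > 0  # True
faker_sends = BENEFIT_OF_BEING_BELIEVED - COST_TO_FAKER > 0      # False
print(genuine_sends, faker_sends)  # True False: the signal separates the types
```

When conditions shrink the faker’s cost, as the paragraph suggests the modern world has done, the two lines converge and the signal stops separating.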

If you are on the same page as me, we can continue to the larger, more important question.

What can truly matter? What can be said to have moral worth?

This question is an emergent feature of life, not a cosmic one. A universe with no intelligence would have no concept of mattering or moral worth. Ultimately, since we are the ones determining the answer, the answer can only be found within ourselves.

Imagine a universe entirely devoid of life, just as vast as the one we now inhabit but with no minds to witness it. The idea of such a universe, with trillions of planets and stars slowly marching to heat death, might conjure some romantic vision in your thinking, conceptualizing mind. But given that no mind ever exists in that universe to view it, is there any difference between its existence and its nonexistence?

A universe devoid of life is also one necessarily devoid of moral worth.

Now imagine a planet of a long-dead advanced civilization. It had achieved complete automation. The complex machines that inhabit it are in no way sentient, simply carrying out their admittedly complicated programming. They continue to carry out their tasks for thousands of years in the absence of the civilization that benefited from them: making, breaking down, and gathering to make again the piles of unused production, broken down and made again and again and again. Is this case really any different from the lifeless one? Yes, it once had sentients, and if we were to visit with our human eyes, we might marvel at it, thinking what a fantastical civilization there must have been. But is there any point in its current production? Does it alone, forever undiscovered, have any moral worth? If the robots had sentience, feelings, hopes, and dreams, many would easily say yes, but they do not. They are just more complex versions of what currently exists on a factory floor.

Production alone does not make for mattering.

Imagine another planet, also forever isolated. This one has rudimentary life, single-cell organisms that unthinkingly vie for dominance via consumption and replication. It will never develop past this stage.

Life alone does not make for mattering.

Imagine another planet with more complex organisms, but ones with no finer feelings. They breed asexually and have developed no form of cooperation. It is a world purely red in tooth and claw. Though some of the more intelligent species might be capable of thought, the only emotions that drive them are hunger and dominance. It is a purely selfish world, lacking entirely the concept of anything else. Is this, despite the added complexity, so very different from the single-cell organisms unthinkingly vying for dominance?

Complexity alone, intelligence alone, does not make for mattering.

So then what does? You can imagine in each example a story told. What would need to change to make it a meaningful one?

The machines could have feelings and friendships, and could doubt their continued production in a world where such production is meaningless. The red-in-tooth-and-claw world could have some softness to it. In fact, it is hard for us humans to truly imagine such a world. The aliens in our stories are, with few exceptions, those with basic human morality (usually with some twist). The hero making a noble sacrifice or showing kindness can win at least some of them over. But a truly narrowly self-interested species would have no machinery to win over. Deviations from optimization would be some combination of confusing and stupid to them. Pure folly.

Yes, I am discussing mattering from a human lens, but we humans could have no other. This, then, is my case: for us humans, non-optimization is the core of mattering, especially with regard to those finer positive feelings I broadly define as love.

Optimization is important, but only as it serves non-optimization.

This is not to say optimization has no use or purpose. It very much does. Given a goal, it is better for it to be achieved in a better rather than a worse way: we get either more of our goal, or as much of our goal at less expense. Non-optimization is always short-run costly[1]. If we can’t afford to sacrifice anything of ourselves, we can’t afford those finer feelings. This is the subject of many works of fiction, particularly post-apocalyptic ones. We must believe ourselves strong enough to weather betrayal and to give of ourselves in ways that do not benefit us directly. A richer world is one where, at least theoretically, this can occur more easily. Though no matter our riches, we can still easily be convinced we simply cannot afford to act in a nontransactional manner.

For most of us, this is, of course, nonsense. We are not one act of kindness away from hunger, or usually from anything but a mild inconvenience. That does not change the fact that it is easy to believe for those focused on narrow goals. Those who seek only narrow gain and see themselves as always too poor to afford any deviation act forever as if chained to necessity, unable to make meaningful choices, only calculations.

Hunter-gatherers were (and are) far poorer than those living in developed conditions. While in many ways they are more focused on concrete, practical concerns like food and water, which for the most part are not real concerns for us, most reports find the interactions between them far warmer, less transactional, and less optimized. This is because they need to rely much more on their relationships than we do; those relationships are deeper and harder to substitute.

In this modern world, we keep many surface-level relationships that we know we can easily exchange, keeping ourselves safe. But being safe from harm isn’t enough. Imagine a heaven where one is guaranteed no harm but, in order to accomplish this, nothing positive either. Wouldn’t such a life seem more like a hell?

Non-optimization is a risk, I won’t lie. It is also necessary. You can afford it, at least to some extent, and the non-optimization benefits not only others but ultimately yourself. To live narrowly chasing only some obvious goal such as fame, money, or power is not to live a life that can be described as deeply human. Every one of us (here I mean those capable of feeling love) needs some nontransactional parts of our lives to give the other parts their meaning.

If everything has a point, nothing does.

You might think of an example of something that matters and has nothing to do with non-optimization. What about beauty?

Beauty itself is an emergent property of non-optimization.

What is beauty, and what can experience it? If an ant is driven forward by stimuli from its antennae, is it experiencing beauty? What about a computer taking in information from a picture and identifying its components: is it? If we were dealing with an intelligent but entirely predatory set of aliens, when would the concept of beauty ever come about?

People can be beautiful to us, both inside and out. Nature can be beautiful; music can be; art can be. We might consider a world of truly kinder people a more beautiful one. Beauty is another wide concept, like love, that encompasses too many things for too many different people to be defined with any precision. Like the definition of art itself, it must be fuzzy. In such cases it can be helpful to illustrate what such things are not.

Art can be many things, but the one thing art is not is purely useful. What is the difference between art and pornography?

Things created to be purely useful, without ornamentation, especially if made in the most cost-effective manner, are those furthest from being art. Work produced for purely commercial reasons, such as advertisement, can only become art if lifted out of its original context. Pornography is a commercial endeavor; the difference between it and art cannot be specified in particulars such as nudity but only in intent. We may see art in craftsmanship, but the craftsmanship we call art is someone devoting themselves to their trade past the point where it would make any sense. People pushed by love of their craft instead of the chase for money, utilitarian objects not created in a utilitarian manner, things we call authentic.

Beauty is a kind of guide, a powerful positive feeling that moves us toward things that are advantageous in the long run for complicated reasons we can’t fully grasp or observe. If response functions are too simple, as in the case of the ant or the algorithm, there is no need for beauty. If something is simple enough to be fully grasped, there is also no need for beauty.

Beauty is important because it points us to important things missed by reason, but it is something derived not from the cosmos but from ourselves, from our long and shared history.

I’m not saying that the world is always beautiful, always real, or always kind. Just that it can be more. And the power to make it more is in each of our hands. Wherever there is something real that gives people an advantage, whether that be kindness, loyalty, or pain, there will always be those who try to copy that signal for personal gain. So it is always good to go forward and try to embody positive traits deeply, while remembering that all that glitters is not gold.

Thus I make the case that even though non-optimization arose as a solution to complex optimization, it is the only thing that gives life any real meaning. If I have convinced you of this, try your best to live a more beautiful life and always think of what you truly value. If I have not (and, as I said, this is understandable anytime there is a jump from positive to normative), still give it a try and see how it feels. You can afford it.


[1] Actually, if we want to be really technical here, given narrow self-interest, optimization is always weakly dominant in the short run.

4 thoughts on “In Genuine Praise of So-Called Folly: Why Only Non-Optimization Matters”

  1. Hi Mr. Rush, I enjoyed reading this article! Do you think courage is just a feeling, or more than a feeling, as “the courage to change feelings (for complex optimization)?”

    I was thinking a chain: “simple optimization_1 is for simple non-optimization_1, which were for complex optimization_2, which were for complex non-optimization_2……”

    Since many non-optimization goods (NPG) are not available in the market, do you think an adjustment of market is helpful? Considering money as either good in itself or good in getting other goods [from markets], should we put NPG to redefine currency or include more NPG in the market?


    1. I think non-optimization and consumerism don’t really go together well. It is really hard, given the selective pressure of profit, to produce any kind of genuineness at scale (especially given how little consumers are willing to actually investigate past claims). So I don’t think it is really compatible; the market layer is one that I think is destined to be a sort of mutual instrumentalization. I can’t see a way that it can be fixed. Instead, I think we just need to advocate that people realize this and keep a layer in their personal lives that is not built on narrow optimization.
      Courage is a really good question, and one I probably need to think more about before I can give a sufficiently thoughtful answer. My initial thought is that courage and bravery give us the strength to act in ways that are non-optimized.
      One of my favorite examples is from One Piece, when Luffy declares war against the World Government in order to save his friends. We might all have a lot of the finer feelings, but without courage they can’t manifest in non-ideal circumstances.


  2. Why not think of non-optimization as a second-order optimization? Replace “optimization” with “truth” and then we have the liar’s paradox and Gödel’s incompleteness theorems. We just need a metalanguage to describe what kind of things are being optimized at a more basic level. So what matters is not nonoptimization but the fact that we are not the sort of creatures whose thoughts and emotions can be captured by first-order logic (and language, perhaps).


    1. There are always infinite possible ways of defining terms, and infinite lenses through which the same thing can be viewed. I try to think about what the most functional definition is. Non-optimization, to me, is actually closest to the idea of non-instrumentalization: with first-order optimization, as you state, there can be no interaction that is not also instrumentalizing the person you are interacting with. The reason I call it optimization and non-optimization is likely in part my economics training, and in part that the layer at which non-optimization was selected for (is optimal) is the level above the individual. As stated, given narrow self-interest, optimization is always weakly dominant in the short run for an individual. The only reason it is short-run and not necessarily long-run optimization for the individual is that evolution carved all those feelings into us.

