Hedonic asymmetries

Creating really good outcomes for humanity seems hard. We get bored. If we don’t get bored, we still don’t like the idea of joy without variety. And joyful experiences only seem good if they are real and meaningful (in some sense we can’t easily pin down). And so on.

On the flip side, creating really bad outcomes seems much easier, running into none of the symmetric “problems.” So what gives?

I’ll argue that nature is basically out to get us, and it’s not a coincidence that making things good is so much harder than making them bad.

First: some other explanations

Two common answers (e.g. see here and comments):

  • The worst things that can quickly happen to an animal in nature are much worse than the best things that can quickly happen.
  • It’s easy to kill or maim an animal, but hard to make things go well, so “random” experiences are more likely to be bad than good.

I think both of these are real, but that the consideration in this post is at least as important.

Main argument: reward errors are asymmetric

Suppose that I’m building an RL agent that I want to achieve some goal in the world. I can imagine two different kinds of reward errors:

  • Pessimism: the rewards are too low. Maybe the agent gets a really low reward even though nothing bad happened.
  • Optimism: the rewards are too high. Maybe the agent gets a really high reward even though nothing good happened, or escapes a penalty even though something bad happened.

Pessimistic errors are no big deal. The agent will needlessly avoid the behaviors that get spuriously penalized, but as long as those behaviors are reasonably rare (and aren’t the only way to get a good outcome) then that’s not too costly.

But optimistic errors are catastrophic. The agent will systematically seek out the behaviors that receive the high reward, and will use loopholes to avoid penalties when something actually bad happens. So even if these errors are extremely rare initially, they can totally mess up my agent.
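Here is a minimal toy sketch of that asymmetry (the bandit setup, error rates, and reward magnitudes below are arbitrary choices for illustration, not anything precise): a three-armed bandit whose reward signal contains rare errors, one arm wrongly penalized 10% of the time and one arm wrongly rewarded 10% of the time, learned by a simple epsilon-greedy agent.

import random

# Three-armed bandit with rare reward errors (all numbers are illustrative):
#   arm 0: genuinely good     -> mean reward 1.0
#   arm 1: pessimistic error  -> really worth 1.0, but 10% of pulls are
#                                wrongly penalized with -50
#   arm 2: optimistic error   -> really worth 0.0, but 10% of pulls are
#                                wrongly rewarded with +50

def pull(arm):
    if arm == 0:
        return random.gauss(1.0, 1.0)
    if arm == 1:
        return -50.0 if random.random() < 0.10 else random.gauss(1.0, 1.0)
    return 50.0 if random.random() < 0.10 else random.gauss(0.0, 1.0)

def run(steps=20_000, epsilon=0.1):
    """Epsilon-greedy agent with incremental sample-average value estimates."""
    counts = [0, 0, 0]
    values = [0.0, 0.0, 0.0]
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(3)                      # explore
        else:
            arm = max(range(3), key=lambda a: values[a])   # exploit
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts

if __name__ == "__main__":
    random.seed(0)
    print("pulls per arm:", run())
    # Typical result: arm 1 (wrongly penalized) is simply avoided, at small
    # cost; arm 2 (wrongly rewarded) soaks up most of the agent's behavior
    # even though nothing good is actually happening there.

On a typical run the agent stops pulling the mispenalized arm and loses almost nothing, while it spends the vast majority of its time on the misrewarded arm: the pessimistic error removes one option, the optimistic error hijacks the whole policy.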

When we try to create suffering by going off distribution, evolution doesn’t really care. It never needed to make that machinery robust, because no animal was searching for ways to make its own experience worse.

But when we try to create incredibly good stable outcomes, we are fighting an adversarial game against evolution. Every animal forever has been playing that game using all the tricks it could learn, and evolution has patched every hole that they found.

In order to win this game, evolution can implement general strategies like boredom, or an aversion to meaningless pleasures. Each of these measures makes it harder for us to inadvertently find a loophole that gets us high reward.

Implications

Overall I think this is a relatively optimistic view: some of our asymmetrical intuitions about pleasure and pain may be miscalibrated for a world where we are able to outsmart evolution. I think evolution’s tricks just mean that creating good worlds is difficult rather than impossible, and that we will be able to create an incredibly good world as we become wiser.

It’s possible that evolution solved the overoptimism problem in a way that is actually universal—such that it is in fact impossible to create outcomes as good as the worst outcomes are bad. But I think that’s unlikely. Evolution’s solution only needed to be good enough to stop our ancestors from finding loopholes, and we are a much more challenging adversary.


2 thoughts on “Hedonic asymmetries”

  1. I wonder if a kind of valence accelerationism is a position that could ever exist – the position that evolution ought to be aided in making us unsatisfied with most things, in order to drive us to even greater heights of ingenuity in inventing utopia. Most people are averse to wireheading – modifying our preferences to be satisfied with less rich and complex experiences. Would anyone then be willing to modify our preferences in the other direction, to require richer and more complex experiences?

  2. But even if you are able to outsmart evolution – in order for the solution to be sustainable, it has to either:
    – be completely aligned with the universal optimization goal of life (accelerating entropy/burning Gibbs free energy), which doesn’t ring true to me, because the current hedonic asymmetries you mentioned, which are prevalent in nature, are exactly the best solution to this optimization problem (for the reasons you mentioned). Or as Schopenhauer eloquently said:
    “Pleasure is never as pleasant as we expected it to be and pain is always more painful. The pain in the world always outweighs the pleasure. If you don’t believe it, compare the respective feelings of two animals, one of which is eating the other.”
    – the other option, which is more realistic but still extremely hard, is to create a system that prevents high-suffering, more efficient beings from outcompeting happier, less efficient beings; the only realistic options seem to be some kind of elaborate cooperation scheme or a singleton (Meditations on Moloch comes to mind).
    Any other solution will be outcompeted to oblivion.
