Two-Envelopes Problem for Uncertainty about Brain-Size Valuation and Other Moral Questions

By Brian Tomasik


Summary

When we're not sure whether to count small brains equally with big brains, it seems naively like we should maintain some probability that they are both equally valuable, and if this probability is not too tiny, small organisms will dominate big ones in utilitarian expected-value calculations due to numerosity. However, we could flip this argument around the other way, arguing that more is at stake if big brains matter a lot more than small ones. This paradox looks similar to the classic two-envelopes paradox. However, unlike the ordinary two-envelopes paradox, the problem is unresolvable in cases of moral uncertainty, because it ultimately boils down to the incomparability of different utility functions. That different moral theories can't be combined in a non-arbitrary way in a single "expected moral value" calculation is a central idea in the field of moral uncertainty.

One takeaway lesson from this discussion is that you should stop yourself "whenever you're tempted to" do an expected-value calculation over different possible degrees of weight that an organism might warrant based on moral rather than empirical considerations. (Expected-value calculations regarding sentience based on empirical considerations are fine.) For example, if you give 50% probability to caring about a chicken 2/3 as much as a human and a 50% probability to caring about a chicken 1/10 as much as a human, you can't naively combine these by saying that in expectation, a chicken is worth 0.5 * 2/3 + 0.5 * 1/10 = 0.38 as much as a human. Or, if you do take an average like this, keep in mind that it makes an implicit, arbitrary assumption about how to relatively scale two different utility functions. My preferred alternative is to think of my brain as having a moral parliament in which one faction uses the weight 2/3 and the other faction uses the weight 1/10. These two factions can bargain with one another and split altruistic resources toward their own preferred projects.

Note: For a Cliff's Notes version of this piece, you can just read the section "Factual vs. moral uncertainty".

Contents

  Pascalian wagers on brain size
  The two-elephants problem
  Other examples of the moral-uncertainty two-envelopes problem
  Many-worlds interpretation
  Suffering in fundamental physics
  Value pluralism on brain size
  Uncertainty between traditional vs. negative utilitarianism
  Factual vs. moral uncertainty
  One utility function
  Multiple utility functions
  Two envelopes and interpersonal utility comparisons
  AlexMennen (2013)
  Acknowledgements
  Footnotes

Pascalian wagers on brain size

Should the moral weight of a brain's experiences depend on the size or complexity of that brain? This question has been debated by utilitarians, with adherents on both sides of the dispute. You might try to take a moral-uncertainty approach as follows:

Naive human-viewpoint wager. We don't know if insect-sized brains matter equally with human-sized brains or if they matter a lot less. If they matter equally, the universe contains a lot more (dis)value than if they matter less, so I should mostly act as though they matter equally on Pascalian grounds.

Alas, a Pascalian argument can be made in the other direction too:

Naive insect-viewpoint wager. We don't know if human-sized brains matter equally with insect-sized brains or if they matter a lot more. If humans matter a lot more, the universe contains more (dis)value, so I should mostly act as though humans matter more on Pascalian grounds.

As a practical matter, this second Pascalian update is less dramatic than the first one, because even if you assign moral weight in proportion to brain size, insects would still be more important, given that there are more insect than human neurons on Earth. Still, the idea behind the wager is similar in both cases, and if the world contained many orders of magnitude fewer insects but still more than humans, then the wager would push us strongly toward focusing on humans.

The "Naive insect-viewpoint wager" in defense of focusing on humans could be made much stronger if we consider possibilities more extreme than merely scaling moral weight by number of neurons. For example, maybe there's some chance we should scale moral weight by the square of number of neurons, or exp(number of neurons), or busybeaver_function(number of neurons). Of course, the more extreme versions of these superlinear functions would imply that, among other things, a human who has just 0.00001% more neurons than everyone else would outweigh all other humans combined in moral importance. Moreover, the biggest-brained animals would be completely unimportant compared with even insanely tiny chances of astronomically large minds.

The two-elephants problem

Two elephants and a human. Suppose naively that we measure brain size by number of neurons. An old, outdated estimate[1] suggested elephants had 23 billion neurons, compared with a human's 85 billion. For simplicity, say this is 1/4 as many.

Two elephants and one human are about to be afflicted with temporary pain. There are two buttons in front of us: one will let us stop the human from being hurt, and the other will stop the two elephants from being hurt. We can only press one button. Which should we choose?

First, suppose you plan to help the human. Say you think there's a 50% chance that moral weight scales in proportion to brain size and a 50% chance you should count each organism equally. If organisms count equally, then helping the elephants is twice as good because it saves 2 individuals instead of 1. If you weight by brain size, then helping the elephants is only 2 * (1/4) = 1/2 as worthwhile as helping the human. 50% * 2 + 50% * 1/2 = 5/4 > 1, so you should actually help the elephants, not the human.

Now suppose you plan to help the elephants. If all animals count equally, then helping the 1 human is only 1/2 as good as helping the 2 elephants. If you weight by brain size, then helping the human is 4 times as good per organism, or 2 times as good overall, as helping the elephants. 50% * 1/2 + 50% * 2 = 5/4, so you should save the human instead.
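Here is a minimal Python sketch of the two naive calculations above; the 50% probabilities and the 1/4 brain-size ratio are just the illustrative numbers from this example:

```python
p_equal = 0.5          # probability that every organism counts equally
p_brain_size = 0.5     # probability that moral weight scales with brain size
elephant_weight = 1/4  # an elephant's brain-size weight relative to a human's

# Frame 1: fix the value of helping the human at 1 and ask how much
# helping the two elephants is worth.
ev_help_elephants = p_equal * 2 + p_brain_size * 2 * elephant_weight
print(ev_help_elephants)  # 1.25 > 1, so switch to the elephants

# Frame 2: fix the value of helping the two elephants at 1 and ask how
# much helping the human is worth.
ev_help_human = p_equal * (1/2) + p_brain_size * 1 / (2 * elephant_weight)
print(ev_help_human)      # 1.25 > 1, so switch back to the human
```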

This situation looks similar to the classic two-envelopes problem, in which a naive expected-value calculation suggests that you should keep switching which envelope you decide to take. In the two-elephants case, you might keep switching which button you think you should press.

Other examples of the moral-uncertainty two-envelopes problem

The brain-size two-envelopes problem is a special case of the general problem of moral uncertainty. "What's the relative importance between a brain with 20 billion neurons and one with 80 billion neurons?" is like "What's the relative importance between one person stubbing his toe for utilitarians vs. one person being told a small lie for anti-lying deontologists?" In the latter case, we can construct the same sort of two-envelope wagers.

The next subsections consider some further examples of the two-envelopes problem for moral uncertainty.

Many-worlds interpretation

Note: I'm not an expert on this topic, and the way I describe things here may be misguided. Corrections are welcome.

In the many-worlds interpretation of quantum mechanics, when a parent universe splits into two child universes, does the moral importance thereby double, because we now have two worlds instead of one, or is the moral importance of each world divided in half, to align with their measures? Each split world "feels on the inside" just as real and meaningful as the original one, so shouldn't we count it equally with the parent? On the other hand, as far as I know, no uniquely new worlds are created by splitting: All that happens is that measure is reapportioned,[2] so it must be the measure rather than merely the existence of the universe that matters? Of course, we could, if we wanted, regard this reapportioning of measure as happening by creating new copies of old universes rather than by just changing the total measure of each unique universe. If there were new copies, then the (dis)value in those universes would be multiplied, and the multiverse would become more important over time.[3]

So do we want to regard quantum measure as dividing up the value of the universe over time (i.e., each universe becomes less important) or do we want to regard the parent universes as splitting into child universes that are each as important as the parent?[4] Say the value of the parent is 1, and say it splits over time into 1000 child universes, all with equal measure. Consider a choice between an action with payoff 2 today in the parent universe vs. 1 tomorrow in each child universe. If the children count less, then each one matters only 1/1000, so the value of acting today is 2*1 vs. a value of 1*(1/1000)*1000 = 1 for acting in each child universe tomorrow. On the other hand, if the child universes also count with weight 1, then the children matter just as much as the parent, so the value of acting later is 1*1000. Say we have 50% probability that the children should count equally with the parent. Then the expected value of acting tomorrow is (0.5)(1) + (0.5)(1000) = 500.5, which is greater than the expected value of 2 for acting today. This is analogous to insects dominating our calculations if we have some chance of counting them equally with big-brained animals.

But now look at it the other way around. Say the value of a child universe is 1. Then either the parent universe matters equally, or the parent universe matters 1000 times as much. If they matter equally, each having a moral importance of 1, then the comparison of acting today vs. tomorrow is 2*1 vs. 1*1000. On the other hand, if the parent matters 1000 times as much, then the comparison is 2*1000 vs. 1*1000. If our probabilities are 50% vs. 50% between counting the parent equally with the children or 1000 times as much, then the expected value for acting today is (0.5)(2)+(0.5)(2000) = 1001, which is greater than the expected value of 1000 for acting tomorrow. We should act today. But this is the opposite of the conclusion we reached when fixing the value of the parent and considering various possible values for the children. Once again, the two-envelopes paradox reigns.
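Here is a minimal Python sketch of both framings, using the illustrative numbers above (1000 equal-measure children, payoff 2 today vs. 1 per child tomorrow, and 50/50 moral uncertainty):

```python
n_children = 1000
payoff_today = 2     # payoff from acting now, in the parent universe
payoff_tomorrow = 1  # payoff from acting later, in each child universe
p = 0.5              # probability that children count equally with the parent

# Framing 1: fix the parent's moral weight at 1. Either each child has
# weight 1/1000 (measure is divided), or each child also has weight 1.
ev_today = payoff_today * 1
ev_tomorrow = (1 - p) * payoff_tomorrow * (1 / n_children) * n_children \
              + p * payoff_tomorrow * 1 * n_children
print(ev_today, ev_tomorrow)  # 2 vs. 500.5: acting tomorrow looks better

# Framing 2: fix each child's moral weight at 1. Either the parent also has
# weight 1, or it has weight 1000 (its full pre-split measure).
ev_today = p * payoff_today * 1 + (1 - p) * payoff_today * n_children
ev_tomorrow = payoff_tomorrow * 1 * n_children
print(ev_today, ev_tomorrow)  # 1001 vs. 1000: acting today looks better
```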

Suffering in fundamental physics

In "Is There Suffering in Fundamental Physics?", I suggest reasons we might care a lot about micro-scale physical processes like atomic interactions. The main argument comes down to the sheer scale of fundamental physics: its computations astronomically preponderate over those that can be intentionally run by intelligent civilizations.

Naively, we might see a further prudential argument to take suffering in physics seriously: if fundamental physics does matter, then there would be vastly more suffering in the universe than if not. Therefore, our actions would be vastly more important if quarks can suffer, so we should act as though they can.

However, because the question of how much we care about electrons and protons is partly a moral one, the two-envelopes problem rears its head once more. We could argue in the reverse direction that either intelligent computations matter very little compared with physics, or they matter vastly more than physics. In the latter case, we'd be able to have more impact, so we should assume that intelligent computations dominate in importance compared with low-level physical processes.

Note that the two-envelopes problem doesn't infect the generic case for why fundamental physics may be overwhelmingly important, because that argument assumes a fixed moral exchange rate between, say, human suffering and hydrogen suffering and then points out the overwhelming numerosity of hydrogen atoms. Two-envelopes problems only arise when trying to take an expected value over situations involving different moral exchange rates.

Value pluralism on brain size

There are many plausible perspectives from which to look at the brain-size question, and each can feel intuitive in its own way. Two in particular stand out: weighting an organism's experiences according to the size or complexity of its brain, and giving every organism some fixed weight as a whole mind, regardless of brain size.

Both of these approaches strike me as having merit, and not only am I not sure which one I would choose, but I might actually choose them both. In other words, more than merely having moral uncertainty between them, I might adopt a "value pluralism"[5] approach and decide to care about both simultaneously, with some trade ratio between the two. In this case, the value of an organism with brain size s would be V(s) = f(s) + w, where f(s) is the function that maps brain size to value (not necessarily linearly), and w is the weight I place on a whole organism independent of brain size. w needs to be chosen, but intuitively it seems plausible to me that I would set it such that one human's suffering would count as much as, maybe, thousands of insects suffering in a similar way. We can draw an analogy between this approach and the Connecticut Compromise for deciding representation in the US Congress for small vs. large states.

Note that if we have moral uncertainty over the value of w, then a two-envelopes problem returns. To see this, suppose we had uncertainty between whether to set w as 1 or 10^100, while f(s) tends to have values around, say, 10 or 500 or something in that ballpark. If we set it at 10^100, then V(s) is much bigger than if we set w as 1, so the w = 10^100 branch of the calculations dominates. Effectively we don't care about brain size and only count number of individuals. But what if we instead flipped things around and considered V'(s) = w' * f(s) + 1 = V(s)/w, where w' = 1/w. Either w' is 1, or w' is 1/10^100, and in the former case, V' is much larger than in the latter case, so our calculations are dominated by assuming w' = 1, i.e., that the size-weighted term f(s) matters quite a bit.
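Here is a minimal Python sketch of this flip. The particular values of f(s) for an insect and a human are made up for illustration, following the ballpark figures mentioned above:

```python
# Made-up illustrative values of f(s) for an insect and a human.
f_insect, f_human = 10, 500
p = 0.5  # 50/50 uncertainty over the two candidate values of w

# Framing 1: V(s) = f(s) + w, with w either 1 or 10^100.
def expected_V(f_s):
    return p * (f_s + 1) + p * (f_s + 10.0**100)

# The w = 10^100 branch swamps f(s), so insects and humans look nearly equal:
print(expected_V(f_insect) / expected_V(f_human))              # ~1.0

# Framing 2: V'(s) = w' * f(s) + 1 = V(s)/w, with w' either 1 or 10^-100.
def expected_V_prime(f_s):
    return p * (1 * f_s + 1) + p * (10.0**-100 * f_s + 1)

# Now the w' = 1 branch dominates, so relative brain size matters a lot again:
print(expected_V_prime(f_insect) / expected_V_prime(f_human))  # ~0.024
```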

If we had no uncertainty about w, we wouldn't have a two-envelopes problem, but once we do have such uncertainty, the issue rears its head once more.

I'm doubtful that a value function like V(s) = s + w is the right approach, because if w is not trivially small, then for small critters like insects and bacteria, w might be much bigger than s, but then a single insect and a single bacterium would have close to the same value, which seems wrong. More plausible is to have a V(s) function that grows significantly with s but less than linearly, at least above a certain minimal threshold.

Uncertainty between traditional vs. negative utilitarianism

Consider a person, Jim, who lives a relatively pain-free childhood. At age 23, Jim develops cancer and suffers through it for several months before death.

A traditional utilitarian (TU) would probably consider Jim's life to be positive on balance. For instance, the TU might say that Jim experienced 10 times more happiness than suffering during his life.

In contrast, a weak negative utilitarian (NU) would consider Jim's extreme suffering to be more serious than the TU thought it was. The NU points out that at some points during Jim's cancer, Jim felt so much pain that he wished to be dead. The NU thinks Jim experienced 10 times more suffering than happiness in his life.

Now suppose we're morally uncertain between TU and NU, assigning 50% probability to each. First, let's say the TU assigns values of +10 and -1 to Jim's happiness and suffering, respectively. Since the NU considers Jim's suffering more serious than the TU did, and in fact, the NU thinks Jim experienced 10 times more suffering than happiness, the NU's moral assignments could be written as +10 happiness and -100 suffering. These numbers are on average much bigger than the TU's numbers, so the NU's moral evaluation will tend to dominate naive expected-value calculations over moral uncertainty. For instance, using these numbers and 50% probability for each of TU and NU, Jim's life was net negative in expectation: 0.5 * (10 - 1) + 0.5 * (10 - 100) < 0.

But we can flip this around. The TU again assigns values of +10 to happiness and -1 to suffering. Now let's suppose that the NU agrees on how bad the suffering was (-1) but merely gives less moral weight to happiness (+0.1, ten times less). Now the TU's numbers are on average much bigger than the NU's, so the TU's moral perspective will tend to dominate in naive expected-value calculations over moral uncertainty. Using these numbers and 50% probability for each of TU and NU, Jim's life was net positive in expectation: 0.5 * (10 - 1) + 0.5 * (0.1 - 1) > 0.
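Here is a minimal Python sketch of both framings, using the illustrative numbers above:

```python
p_tu, p_nu = 0.5, 0.5  # 50/50 moral uncertainty between TU and NU

tu_total = 10 - 1      # TU: happiness +10, suffering -1

# Framing 1: the NU agrees about the happiness (+10) but rates the suffering
# as 10 times worse than the happiness (-100).
nu_total = 10 - 100
print(p_tu * tu_total + p_nu * nu_total)  # -40.5: Jim's life looks net negative

# Framing 2: the NU agrees about the suffering (-1) but gives the happiness
# 10 times less weight (+0.1).
nu_total = 0.1 - 1
print(p_tu * tu_total + p_nu * nu_total)  # ~4.05: Jim's life looks net positive
```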

Factual vs. moral uncertainty

If we have a single utility function and merely have factual uncertainty, our two-envelopes problem can be solved. However, when there are at least two different utility functions being combined, the problem is not soluble.

One utility function

Suppose you have a single utility function, u, that weighs brains based on their physical characteristics. For example, suppose for simplicity that u measures pleasure as the number of neurons firing per second in precisely specified hedonic hotspots in the brain. Imagine that we can create either one happy human or two happy elephants for one minute before they painlessly and instantaneously disappear again. We want to create whichever being(s) will experience the most happiness according to our utility function. Because we've studied human brains in detail, we already know that u(human) = 1 in some units. We haven't studied elephant brains as much yet, but we think there's a 50% chance that u(two elephants) = 1/2 and a 50% chance that u(two elephants) = 2.

At first glance, we might seem to have a two-envelopes problem here. First focusing on the human, we see there's a 50% chance that the combined elephant happiness is 1/2 as much and a 50% chance that it's 2 times as much. Meanwhile, relative to the two elephants, there's a 50% chance that human happiness is 1/2 as much and a 50% chance that human happiness is 2 times as much.

However, this reasoning based on relative comparisons ignores the magnitudes of potential "wins" and "losses" by switching. In particular, suppose we initially plan to create the human and consider switching to create the two elephants instead. There's a 50% chance that utility would decrease from 1 to 0.5 by doing so and a 50% chance that utility would increase from 1 to 2 by doing so. The expected value of switching to elephants is 0.5 * (0.5 - 1) + 0.5 * (2 - 1) = 0.25. And if we had initially planned to create the elephants, a similar calculation would have shown that the expected value of switching to the human is -0.25.

This makes sense, because if we just compute expected utility on the original utility function, we see that the expected utility of the human is 1, while that of the elephants is 0.5 * 0.5 + 0.5 * 2 = 1.25.
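Since there is only one utility function here, the whole calculation can be done in a single consistent unit. A minimal Python sketch with the numbers above:

```python
u_human = 1.0
# Purely factual uncertainty about a single utility function u:
# 50% chance the two elephants together score 0.5, 50% chance they score 2.0.
elephant_possibilities = [(0.5, 0.5), (0.5, 2.0)]  # (probability, u(two elephants))

eu_human = u_human
eu_elephants = sum(prob * util for prob, util in elephant_possibilities)
print(eu_human, eu_elephants)   # 1.0 vs. 1.25
print(eu_elephants - eu_human)  # +0.25 expected gain from switching to the elephants;
                                # switching back would be -0.25, so no endless loop
```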

Multiple utility functions

The intractable version of the two-envelopes problem comes when we introduce moral uncertainty, which means dealing with more than one utility function.

In particular, imagine that we have one utility function f1 according to which two elephants matter half as much as a human. f1(human) = 1, while f1(two elephants) = 1/2. Meanwhile, another utility function f2 weighs one elephant equally to one human, so that f2(two elephants) = 2 while f2(human) = 1. Suppose we assign 50% probability to f1 and 50% to f2. These probabilities could refer to uncertainty about which view is objectively correct (for a moral realist) or which view you would come to adopt upon learning and thinking more about the relevant issues (for a moral anti-realist).

Once again we seem to have a two-envelopes paradox. Relative to the value of a human, there's a 50% chance that two elephants have twice as much value (if f2 is true), and a 50% chance they have half as much value (if f1 is true). Meanwhile, relative to the value of two elephants, there's a 50% chance a human matters twice as much (if f1 is true) and a 50% chance a human matters half as much (if f2 is true).

We could naively try to compute expected utility and say that the expected value of creating two elephants is 50% * f1(two elephants) + 50% * f2(two elephants) = 50% * 1/2 + 50% * 2 = 1.25, which is greater than the expected value of 1 for creating the human. However, this doesn't work the way it did in the case of a single utility function, because utility functions can be rescaled arbitrarily, and there's no "right" way to compare different utility functions. For example, the utility function 1000 * f1 is equivalent to the utility function f1, since both utility functions imply the same behavior for a utilitarian. However, if we use 1000 * f1 instead of f1, our naive expected-value calculation now favors the human: the human's expected value becomes 50% * 1000 * 1 + 50% * 1 = 500.5, compared with 50% * 1000 * 1/2 + 50% * 2 = 251 for the two elephants.

We can make either f1 or f2 win out by scaling it by a large multiplicative factor. And there's no obvious choice for what the "right" value of the multiplicative factor should be.
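A minimal Python sketch of how this arbitrary rescaling flips the naive verdict (the factor of 1000 is the same arbitrary choice as above):

```python
# Two utility functions under 50/50 moral uncertainty.
f1 = {"human": 1.0, "two elephants": 0.5}  # weights elephants by brain size
f2 = {"human": 1.0, "two elephants": 2.0}  # counts each individual equally

def naive_ev(option, f1_scale=1.0):
    # A naive "expected moral value" that pretends f1 and f2 share a common scale.
    return 0.5 * f1_scale * f1[option] + 0.5 * f2[option]

# With the scales as originally written, the elephants win:
print(naive_ev("two elephants"), naive_ev("human"))              # 1.25 vs. 1.0

# Replace f1 with the behaviorally equivalent 1000 * f1, and the human wins:
print(naive_ev("two elephants", 1000), naive_ev("human", 1000))  # 251.0 vs. 500.5
```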

This is similar to the difficulty of combining utilitarianism with deontology under moral uncertainty. Does a deontological prohibition on stealing get infinite weight, thereby overriding all (finite) utilitarian considerations? Does the stealing prohibition get weight of -100? -5? There's no right answer. There are just different moral views, and there's no unique way to shoehorn two different moral views onto the same yardstick of value. The case where we have uncertainty about the relative moral weights of different minds fools us into thinking that the uncertainty can be handled with an expected-value calculation, because unlike in the deontology case, both views are utilitarian and just differ numerically. But it's not so.

Two envelopes and interpersonal utility comparisons

The two-envelopes problem is actually the same as the problem of interpersonal comparisons of utility in a different guise.

Return to the "Two elephants and a human" example from earlier in this piece. Let u1 be the utility function that values human and elephant suffering equally. That is, u1(help 1 human) = u1(help 1 elephant) = 1. Let u2 be the utility function that cares 4 times as much about a human: u2(help 1 human) = 1 but u2(help 1 elephant) = 1/4. Suppose the "true" utility function u is either u1 or u2. Naively, it looks like we can compute an expected utility as

E[u] = (1/2) u1 + (1/2) u2.

But this forgets a fundamental fact about utility functions: They're invariant to positive affine transformations. This ability to rescale the utility functions arbitrarily is exactly what gives rise to the two-envelopes paradox. To see this, let's first consider helping the two elephants. The naive expected utility would be

E[u(help 2 elephants)] = (1/2) * 2 * u1(help 1 elephant) + (1/2) * 2 * u2(help 1 elephant) = (1/2) * 2 * 1 + (1/2) * 2 * 1/4 = 5/4.

Meanwhile, you can calculate that E[u(help 1 human)] comes out to 1/2 + 1/2 = 1.

But we can rescale u1 and u2 without changing the preference orderings they represent. In particular, let's multiply u1 by 1/2 and u2 by 2. With this change, you can compute that E[u(help 2 elephants)] = 1/2 + 1/2 = 1, while E[u(help 1 human)] = 1/4 + 1 = 5/4. These calculations exactly reproduce those done in the earlier "Two elephants and a human" discussion.

Given that the two-envelopes problem for moral uncertainty is isomorphic to the problem of interpersonal utility comparisons, we can apply various strategies from the latter to the former. For example, we could use the "zero-one rule" in which we normalize utility such that "Your maximum utility = 1. Your minimum utility = 0."[6]

Let's apply the zero-one rule to the "Two elephants and a human" example. Suppose the worst possible outcome is that all three individuals get hurt, and the best possible outcome is that no one does. First consider the action of helping the two elephants. If humans and elephants matter equally, then helping the elephants changes the situation from the worst case (utility = 0) to 2/3 toward the best case, because 2/3 of the individuals are helped. So the increase in utility here is 2/3. Meanwhile, if an elephant matters only 1/4 as much as a human, helping the two elephants moves us from utility of 0 to utility of 1/3, since the suffering of the human matters twice as much as the suffering of both the elephants we spared. The expected utility over moral uncertainty for helping the two elephants is then (1/2) * (2/3) + (1/2) * (1/3) = 1/2. Meanwhile, for the action of saving the human, you can compute that the expected utility is (1/2) * (1/3) + (1/2) * (2/3) = 1/2. So interestingly, the zero-one rule tells us to be indifferent in this particular case.
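A minimal Python sketch of this zero-one calculation, using the 1/4 brain-size weight and 50/50 probabilities from the example:

```python
def normalized_gain(weight_helped, weight_total):
    # Zero-one rule: worst case (everyone hurt) = 0, best case (no one hurt) = 1,
    # so helping some individuals raises utility by their share of the total weight.
    return weight_helped / weight_total

moral_views = [
    # (probability, weight per elephant, weight per human)
    (0.5, 1.0, 1.0),   # every individual counts equally
    (0.5, 0.25, 1.0),  # an elephant counts 1/4 as much as a human
]

eu_help_elephants = sum(p * normalized_gain(2 * w_e, 2 * w_e + w_h)
                        for p, w_e, w_h in moral_views)
eu_help_human = sum(p * normalized_gain(w_h, 2 * w_e + w_h)
                    for p, w_e, w_h in moral_views)
print(eu_help_elephants, eu_help_human)  # both ~0.5: indifferent in this case
```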

In other cases, the zero-one rule gives more substantive recommendations. For example, if you're uncertain whether only your own welfare matters or whether the welfare of all animals on Earth matters, and if you assign decent probability to each possibility, then the zero-one rule probably advises you to be selfish (at least if this would actually increase your own welfare), because it's probably a lot easier to achieve, say, a given increase in your own normalized welfare than the same-sized increase in the normalized welfare of all animals. (Of course, we might debate what counts as the worst vs. best possible utility values here. Is the "best possible utility" that which can be achieved given your existing brain, which has a hedonic treadmill? Or does the "best possible utility" include the possibility of rewiring your brain to bypass hedonic adaptation, to be massively bigger, to last forever, etc.?)

Needless to say, I don't favor egoism on the grounds of the above argument, and more generally, I don't agree with a blanket application of the zero-one rule in cases of moral uncertainty (just as many philosophers reject it as an approach to interpersonal utility comparisons). Unfortunately, there don't seem to be satisfactory approaches to interpersonal utility comparisons, and for the same reason, I'm doubtful about satisfactory approaches to moral uncertainty (although I haven't read much of the literature on this topic).

AlexMennen (2013)

AlexMennen (2013) discusses the problem of interpersonal utility comparisons and points out that a similar issue also afflicts individual utility functions if you're uncertain about your utility function, such as in cases of moral uncertainty. AlexMennen (2013) gives an example that I think is an instance of the two-envelopes problem:

Let's say that, in another 3-state world (with states A, B, and C) you know you prefer B over A, and C over B, but you are uncertain between the possibilities that you prefer C over A by twice the margin that you prefer B over A, and that you prefer C over A by 10 times the margin that you prefer B over A. You assign a 50% probability to each. Now suppose you face a choice between B and a lottery that has a 20% chance of giving you C and an 80% chance of giving you A. If you define the utility of A as 0 utils and the utility of B as 1 util, then the utility values (in utils) are u1(A)=0, u1(B)=1, u1(C)=2, u2(A)=0, u2(B)=1, u2(C)=10, so the expected utility of choosing B is 1 util, and the expected utility of the lottery is .5*(.2*2 + .8*0) + .5*(.2*10 + .8*0) = 1.2 utils, so the lottery is better. But if you instead define the utility of A as 0 utils and the utility of C as 1 util, then u1(A)=0, u1(B)=.5, u1(C)=1, u2(A)=0, u2(B)=.1, and u2(C)=1, so the expected utility of B is .5*.5 + .5*.1 = .3 utils, and the expected utility of the lottery is .2*1 + .8*0 = .2 utils, so B is better. The result changes depending on how we define a util, even though we are modeling the same knowledge over preferences in each situation.

If you take outcome B as the standard for defining a utility value of 1, then the chance of a high value for outcome C dominates the expected-value calculation, while if you take outcome C as the standard for defining a utility value of 1, the chance of a high value for outcome B dominates the calculation.
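For readers who want to verify the arithmetic in the quoted example, here is a minimal Python sketch:

```python
p = 0.5                         # 50/50 between the two preference hypotheses
lottery = {"C": 0.2, "A": 0.8}  # 20% chance of C, 80% chance of A

def eu_of_lottery(u):
    return sum(prob * u[outcome] for outcome, prob in lottery.items())

# Normalization 1: define u(A) = 0 and u(B) = 1.
u1 = {"A": 0, "B": 1, "C": 2}
u2 = {"A": 0, "B": 1, "C": 10}
print(p * u1["B"] + p * u2["B"],                      # 1.0 for taking B
      p * eu_of_lottery(u1) + p * eu_of_lottery(u2))  # 1.2: the lottery wins

# Normalization 2: the same preferences, but define u(A) = 0 and u(C) = 1.
u1 = {"A": 0, "B": 0.5, "C": 1}
u2 = {"A": 0, "B": 0.1, "C": 1}
print(p * u1["B"] + p * u2["B"],                      # ~0.3 for taking B
      p * eu_of_lottery(u1) + p * eu_of_lottery(u2))  # 0.2: now B wins
```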

Acknowledgements

DanielLC first pointed out to me in 2009 that the Pascalian argument for caring about small brains can be flipped around into a Pascalian argument for large brains. The comparison to the two-envelopes problem was suggested to me by Carl Shulman in 2013, whose additional thoughts on this matter have been interesting. Some questions from Jacy Reese in 2018 helped me clarify parts of this piece.

Footnotes

  1. The "23 billion" figure comes from an old, incorrect version of Wikipedia's "List of animals by number of neurons". I'm continuing to use it here so that my example will still work, but keep in mind that it's not factually accurate. A better estimate for elephant neurons comes from an interview with Suzana Herculano-Houzel. She reports that elephants have 257 billion neurons, "BUT 98% of those neurons are located in the elephant cerebellum". If we think cerebellar neurons count less than cortical neurons, then the effective number of neurons would be much lower than this.

    By the way, elephant brain mass is roughly 3.5 times that of humans, which almost perfectly aligns with relative numbers of neurons.


  2. From "Parallel Universes" by Max Tegmark:

    If physics is unitary, then the standard picture of how quantum fluctuations operated early in the big bang must change. These fluctuations did not generate initial conditions at random. Rather they generated a quantum superposition of all possible initial conditions, which coexisted simultaneously. Decoherence then caused these initial conditions to behave classically in separate quantum branches. Here is the crucial point: the distribution of outcomes on different quantum branches in a given Hubble volume (Level III [quantum many-worlds multiverse]) is identical to the distribution of outcomes in different Hubble volumes within a single quantum branch (Level I [big universe containing many Hubble volumes]). This property of the quantum fluctuations is known in statistical mechanics as ergodicity.

    The same reasoning applies to Level II [inflationary multiverse]. The process of symmetry breaking did not produce a unique outcome but rather a superposition of all outcomes, which rapidly went their separate ways. So if physical constants, spacetime dimensionality and so on can vary among parallel quantum branches at Level III, then they will also vary among parallel universes at Level II.

    In other words, the Level III multiverse adds nothing new beyond Level I and Level II, just more indistinguishable copies of the same universes--the same old story lines playing out again and again in other quantum branches. The passionate debate about Everett's theory therefore seems to be ending in a grand anticlimax, with the discovery of less controversial multiverses (Levels I and II) that are equally large.

    [...] Does the number of universes exponentially increase over time? The surprising answer is no. From the bird perspective, there is of course only one quantum universe. From the frog perspective, what matters is the number of universes that are distinguishable at a given instant--that is, the number of noticeably different Hubble volumes. Imagine moving planets to random new locations, imagine having married someone else, and so on. At the quantum level, there are 10 to the 10^118 universes with temperatures below 10^8 kelvins. That is a vast number, but a finite one.

    From the frog perspective, the evolution of the wave function corresponds to a never-ending sliding from one of these 10 to the 10^118 states to another. Now you are in universe A, the one in which you are reading this sentence. Now you are in universe B, the one in which you are reading this other sentence. Put differently, universe B has an observer identical to one in universe A, except with an extra instant of memories. All possible states exist at every instant, so the passage of time may be in the eye of the beholder [...].


  3. If we also used this "multiplication of copies" approach for anthropics, then it would be likely we'd be extremely far down on the quantum tree. Of course, we could, if we wanted, divorce anthropic weight from moral weight.

    If we do think splitting increases the importance of the universe, our valuation will be dominated by what happens in the last few seconds of existence. David Wallace notes: "Everettian branching is ubiquitous: agents branch all the time (trillions of times per second at least, though really any count is arbitrary)." Since physics will exist far longer than intelligent life, a view that our universe's importance increases trillions of times per second would require that suffering in physics dominate altruistic calculations.

  4. Note that regardless of which way we go on this question, the premise behind quantum suicide makes no sense for altruists. Quantum suicide only makes sense if you care about at least one copy of you existing, regardless of its measure. In contrast, altruists care about the number of copies or the amount of measure for worlds with good outcomes.
  5. Some ethical views claim to be foundationally monist rather than pluralist in that they care only about one fundamental value like happiness. But on closer inspection, happiness is not a single entity but is rather a complex web of components, and our valuation of a hedonic system with many parts is typically some complex function that weighs the importance of various attributes. In other words, almost everyone is actually a value pluralist at some level.
  6. I believe this is also a standard proposal in the field of moral uncertainty. In Wiblin (2018), Will MacAskill explains:

    One naïve way of doing it might be you look at what’s the best option and the worst option across all different moral views, and then you say, "Okay, I’m going to let all of those be equal so that every option’s best option and worst option is equally good. That’s how I make comparisons of value or choice worthiness across different theories…" That obviously doesn’t work for theories that are unbounded, that have no best or worst.
