Suppose you reflect a bunch and land on a linear[1] utility function. Suppose that in our universe, the most efficient way to increase your utility is creating many "widgets." We'll launch von Neumann probes to claim resources in distant galaxies. With your share of the lightcone, you can cause there to be some astronomically large number of widgets.
Good news! There are probably many other universes out there: not just Everett branches but also very different universes, many not made of atoms. Some are much better suited for increasing your utility than this one, while this one has some comparative advantage. The gains from trade are astronomical. You can't actually communicate with other universes, but you can (given mature technology) simulate them to cooperate with them acausally. So we'll simulate other universes to figure out what everyone[2] values, then determine what goods our universe can produce to best promote those values (perhaps relative to other universes). And then we'll fill the universe with goods accordingly.
In particular, say our universe is comparatively advantaged in producing "diamonds." If diamond-lovers live in universes comparatively advantaged in increasing your utility (whether by producing tons of widgets or by producing goods much more efficient at increasing your utility than widgets), that would present large gains from trade. And trade doesn't need to be one-to-one: it can be that we produce diamonds in order to get credit from diamond-lovers (we couldn't get more credit from others by producing anything else), and we spend that credit to get utility somewhere in the multiverse. But actually the market analogy and comparative advantage aren't quite right; it might be less like trading and more like everyone adopting a compromise/aggregate utility function and then pursuing that.[3]
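The gains-from-trade logic here is just standard comparative advantage. A toy sketch (with made-up production rates; nothing here is from the post itself) of how specialization leaves both parties better off than each universe producing only what its own inhabitant values:

```python
# Toy illustration of comparative advantage between two universes.
# All numbers are hypothetical, chosen only to make the point vivid.

# Goods produced per unit of resources in each universe:
# universe A is far better at diamonds, universe B at widgets.
rates = {
    "A": {"widgets": 1.0, "diamonds": 10.0},
    "B": {"widgets": 5.0, "diamonds": 1.0},
}

# Suppose the widget-lover lives in universe A and the diamond-lover
# in universe B, and each universe has one unit of resources.

# Autarky: each universe produces only what its own inhabitant values.
autarky_widgets = rates["A"]["widgets"]    # 1.0
autarky_diamonds = rates["B"]["diamonds"]  # 1.0

# Acausal "trade": each universe specializes in its comparative
# advantage, producing the good the *other* universe's inhabitant values.
trade_widgets = rates["B"]["widgets"]    # 5.0
trade_diamonds = rates["A"]["diamonds"]  # 10.0

assert trade_widgets > autarky_widgets
assert trade_diamonds > autarky_diamonds
print(trade_widgets / autarky_widgets,
      trade_diamonds / autarky_diamonds)  # 5.0 10.0
```

Each party ends up with strictly more of what it values than it could have produced at home; the printed ratios are the gains from trade.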
Producing only what you yourself prefer is much like defecting in the prisoner's dilemma. If everyone instead produced goods based on a combination of what everyone values and what their universe is efficient at, everyone would be better off. Fortunately, we can simulate others in order to cooperate only with the people who cooperate, and thus we can incentivize everyone to cooperate. So I think if people mostly get acausal trade right, then our universe will mostly be tiled[4] with "diamonds."
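The "simulate others and cooperate only with cooperators" move can be caricatured as a program-equilibrium-style prisoner's dilemma in which one player's policy inspects the other's. This is a deliberately simplified sketch: the payoffs are the standard textbook ones, and the probe used to avoid infinite regress is a stand-in for real acausal reasoning, not a claim about how it would actually work.

```python
# Toy one-shot prisoner's dilemma with policies that can "simulate"
# (i.e., call) their opponent. Payoffs: T=5, R=3, P=1, S=0.

PAYOFFS = {  # (my_move, their_move) -> my_payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def defect_bot(opponent):
    # Always defects, no matter who it faces.
    return "D"

def fair_bot(opponent):
    # Cooperate iff the opponent cooperates against a cooperator.
    # (Probing with a simple cooperator sidesteps infinite regress;
    # this is the illustrative shortcut, not the real mechanism.)
    return "C" if opponent(lambda _: "C") == "C" else "D"

def play(a, b):
    move_a, move_b = a(b), b(a)
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

print(play(fair_bot, fair_bot))    # (3, 3): mutual cooperation
print(play(fair_bot, defect_bot))  # (1, 1): fair_bot punishes defection
```

The point of the sketch is the incentive structure: against a conditional cooperator, defecting no longer pays, so everyone is pushed toward the cooperative outcome.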
On acausal cooperation in general, see Nguyen and Aldred 2024a.[5]
Why are there gains from trade with other universes? Here I focus on different universes having different comparative advantages;[6] I think that's the most important source. Additionally, the production possibilities frontier between different goods within one universe may be better than linear.[7] And disagreements about the prior, including attitudes toward infinities and weights on various universes, may create opportunities for trade.
Epistemic status: seems correct. I asked some experts about my thesis; comments in footnote.[8] This idea isn't novel; for example, Nguyen and Aldred 2024b observes that different universes have different comparative advantages. I wrote this post because I hadn't heard my thesis (that the cosmic endowment will basically be used for whatever our universe is good at producing) before, nor read anything on what acausal cooperation actually entails for the cosmic endowment. But comments I received after writing the post suggest the thesis might be banal among experts.
In addition to commenting below, if you have minor nitpicks or confusions you can comment on this doc.
- ^
At least beyond some low threshold.
- ^
Everyone who does acausal cooperation.
(Maybe everyone, across universes, will converge to the same preferences, in which case trade is moot.)
- ^
You might make deals that look good from behind the veil of ignorance, even after the veil has been lifted. (I think this is related to "updatelessness.") For example:
- Redistribute some power from preferences that tend to be held by people in high-value universes toward egalitarianism
- Redistribute some power from preferences that tend to be held by people who have power within a universe toward egalitarianism
- Decrease variance in your preference-satisfaction if you're relatively risk-averse, or increase it (and increase EV) if you're relatively scope-sensitive and not-risk-averse
- Give you relatively more power in futures where the multiverse turns out to be higher-stakes (to your preferences):
  - Futures where you hold your convictions strongly
  - Futures where your preferences entail caring a lot about marginal resources, or where your preferences can absorb a large chunk of the multiverse's resources extremely effectively (if it makes sense to talk about non-normalized amount-of-caring between preference-systems)
Such trades are a generalization of nuances like "everyone should coordinate to disincentivize coercion" and "everyone should reward people who opted to grow the pie rather than grab power."
All that is disanalogous to the "individuals have resources and trade in a market" picture. And if we're maximizing an aggregate utility function rather than trading, comparative advantage may not matter (especially if the utility function is linear-ish). Setting all that aside, I think the market analogy is helpful because some economic principles translate. For example, if Alice highly values just putting a specific QR code somewhere in your universe, that doesn't mean you can capture any surplus by creating that QR code, since other acausal-cooperators in your universe would also be interested in trading and your competition would drive the price down to the cost of creating the QR code.
Oesterheld 2017 also discusses updatelessness in this context.
- ^
But note there may be some diversity within "diamonds," either because some people value diversity or for practical reasons like different parts of the universe being suited for different goods.
- ^
People distinguish acausal trade, which involves reciprocity, from ECL (evidential cooperation in large worlds), which just involves correlation. I don't think the difference matters here: I think you'll still simulate lots of other universes in order to do ECL well.
- ^
Nuance: one weird source of comparative advantage is that some people may have preferences that are not straightforwardly scope-sensitive — instead they want a whole small universe to be used in some particular way, or they want a small structure to be placed in many universes. Perhaps the 90% of universes that aren't huge and aren't great for producing particular goods will mostly be used for weird stuff like that — that's their comparative advantage, if 10% of universes are dramatically better for producing many goods or high-quality goods.
- ^
In particular, maybe if several agents each create some compromise good, rather than naively creating whatever most efficiently gives them utility, they're all better off. Discussion of acausal cooperation usually focuses on this consideration, but I think it's less important than specialization between universes. There could be compromises because (A) many preferences are themselves compatible with a wide variety of goods, preferences like "I dislike suffering" and "I want all sentient beings to have somewhat happy and meaningful lives, but I don't need them to be extremely happy/meaningful and I don't need there to be lots of beings." Or there could be compromises because (B) there's opportunity for compromise even between apparently rigid preferences like "maximize paperclips," "maximize staples," and "maximize happiness." I intuit that (B) is mostly false; others have intuited that it's importantly true.
Separately, maybe there's more than one kind of input resource and different goods use different sets of inputs.
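Point (A) can be put in toy numbers: when two preference systems are partially compatible, a single compromise good can give each party more than an even split of resources between their rigid favorite goods would. The utilities below are hypothetical, invented purely for illustration.

```python
# Toy numbers for a compromise good satisfying two partially-compatible
# preference systems at overlapping cost. All utilities are made up.

# Utility per unit of resources spent on each good, as
# (utility to agent 1, utility to agent 2):
utility = {
    "good_1":     (10.0, 0.0),   # only agent 1 values this
    "good_2":     (0.0, 10.0),   # only agent 2 values this
    "compromise": (8.0, 8.0),    # both are mostly satisfied by this
}

# Option 1: split one unit of resources evenly between the rigid goods.
split = (0.5 * utility["good_1"][0] + 0.5 * utility["good_2"][0],
         0.5 * utility["good_1"][1] + 0.5 * utility["good_2"][1])
# split == (5.0, 5.0)

# Option 2: spend everything on the compromise good instead.
compromise = utility["compromise"]  # (8.0, 8.0)

# Both agents prefer the compromise to the even split.
assert all(c > s for c, s in zip(compromise, split))
```

Whether such overlap actually exists between real preference systems is exactly the (A)-vs-(B) question above; the sketch only shows that when it does, the compromise dominates.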
- ^
Anthony DiGiovanni said (off the cuff, missing context):
My impression was that people working on acausal trade were already aware of this thesis and agreed with it. Or at least agreed that "produce the goods you're most comparatively advantaged at" is sort of the baseline policy for acausal trade, which you might deviate from based on other considerations about the economics. I might be missing some important nuance here though.
Emery Cooper said (off the cuff, missing context):
I expect porous and situational values to be a significant part of the picture (not in terms of resources, but in terms of value). I also expect that some value systems will care about within-universe diversity, and that this might be relatively cheap to satisfy. I also think that compromise goods will be a big deal because e.g. lots of values pertain to groups of complex minds (so have overlapping costs) and are partially compatible. All of that said, it also seems plausible that there will be strong comparative advantages when trading with universes with different laws of physics such that universes should specialize a bunch.
Another expert said they hadn't fully considered it.