Anthropics

Archived Posts from this Category

Where In the World Am I?

Posted by steven on 01 Oct 2007 | Tagged as: Anthropics

An important concept in anthropic reasoning is “indexical uncertainty”. Where normal uncertainty is uncertainty about what the universe looks like, indexical uncertainty is uncertainty about where in the universe you are located.

I claim that all indexical uncertainty can be reduced to normal uncertainty plus multiple instantiation of observers. I don’t know if this is controversial, but it has interesting implications and it’s worth explaining.

Suppose a mad scientist makes two people and puts each in a room. One room is blue on the outside, the other red. If I’m one of these people, I’m uncertain about the color of my room. Because this concerns my place in the world, it is, on the face of it, a case of indexical uncertainty.

Now suppose first that the mad scientist didn’t particularly care about making our experiences exactly the same. Maybe I’m looking at the ceiling ten seconds after being created, and the person in the other room is looking at the floor. Or maybe I’m male and the person in the other room is female. Then my indexical uncertainty about the color of the room I’m in is really the same as my uncertainty about whether the mad scientist made the male in the blue room and the female in the red room, or vice versa. But this is normal uncertainty. It’s uncertainty about what the universe is like.
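
To make the reduction concrete, here’s a toy calculation in Python (my own illustration; the setup and names are hypothetical). The indexical question “what color is my room?” becomes ordinary uncertainty about which of two worlds the mad scientist created:

```python
# Two candidate worlds, distinguished by an ordinary fact about the
# universe: which person the mad scientist put in which room.
worlds = [
    {"blue": "male", "red": "female"},  # world A
    {"blue": "female", "red": "male"},  # world B
]

def p_my_room_is(color, my_sex, prior=0.5):
    """P(my room is `color`), given I know my own sex, computed as
    ordinary uncertainty over which world exists (uniform prior)."""
    return sum(prior for w in worlds if w[color] == my_sex)

print(p_my_room_is("blue", "male"))  # 0.5: the same number the
                                     # indexical framing would give
```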

Then suppose, instead, that the mad scientist did make our mind states exactly the same. In that case, one possible way to see things (perhaps the only way) is that I have nothing to be uncertain about: it’s a certain fact that I’m in both rooms. If the mad scientist opens the doors, I should expect, as a certain fact, to diverge into two minds, one that sees blue and one that sees red.

So maybe we don’t need indexical uncertainty after all.

At this point I should say that I got confused… but maybe someone else can pick up the train of thought, so I’ll post this anyway. Eliminating indexical uncertainty should make it possible to think through the paradoxes of anthropic reasoning starting from principles we already understand.

Do Simulations Matter?

Posted by steven on 27 Jul 2007 | Tagged as: Anthropics, Philosophy

The Simulation Argument, formulated by Nick Bostrom, aims to show that, given certain assumptions, you’re probably inside a simulated world. It assumes that enough civilizations like ours go on to spawn posthuman descendants who create many such worlds, and that enough of those worlds are like Earth. In all of spacetime, simulated versions of our civilization then outnumber originals. (Note that the Simulation Argument is not the same thing as the conclusion that you’re in a simulation.)
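
The counting step is simple arithmetic. Here’s a toy version in Python (the numbers are made up, purely for illustration):

```python
# If each original civilization runs many Earth-like simulations,
# simulated civilizations vastly outnumber original ones.
originals = 1_000          # hypothetical count of original civilizations
sims_per_original = 100    # hypothetical simulations each one runs

simulated = originals * sims_per_original
p_simulated = simulated / (simulated + originals)
print(p_simulated)  # 100/101 ~ 0.990: under these assumptions, a random
                    # civilization like ours is probably simulated
```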

I think the argument and assumptions could hold up. If so, what does that mean we should do? Robin Hanson has made some suggestions. It seems to me, though, that (to a first approximation) the possibility of being in a simulation should make no difference to the behavior of a non-egoist agent. Here’s a quick informal argument.

Imagine you’re in a huge tree. It’s foggy, so you can’t tell whether you’re at the trunk or at one of a subset of the tree’s branches and sub-branches. There are many branches and only one trunk, so you can assume you’re probably at a branch. You feel a strange urge to apply a chemical to the wood, and you have two choices. One chemical, BranchKiller, is deadly to branches but not to the main trunk; the other, TrunkKiller, is deadly to the main trunk but not to the branches. Assume you like the tree and want to save as much of it as possible.

Tell me if you're getting tired of these.

In this situation, you should clearly apply BranchKiller, not TrunkKiller. Since you’re probably at a branch, it’s true that BranchKiller is the more likely of the two to harm the tree; but the harm it does is confined to the branch you’re on, whereas if you are at the trunk, TrunkKiller’s effects spread to all the branches as well. The point is clearest with copies: if you had clones, one at each location (trunk or branch) where you think you might find yourself, then using TrunkKiller would always kill the entire tree, and using BranchKiller would always kill only part of it.
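
Here’s that comparison as a toy calculation in Python (the branch count is made up). Since all of “your” instances decide alike, a policy gets applied at every candidate location:

```python
n_branches = 100  # hypothetical; plus exactly 1 trunk

# Policy 1: everyone applies BranchKiller. Each branch instance kills
# its own branch; the trunk instance's dose does nothing (trunk immune).
loss_branchkiller = n_branches        # all branches die, trunk survives

# Policy 2: everyone applies TrunkKiller. Branch instances do nothing,
# but the trunk instance kills the trunk, taking every branch with it.
loss_trunkkiller = 1 + n_branches     # the whole tree dies

print(loss_branchkiller < loss_trunkkiller)  # True: BranchKiller saves
                                             # at least the trunk
```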

Now imagine the tree is the universe, the trunk is base-level reality, and the branches are simulated worlds. It’s possible that people in simulated worlds could do things to affect their fate that wouldn’t work in base-level reality. But any decision made in a simulated world can be overridden from base-level reality. We in base-level reality get to decide (if only very indirectly) what universes, if any, are created, and whether they can be influenced post-creation. Making sacrifices in base-level reality for gain in simulated worlds is like killing the tree’s trunk to save its branches. Unless we can very reliably help simulated worlds at very little cost in base-level reality, it seems to me we can just ignore the simulation issue entirely.

Update: see the comments for some corrections and clarifications.

Anthropic Reasoning

Posted by steven on 25 Jul 2007 | Tagged as: Anthropics, Philosophy

Observational selection effects are biases created when different hypotheses make different predictions about the existence of observers. For example, you could argue that, just from the fact that our solar system has an Earthlike planet in it, we can’t conclude that solar systems with Earthlike planets are typical: if observers evolve only on such planets, then that is what they’ll observe no matter what a typical system looks like.
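
Here’s a toy Bayesian version of that point in Python (the probabilities are made up). Once you condition on the existence of an observer, the observation has the same likelihood under both hypotheses, so it can’t shift belief between them:

```python
# P(a random solar system has an Earthlike planet), under two hypotheses:
hypotheses = {"earthlikes_common": 0.9, "earthlikes_rare": 0.001}

for name, p_earthlike in hypotheses.items():
    # Sampling a system at random: the likelihoods differ enormously.
    p_random_system = p_earthlike
    # Sampling where an observer exists (observers evolve only on
    # Earthlike planets): the likelihood is 1 either way.
    p_given_observer = 1.0
    print(name, p_random_system, p_given_observer)

# Equal likelihoods mean the posterior equals the prior: seeing an
# Earthlike planet here tells us nothing about which hypothesis is true.
```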

Some people have taken to calling the study of how to deal with these effects “anthropics”. There’s a lot of subtle philosophy that goes into this, and I have yet to find any one account that I think gets it all right. I may do a post with actual content later, but here are some (mutually inconsistent) works I found particularly enlightening. If you combined some of the ideas here, you might end up with something good: