Anthropic principles agree on bigger future filters

I finished my honours thesis, so this blog is back on. The thesis is downloadable here and also from the blue box in the lower right sidebar. I’ll blog some other interesting bits soon.

My main point was that two popular anthropic reasoning principles, the Self Indication Assumption (SIA) and the Self Sampling Assumption (SSA), as well as Full Non-indexical Conditioning (FNC), basically agree that future filter steps are likely to be larger than we would otherwise think, including the many future filter steps that are existential risks.

Figure 1: SIA likes possible worlds with big populations at our stage, which means small past filters, which means big future filters.

SIA says the probability of being in a possible world is proportional to the number of people it contains who you could be. SSA says it’s proportional to the fraction of people (or some other reference class) it contains who you could be. FNC says the probability of being in a possible world is proportional to the chance of anyone in that world having exactly your experiences. That chance is greater the larger the population of people who are like you in relevant ways, so FNC generally gives similar answers to SIA. For a lengthier account of all these, see here.
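
To make the three rules concrete, here is a minimal toy sketch in Python. It is not taken from the thesis: the two possible worlds, their populations, and the p_exact figure are all made-up numbers, used only to show how each principle turns populations into relative probabilities.

```python
# Toy sketch only: two invented possible worlds, given as (observers at our
# stage that you could be, total reference-class population). Numbers are made up.
worlds = {
    "small world": (1e6, 1e8),
    "big world": (1e9, 1e12),
}

def fnc_weight(n_like_us, p_exact=1e-9):
    # Chance that at least one of n_like_us observers has exactly your
    # experiences, assuming each has an independent chance p_exact (assumed).
    return 1 - (1 - p_exact) ** n_like_us

def normalise(weights):
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

# SIA: proportional to the number of people you could be.
sia = normalise({w: n_us for w, (n_us, _) in worlds.items()})
# SSA: proportional to the fraction of the reference class you could be.
ssa = normalise({w: n_us / n_ref for w, (n_us, n_ref) in worlds.items()})
# FNC: proportional to the chance that someone has exactly your experiences.
fnc = normalise({w: fnc_weight(n_us) for w, (n_us, _) in worlds.items()})

print("SIA:", sia)
print("SSA:", ssa)
print("FNC:", fnc)
```

With these particular made-up numbers SIA and FNC favour the big world, while SSA favours the small one, because our stage there makes up a larger share of its reference class.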

SIA increases expectations of larger future filter steps because it favours smaller past filter steps. Since the total filter must be at least a certain size (to account for the apparent absence of colonizers), favouring smaller past steps means favouring bigger future steps. This I have explained before. See Figure 1. Radford Neal has demonstrated similar results with FNC.
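
As a rough illustration of that argument, here is another toy sketch with invented numbers, not the thesis’s model: suppose the chance of passing the whole filter is fixed, so hypotheses differ only in how the filter is split between the part before us and the part after us. SIA then weights each hypothesis by the expected number of observers at our stage.

```python
# Toy sketch only: all numbers are invented. Assume the chance of a system
# passing the *whole* filter is fixed, so hypotheses differ only in how the
# filter is split into a past part (before our stage) and a future part.
N_SYSTEMS = 1e22     # assumed number of candidate star systems
TOTAL_PASS = 1e-21   # assumed chance of passing the whole filter

hypotheses = {
    # name: chance a system gets as far as our stage (small past filter = high chance)
    "big past filter, small future filter": 1e-20,
    "small past filter, big future filter": 1e-12,
}

def sia_posteriors(hypotheses):
    # SIA: weight each hypothesis by the expected number of observers at our
    # stage (here, proportional to the chance of a system reaching our stage).
    raw = {h: N_SYSTEMS * p_reach for h, p_reach in hypotheses.items()}
    total = sum(raw.values())
    return {h: w / total for h, w in raw.items()}

for h, w in sia_posteriors(hypotheses).items():
    future_pass = TOTAL_PASS / hypotheses[h]  # implied chance of passing the future filter
    print(f"{h}: posterior {w:.1e}, future-filter pass chance {future_pass:.0e}")
```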

Figure 2: A larger filter between future stages in our reference class makes the population at our own stage a larger proportion of the total population. This increases the probability under SSA.

SSA can give a variety of results according to reference class choice. Generally it directly increases expectations of both larger future filter steps and smaller past filter steps, but only for those steps between stages of development that are at least partially included in the reference class.

For instance if the reference class includes all human-like things, perhaps it stretches from ourselves to very similar future people who have avoided many existential risks. In this case, SSA increases the chances of large filter steps between these stages, but says little about filter steps before us, or after the future people in our reference class. This is basically the Doomsday Argument – larger filters in our future mean fewer future people relative to us. See Figure 2.
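
A toy version of that Doomsday shift, again with invented population figures, might look like the following: SSA weights each world by the share of the human-like reference class that sits at our own stage, so worlds with few future people (a large filter step soon) gain probability.

```python
# Toy sketch only: invented numbers. The reference class is "human-like
# things": observers up to our stage plus similar future people. SSA weights
# each world by the share of that class sitting at our own stage.
N_NOW = 1e11  # assumed human-like observers up to and including our stage

worlds = {
    # name: human-like observers expected after our stage
    "large filter step soon (few future people)": 1e10,
    "small filter step soon (many future people)": 1e14,
}

def ssa_posteriors(worlds):
    raw = {w: N_NOW / (N_NOW + n_future) for w, n_future in worlds.items()}
    total = sum(raw.values())
    return {w: v / total for w, v in raw.items()}

for w, p in ssa_posteriors(worlds).items():
    print(f"{w}: posterior {p:.4f}")
```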

Figure 3: In the world with the larger early filter, the population at many stages including ours is smaller relative to some early stages. This makes the population at our stage a smaller proportion of the whole, which makes that world less likely. (The populations at each stage are a function of the population per relevant solar system as well as the chance of a solar system reaching that stage, which is not illustrated here).

With a reference class that stretches to creatures in filter stages back before us, SSA increases the chances of smaller past filter steps between those stages. This is because those filters make observers at almost all stages of development (including ours) less plentiful relative to at least one earlier stage of creatures in our reference class. This makes those at our own stage a smaller proportion of the population of the reference class. See Figure 3.
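
Mirroring Figure 3, here is one more toy sketch, with made-up values for the number of systems, the chance of reaching the earlier stage, and observers per stage: a bigger filter step between an earlier stage in the reference class and our own leaves our stage as a smaller share of the class, so SSA counts against it.

```python
# Toy sketch only: invented numbers. The reference class now also includes
# creatures at an earlier stage (say, simple life). A bigger filter step
# between that stage and ours shrinks our stage's share of the class.
N_SYSTEMS = 1e22             # assumed number of candidate star systems
P_EARLY = 1e-6               # assumed chance a system reaches the earlier stage
OBS_PER_SYSTEM_STAGE = 1e10  # assumed reference-class members per system per stage

hypotheses = {
    # name: chance that a system at the earlier stage goes on to reach our stage
    "big past filter step between them and us": 1e-9,
    "small past filter step between them and us": 1e-3,
}

def ssa_posteriors(hypotheses):
    raw = {}
    for h, p_advance in hypotheses.items():
        n_early = N_SYSTEMS * P_EARLY * OBS_PER_SYSTEM_STAGE
        n_us = N_SYSTEMS * P_EARLY * p_advance * OBS_PER_SYSTEM_STAGE
        raw[h] = n_us / (n_early + n_us)  # our stage's share of the reference class
    total = sum(raw.values())
    return {h: w / total for h, w in raw.items()}

for h, w in ssa_posteriors(hypotheses).items():
    print(f"{h}: posterior {w:.1e}")
```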

The predictions of the different principles differ in details such as the extent of the probability shift and the effect of timing. However, it is not necessary to resolve the disagreement between anthropic principles to believe we have underestimated the chances of larger filters in our future. As long as we think something like one of the above three principles is likely to be correct, we should update our expectations already.

30 responses to “Anthropic principles agree on bigger future filters”

  1. Fantastic job; great figures. :) Therefore, alas, we are all DOOMED. :(

  2. Could you post a pdf as well, for easy Kindle reading?

  3. I trust you will be at least linking to this on LW.

  4. Does your work imply that we should put more effort into creating an intelligence explosion?

    Let’s say mankind has four fates:

    (A) We don’t create an intelligence explosion and colonize the galaxy.
    (B) We don’t create an intelligence explosion and soon go extinct.
    (C) We create a utopian intelligence explosion and colonize the galaxy.
    (D) We unintentionally create a malevolent AI god that captures all the free energy in the galaxy and so destroys all life other than itself.

    Your work, if I understand it correctly, shows that (B) is almost certainly our fate. But your work shouldn’t influence our belief about the probability of (C) relative to the probability of (D). Let’s assume that knowing we would be in (C) or (D) would increase our estimate of our chances of survival.

    Let’s now assume that if we put more resources into AI research we increase both the probability of (C) and the probability of (D), but don’t lower the ratio [probability of (C)]/[probability of (D)]. Does your work show that we shouldn’t be able to significantly raise the probabilities of (C) and (D), but that to the extent we could raise these probabilities, we would have a greater chance of survival?

    Now let’s assume there is a fate (E) in which to avoid the great filter we seek to create an AI god that will create a utopia on Earth but will prevent us from ever leaving our solar system. Should it be easier to achieve (E) than (C) perhaps because (E) makes it harder to apply the anthropic principle?

    • Quite possibly, but there are other considerations I will write more about soon.

      I’m not sure what you mean in your last paragraph by ‘(E) makes it harder to apply the anthropic principle’ – do you mean that that outcome is not vastly reduced in probability by either anthropic principle, so should be easier to achieve? In that case, yes, that outcome isn’t reduced much in probability, but it sounds pretty unlikely to be a large part of the filter to begin with, without reason for civilizations to begin such behaviour.

      • By “makes it harder to apply the anthropic principle” I meant that committing to change your future population levels for anthropic reasons creates unintuitive results (such as it being easier to achieve (E) than (C)), and perhaps these unintuitive results arise because the anthropic principle doesn’t apply to situations in which they will be encountered.

        If we have some scientific theory which says we are doomed, but our theory doesn’t seem to make sense if X is zero, then we should seek to make X zero.

  5. The thesis is downloadable from the blue box in the lower right sidebar.

    I may be looking right through it, but I can’t see this blue box. In the right sidebar I see the following subheadings:

    Recent Comments
    Popular now:
    Subscribe:
    Email Subscription
    What it’s about
    Archives

    Thanks!

  6. Pingback: Accelerating Future » Katja Grace Honors Thesis Now Available

  7. This kind of argument doesn’t seem to work very well if we are in the future *already* – inside a simulation.

  8. We know more than that we are “human-like things”. We know all kinds of things about the world – including what historical era we were born into. We don’t need to consider the possibility that we are future creatures – because we already know that we aren’t. Hide that information from us from birth – and we might reason that way – but the more facts you hide from an agent, the more likely it is to draw the wrong conclusions.

    • William B Swift

      Among other things we know that there were probably very severe early filters. See Ward and Brownlee, Rare Earth for a summary. Their conclusion is that, because of all the things that could have gone differently in the early Earth, primitive (bacterial) life is likely to be fairly common, but multi-cellular life, much less intelligent life, is probably much less common than previously thought.

  9. Pingback: Light cone eating AI explosions are not filters | Meteuphoric

  10. This paper cites Radford Neal’s paper “Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning”.

    Yet Radford Neal’s paper includes sections entitled:

    “Why the Doomsday Argument must be wrong” and “Defusing the Doomsday Argument with SIA or FNC”.

    He says things like: “there are several reasons for rejecting the Doomsday Argument that I regard as convincing, even without a detailed understanding of why it is wrong.” and “Similar arguments refute the form of the Doomsday Argument where there are many intelligent species.”

    He is evidently *against* Doomsday-related arguments, yet the citation here apparently suggests that he somehow *favours* them. What gives?

  11. He’s against SSA Doomsday arguments, but his counter produces its own Doomsday argument, which he likes better since it is affected by more sorts of empirical evidence. The FNC Doomsday argument is near the end of the paper, with discussion of simulation, etc.

  12. I think that the chance of anyone 5 years in the future having “exactly my experiences” is pretty minuscule. People fitting that description will not exist any more. Instead there will be people with much fancier mobile phones.

  13. I predict new adherents of quantum immortality joining me. :)

    With quantum immortality, it doesn’t matter if there’s a great filter ahead of us, as long as the probability of dying is not 100% and the probability of indefinite dystopia is low.

    Any takers?

    • Based on my understanding of quantum immortality, it implies that honest adherents (as you claim to be) will suicide in all situations that don’t meet with your special approval because you don’t care about your measure, just the quality of your experiences in the few quantum narratives where you exist.

      If you had really set up the experiment/manipulation correctly (say, with an automatic kill switch based on some measure or another), shouldn’t you be dead in almost all the quantum narratives that contain me? And in the ones where I see you being “not dead”, shouldn’t you have won the lottery several times by now?

      • To clarify, I didn’t mean that I will be committing suicide. What I meant is that a filter step that kills everybody is not relevant because it just reduces our measure.

        Personally, I don’t care about the global measure. But my friends/family and I do care about not being separated, so we’re not about to perform quantum suicide. A group quantum suicide setup has a good chance of malfunctioning in a way that separates people. This is because there will be a good chance of some dying while others don’t.

  14. There’s a lot of big words on this page, but you guys can’t figure out how to create free energy, huh?

  15. Pingback: SIA says AI is no big threat | Meteuphoric

  16. Pingback: SIA and the Two Dimensional Doomsday Argument | Meteuphoric

  17. From the perspective of creating a prior probability distribution over numbers of [insert reference class] as a function of time, this approach seems a bit problematic because of normalization problems – it assigns infinite relative probability to infinite numbers of people. I feel like it would be more profitable to include more data about our world, enough so that we get a normalizable distribution for something like “humans living in the year 2010.” Furthermore, including more information should quite quickly improve our chances, since it will remove that pesky infinity that’s tipping the scales against us.

  18. Pingback: More anthropic warnings at H+ | Meteuphoric

  19. This is just the Fermi paradox, and it can’t be distilled down to this without answering some fundamental questions. Either we are just like everybody else (there’s lots of us), but then where are they? Or we are just like everybody else (i.e. our chances of existing are vanishingly small).

    If it’s the first, then we’re in the Fermi paradox. If it’s the second, then we can’t say anything, because all we know is that the chance of us existing is vanishingly small. We do not have the information to say that because our population is “large” it’s statistically likely to fall. We just don’t have the answers to say whether we’re out there in the 98th percentile or not.

    And the Fermi paradox is only a paradox if in fact we accept the assumption that intelligent species go traveling from star system to star system. Maybe they don’t.
