William Newcomb proposed a famous thought experiment now called Newcomb’s Paradox. Robert Nozick first brought it to the attention of the wider world and put it this way:
Suppose a being in whose power to predict your choices you have enormous confidence. (One might tell a science-fiction story about a being from another planet, with an advanced technology and science, who you know to be friendly, etc.) You know that this being has often correctly predicted your choices in the past (and has never, so far as you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices of other people, many of whom are similar to you, in the particular situation to be described below. One might tell a longer story, but all this leads you to believe that almost certainly this being’s prediction about your choice in the situation to be discussed will be correct.
There are two boxes, (B1) and (B2). (B1) contains $1,000. (B2) contains either $1,000,000 ($M), or nothing. What the content of (B2) depends upon will be described in a moment.
You have a choice between two actions:
1) taking what is in both boxes
2) taking only what is in the second box.
Furthermore, and you know this, the being knows that you know this, and so on:
(I) If the being predicts you will take what is in both boxes, he does not put the $M in the second box.
(II) If the being predicts you will take only what is in the second box, he does put the $M in the second box.
The situation is as follows. First the being makes its prediction. Then it puts the $M in the second box, or does not, depending upon what it has predicted. Then you make your choice. What do you do?
There are two plausible looking and highly intuitive arguments which require different decisions. The problem is to explain why one of them is not legitimately applied to this choice situation.
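To make the tension Nozick points to concrete, here is a minimal numerical sketch. It is not part of Nozick's text, and the 99% predictor accuracy is an assumed parameter chosen only for illustration. It shows why the expected-utility argument favors taking only the second box while the dominance argument favors taking both (two-boxing pays $1,000 more in every state of the world).

```python
# Sketch of the two standard arguments about Newcomb's Problem.
# The predictor accuracy p = 0.99 is an assumption for illustration.

def expected_values(p=0.99, small=1_000, big=1_000_000):
    """Expected payoff of each choice if the being predicts correctly with probability p."""
    # One-boxing: with probability p the being predicted it and put $M in B2.
    one_box = p * big + (1 - p) * 0
    # Two-boxing: with probability p the being predicted it and left B2 empty.
    two_box = p * small + (1 - p) * (big + small)
    return one_box, two_box

def dominance_table(small=1_000, big=1_000_000):
    """Payoffs by what is actually in B2: two-boxing pays more in every row."""
    return {
        "B2 contains $M": {"one-box": big, "two-box": big + small},
        "B2 is empty":    {"one-box": 0,   "two-box": small},
    }

if __name__ == "__main__":
    one, two = expected_values()
    print(f"Expected value, one-boxing: ${one:,.0f}")   # ~$990,000
    print(f"Expected value, two-boxing: ${two:,.0f}")   # ~$11,000
    for state, payoffs in dominance_table().items():
        print(state, payoffs)
```

Under these assumptions the expected-utility calculation says one-box (about $990,000 versus about $11,000), while the state-by-state comparison says two-box, since each row of the table pays $1,000 more for taking both boxes. That is the conflict Nozick asks us to resolve.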
There is another option that Nozick and (at least most of) the subsequent literature on this problem miss: