Newcomb's problem

I recently reread Eliezer's article about Newcomb's problem.

To summarize the "problem":

It's Christmas, and a superintelligent being called Omega, from another dimension, appears in your living room and leaves you two boxes. The boxes are rigged as follows:
  1. Box A is transparent and contains $1,000.
  2. Box B is opaque and contains either $1,000,000 or nothing.
  3. You can take either both boxes or only box B.
  4. Omega has filled box B with a million dollars if, and only if, it has predicted that you will take only box B. If Omega predicts that you will take both boxes, then box B contains nothing.
  5. Omega is not present when you make your decision. It has already left, and will not return to you again.
  6. However, Omega is superintelligent. It has been observed delivering boxes like this before, and has never been observed to predict incorrectly. People who take only box B always get $1,000,000, and people who take both boxes always find box B empty, netting them $1,000.
So where's the dilemma? You take only box B and pocket the million, right? Why doubt the superintelligence?

Well, there are some confused people who would like to persuade you that the rational thing is to take both boxes. Here is how they argue. Omega has already left, so the state of box B is already determined: it is either full or empty. If it is full, then taking both boxes nets you $1,001,000, as opposed to $1,000,000 for taking only box B. If it is empty, then taking both nets you $1,000, which is more than the $0 you would get by taking only the (empty) box B.

So, the argument goes, you should take both boxes. Then, because Omega has predicted you will do so, box B is empty, and you get only $1,000.

I am writing this because, apparently, intelligent people have actually spent considerable time arguing about whether it is "rational" to take only box B, or whether a rational person "should" take both boxes.

How people can get genuinely confused about this eludes me. Quite obviously, the way the problem is framed, there are only two possible futures to choose from. Either there's future F1 where you take box B, and it contains a million, because Omega always predicts correctly. Or there's future F2 where you take both boxes, and you get $1,000. The very framing of the problem dictates that future F3, where you take both boxes and find both of them full, is impossible or very implausible. Likewise impossible or very implausible is F4, where you take only box B and find it empty.

So then the supposed "rationalists" come and say, hey, we don't believe the framing of the problem. Omega has already departed, so future F3 must be possible. So we take both boxes. But hey, we believe the framing of the problem after all. Omega knew that I would pick both boxes, so box B is empty. What a paradox!

Well, yes, usually, if you try to believe two mutually exclusive things simultaneously, you get yourself into a paradox. Either you believe the framing of the problem, or you don't. If you believe that Omega's predictions are always correct, you take only box B. If you believe that Omega is correct X% of the time, then your decision depends on your estimate of X, and there's no paradox either way.
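To make this concrete, here is a minimal sketch of where the break-even point for X lies. The payoff numbers come from the problem statement above; the simple expected-value model, and the helper names, are my own assumptions:

```python
# Expected value of each choice as a function of x, the probability
# that Omega predicts your choice correctly. Payoffs taken from the
# problem statement; the expected-value framing is an assumption.
def ev_one_box(x):
    # If Omega predicted one-boxing (probability x), box B is full.
    return x * 1_000_000

def ev_two_box(x):
    # You always get the $1,000; box B is full only if Omega erred.
    return 1_000 + (1 - x) * 1_000_000

# Break-even accuracy: one-boxing wins whenever
#   x * 1_000_000 > 1_000 + (1 - x) * 1_000_000,
# i.e. x > 1_001_000 / 2_000_000.
break_even = 1_001_000 / 2_000_000
print(break_even)  # 0.5005
```

So one-boxing already wins whenever you estimate X above about 50.05% — far below the near-certainty the problem's framing suggests.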

But you can't simultaneously believe that Omega could be wrong and that it must always be right by definition. Believing both is simply stupid.

And as for those who say that it is rational to pick both boxes even believing that Omega's predictions are always and unfailingly correct... well. I rest my case.

Comments

tim said…
well thought out and conveyed.
Of course, "Past performance is no guarantee of future returns" is a statement many will be familiar with, and it would imply a belief in the possibility of F3 and F4.
I myself would approach the problem from a cost-benefit perspective. A 50/50 chance at a 1,000-times return? I'll take that any day and every day! I choose box B.
I realize my reply does not address the point of your argument, but I thought I'd weigh in with my response to the riddle anyway.
denis bider said…
If past performance is not a guarantee of future returns (i.e. we refuse to believe that Omega is absolutely and always correct), then we are making an estimate that Omega is correct in some fraction X of cases. If we have observed Omega predict correctly in 50 cases, it would be unreasonable to infer that X is much lower than 98%. So it still makes overwhelming sense to pick only box B, even if we expect Omega to be not quite 100% correct, but close to it.
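One way to back up that 98% figure is Laplace's rule of succession; the choice of that particular estimator is my assumption, not something the comment above spells out:

```python
# Laplace's rule of succession: after observing s successes in n
# trials, estimate the success probability as (s + 1) / (n + 2).
# Assuming 50 observed predictions, all of them correct.
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

estimate = rule_of_succession(50, 50)
print(round(estimate, 3))  # 0.981
```

That is well above the ~50% break-even accuracy, so the one-box choice is robust even under this hedged estimate.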
Daniel said…
As far as I'm concerned, it's a no-brainer.

Far more interesting is the Monty Hall problem, which rose to prominence in 1990 after columnist Marilyn vos Savant answered a reader's question about it.
I for one have not heard of another problem (a basic one, not some cutting-edge math) that has led so many PhD mathematicians astray, as is evident here:
http://www.marilynvossavant.com/articles/gameshow.html

Admittedly I also went for the wrong answer when I first heard of it :)
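A quick way to convince yourself of the counterintuitive answer is to simulate it. A minimal Monte Carlo sketch (the `play` helper and trial count are my own choices):

```python
import random

# Monty Hall: a car behind one of three doors. The host always opens
# a different door hiding a goat; we compare sticking with our first
# pick against switching to the remaining closed door.
def play(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
trials = 100_000
stick_wins = sum(play(False, rng) for _ in range(trials)) / trials
switch_wins = sum(play(True, rng) for _ in range(trials)) / trials
print(stick_wins, switch_wins)  # roughly 1/3 vs 2/3
```

Switching wins about two thirds of the time, which is exactly the answer so many readers refused to accept.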
denis bider said…
Hey Daniel,

thanks a lot for that link. I finally got around to reading that Marilyn Vos Savant article, and really enjoyed it. :-)
