## Disappointment

Posted by *steven* on *15 Aug 2007* at *07:55 pm* | Tagged as: *Probability*

Steven Landsburg, in his book *The Armchair Economist*, writes:

[I]f you chose this book randomly off the shelf, it would be as likely to exceed your expectations as to fall short of them. But you didn’t choose it randomly off the shelf. Rational consumer that you are, you chose it because it was one of the few available books that you expected to be among the very best. Unfortunately, that makes it one of the few available books whose quality you are most likely to have overestimated. Under the circumstances, to read it is to court disappointment.

Sounds reasonable, doesn’t it? You underestimate some books, you overestimate some others, and if a book seems to you to be the best, then it’s more likely that you overestimated it than that you underestimated it. Landsburg goes on to apply the same reasoning to potential marriage partners before entering into the chapter’s main topic, the “winner’s curse”, which means that if you bid higher than anyone else in an auction, you probably overestimated the item’s value.

It’s certainly possible for a rational thinker to be systematically disappointed in a subset of possible outcomes. If you don’t know whether it’s going to be cloudy, you can expect the weather to disappoint you in case of cloudiness. Likewise, in the case of the winner’s curse, you will either be in the situation where your bid is higher than all other bids, or in the situation where it’s not. In the former case, you probably overestimated the value; in the latter case, you probably underestimated the value. You just don’t know which is true.

But there’s something wrong with the books (and marriage) example. Just before you start reading a book, you already know what subset of outcomes you’re in — you know you’re reading the book that seemed like the best buy, and you should already have taken this into account somehow. A rational thinker can never expect, unconditionally, to be disappointed. If you already know it’s going to be cloudy, you can expect the weather to disappoint you in case of rain, but you can’t expect the weather to disappoint you on the whole. So the reasoning I quoted must be flawed.

I suspect the problem is with orthodox methods of statistical inference.

An “unbiased estimator” of book quality is one that, for each possible book quality, is equally likely to come out higher or lower than the actual value. More precisely, the estimator’s expectation value conditional on each possible actual value is equal to that actual value. Maybe the perceived interestingness of the cover and title is such an estimator.
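In simulation terms, that definition can be sketched as follows. The error model here is an assumption for illustration (not from the post): the estimate is the actual quality plus symmetric, zero-mean noise, so its average, holding the actual quality fixed, comes out equal to that quality.

```python
import random

random.seed(0)

# Assumed model: estimate = actual quality + zero-mean Gaussian noise.
# Averaging many estimates for one fixed actual quality recovers it,
# which is what "unbiased" means here.
actual_quality = 7.0
estimates = [actual_quality + random.gauss(0, 1) for _ in range(100_000)]
mean_estimate = sum(estimates) / len(estimates)
print(mean_estimate)  # close to 7.0
```

Unbiasedness is a statement about each fixed actual value separately; it says nothing about what happens once you start *selecting* books by their estimates, which is where the trouble below comes in.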

If you take one of these estimators and use it as your expectation for the book’s quality, then yes, you are likely to be disappointed. The highest estimates will often owe as much to random error as to quality; this is the effect called “regression to the mean”. But you don’t want an “unbiased” estimator. You want an estimator that’s “biased” in the direction of your prior knowledge. The right way to get such an estimator is to start with a prior probability distribution for book quality, use Bayes’s theorem together with data like how interesting the cover looks to turn it into a posterior probability distribution, and then take the expected value. With this estimator, you can no longer expect to be disappointed. You will be disappointed some of the time, but not systematically; the expected value of your disappointment is zero.
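Both halves of that claim can be checked in a simulation. The model below is a toy assumption of mine, not anything from the post: book quality is normally distributed, the “cover impression” is quality plus Gaussian noise, and you buy the book with the best impression on a shelf of ten. The raw impression is an unbiased estimator, yet for the book you actually pick it overshoots systematically; the normal–normal posterior mean (a standard shrinkage formula) does not.

```python
import random

random.seed(0)

# Assumed toy model (not from the post):
#   quality  ~ Normal(MU0, TAU)
#   signal   = quality + Normal(0, SIGMA)   # "cover impression"
MU0, TAU = 5.0, 1.0   # prior mean and st. dev. of book quality
SIGMA = 1.0           # st. dev. of the observation noise

def posterior_mean(x):
    # Normal-normal shrinkage: weight the signal by tau^2/(tau^2+sigma^2)
    # and the prior mean by the remainder.
    w = TAU**2 / (TAU**2 + SIGMA**2)
    return w * x + (1 - w) * MU0

TRIALS, SHELF = 20_000, 10
naive_gap = bayes_gap = 0.0
for _ in range(TRIALS):
    qualities = [random.gauss(MU0, TAU) for _ in range(SHELF)]
    signals = [q + random.gauss(0, SIGMA) for q in qualities]
    best = max(range(SHELF), key=lambda i: signals[i])  # the book you buy
    naive_gap += signals[best] - qualities[best]         # unbiased estimate
    bayes_gap += posterior_mean(signals[best]) - qualities[best]

print(f"unbiased estimate - actual quality: {naive_gap / TRIALS:+.3f}")
print(f"posterior mean    - actual quality: {bayes_gap / TRIALS:+.3f}")
```

The first average comes out well above zero (systematic disappointment from selecting on a noisy estimate); the second hovers near zero, because the posterior mean is the conditional expectation of quality given the signal, and conditioning on having picked the maximum signal doesn’t break that.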

Consistent with this reasoning, I didn’t find Landsburg’s book disappointing at all. But spurn Bayes, and you will find yourself regularly disappointed as a consumer, in your love life, and everywhere else. Don’t say you weren’t warned.

*Svante* on *24 Aug 2007* at *9:04 am*:

I agree that Landsburg seems to underestimate the human capability to think Bayesian.

It is common sense, learned from experience, that things that seem good are generally “too good to be true”. But that cannot always be true either, so the disappointment estimate falls into an infinite regress of over- and underestimation, which should converge to zero.

Maybe the human trait of cynicism has evolved to counterbalance this Landsburg effect.