# Being wrong may be rational

This past Sunday, economist Paul Krugman was lamenting, in a review of Justin Fox's book "The Myth of the Rational Market" (which he liked very much), that despite the current financial crisis and previous crises, such as the failure of the hedge fund Long Term Capital Management, people still believe in efficient markets as strongly as ever. The efficient market hypothesis is the basis of most of modern finance; it assumes that the price of a security is always correct and that you can never beat the market. Under it, artificial bubbles should never occur. Krugman wonders what it will take to ever change people's minds.

I want to show here that there might be no amount of evidence that will ever change some people's minds, and yet they can still be perfectly rational in the Bayesian sense. The argument applies equally to other controversial topics. I think it is generally believed in intellectual circles that the reason there is so much disagreement on these issues is that the other side is stupid, deluded, or irrational. I want to point out that believing in something completely wrong, even in the face of overwhelming evidence, can arise in perfectly rational beings. That is not to say faulty reasoning does not exist; it does, and it can be dangerous. It just explains why two perfectly reasonable and intelligent people can disagree so alarmingly.

Consider the very simple case of a hypothesis H, such as "the market is not efficient," and some data D, such as a financial crisis. Then by Bayes' rule, the probability that the hypothesis is true given the data is $P(H|D)=P(D|H)P(H)/P(D)$, where P(D|H) is the likelihood (the probability of obtaining the data given that the hypothesis is true), P(H) is the prior probability that the hypothesis is true, and P(D) is the probability of obtaining the data. Thus the odds of the hypothesis being true versus false ($\overline{H}$) are

$\displaystyle \frac{P(H|D)}{P(\overline{H}|D)} = \frac{P(D|H)}{P(D|\overline{H})}\frac{P(H)}{P(\overline{H})}$

So even if two people have identical likelihood functions (i.e. the same reasoning ability) and see the same data, they can come to completely different conclusions depending on their priors. For example, suppose two people agree that a crisis is 100 times more likely if markets are inefficient than if they are efficient. Then whatever the prior odds against an efficient market were before, they are 100 times greater after the crisis. Someone who gave even odds that the efficient market hypothesis is false now believes the odds are 100 to 1 that it is wrong. But someone who originally put the odds that the efficient market hypothesis is false at one in a million now puts them at one in ten thousand. Given enough such events, even they will eventually change their mind. However, suppose a person assigns zero prior probability to the efficient market hypothesis being false. Then they are completely unaffected by the data, and no amount of data can ever convince them. If a hypothesis has zero prior support, it can never be validated no matter what the data show.
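The odds update above can be sketched in a few lines of code. This is a minimal illustration, not anything from the original post; the function name and the specific numbers (a likelihood ratio of 100, prior odds of 1:1 and one in a million) are just the examples from the paragraph.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = likelihood ratio x prior odds."""
    return likelihood_ratio * prior_odds

# Both observers agree a crisis is 100x more likely if markets are inefficient.
LR = 100.0

# Agnostic: even prior odds (1:1) that markets are inefficient.
print(posterior_odds(1.0, LR))   # 100.0 -> now 100:1 that markets are inefficient

# Skeptic: prior odds of one in a million. Still long odds, but they moved.
print(posterior_odds(1e-6, LR))  # ~1e-4, i.e. one in ten thousand

# Dogmatist: zero prior odds. No likelihood ratio can ever move them.
print(posterior_odds(0.0, LR))   # 0.0
```

Note how the zero-prior case falls out immediately: multiplying zero by any finite likelihood ratio leaves zero, which is the whole point of the paragraph above.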

You could argue that a zero prior is faulty to start with, but that is not a failure of reasoning. In fact, it is easy to see how a prior could be zero. Suppose you are a naive student and you take a class from a Fama or a Miller who implicitly assumes that the market is efficient. You could easily end up believing that it is a law of nature like gravity. There could even be an amplification effect: the professor may harbor some doubt about the idea but, for pedagogical or other reasons, never bring it up in class. Then the next generation is even more certain, and eventually the idea becomes dogma.

So why are there efficient market doubters? I think there are probably neural mechanisms that set a minimum doubt level in every person. Some people have complete certainty in their beliefs, while others doubt everything. I believe that this doubt level is innate and could be related to genes governing certain ion channels in the brain. So some students will never completely believe in the efficient market. Given that the doubt level seems to be broadly distributed in the population, there must be advantages to maintaining that diversity. A community of pure believers is dangerous (e.g. Jonestown), but one of pure doubters may starve to death because it can never decide what to do. A balance of the two may be necessary to get things done while still having a reality check. This also means that some wrong ideas will only disappear when the zero-doubt holders take them to the grave.

## 10 thoughts on “Being wrong may be rational”

1. Daniel says:

There is some interesting research on the (unfortunately named*) right-wing authoritarian personality.

Using your terminology, a person with a RWA bent could simply be someone with a tendency towards 0 or 1 priors.

* The concept would probably be better received if it were defined in a politically-neutral way.


2. Doubt level is probably correlated with political orientation but I didn’t want to go there.


3. Carson-

Nice post. I think the probabilities and the resulting disagreement situation are a little worse than you say. Your formula can also apply to a single thinker. We are actually pretty good at holding non-transitive relationships among our priors. So we will sometimes argue different sides of an issue based on a result we think, in the moment, is "unreasonable" due to some recent experience or reframing of the problem. We also seem to change our priors to fight a person or a big abstract idea when we think winning over the opponent is more important than how things work.


4. Hi Scott,

I agree that it can be a lot worse. With multiple hypotheses it is possible to have a nonzero prior and still never have your mind changed. I was trying to show that even if you are perfectly rational, you can still be wrong. Basically, some arguments can never be settled.


5. I think the “efficient market hypothesis” can never be proved or disproved, since it is ill-defined and therefore cannot be unambiguously tested.

What does “correct price” mean, given that there is no formula for this, and fundamental analysts frequently disagree on the value of a company?

The more pragmatic definition, based on being able to "beat the market," is also too ambiguous: did Warren Buffett prove the market inefficient, or was he simply a lucky survivor among many fallen investors?

Fama has formulated market efficiency in terms of being able to profit from different types of information available, namely past prices (weak efficiency), public (semi-strong), and both public and private information (strong efficiency), but I still think it’d be a nightmare–if not impossible–to test these hypotheses.


6. Artur,

Are you saying that events simply give no information to update priors?


I guess in your Bayesian language I'm saying that there are no priors, since the hypotheses are not defined to begin with.

Can you outline an unambiguous test of “efficiency”, whatever that means?


In the Bayesian view, a probability can be assigned to any notion, statement, or concept. The question, then, is whether an event changes the prior, which depends on the likelihood function. You believe that events can't update your prior about efficiency, so your likelihood function is flat. I was just pointing out that two people with the same likelihood function could still come to different conclusions. You're interested in the scientific question of whether you can prove efficiency. That is an interesting question, but I was talking about the meta question of why people would believe in efficiency and what would change their minds.


9. […] work, some arguments can never be resolved. This includes political and economic issues (e.g. efficient markets) and also the debate between evolution and creationism.  I think many scientists feel that the way […]


10. […] For a more technical treatment of Bayesian inference, see here. I posted previously (see here) that I thought that drastic differences in prior probabilities is why people don’t seem to […]
