Boltzmann’s Brain and Universe

One of the recent results of string theory is the revitalization of an old idea for the origin of the universe first proposed by Boltzmann. This was nicely summarized in an article by Dennis Overbye in the New York Times. Cosmologist Sean Carroll has also blogged about this multiple times (e.g. see here and here). Boltzmann suggested that the universe, which is not in thermal equilibrium, could have arisen as a fluctuation from a bigger universe in a state of thermal equilibrium. (This involves issues of the second law of thermodynamics and the arrow of time, which I’ll post on at some later point.) A paper by Dyson, Kleban and Susskind in 2002 set off a round of debates in the cosmology community because this idea leads to what is now called the Boltzmann brain paradox. The details are nicely summarized in Carroll’s posts. Basically, the idea is that if a universe could arise out of a quantum fluctuation, then a disembodied brain should also be able to pop into existence, and since a brain is much smaller than the entire universe, it should be far more probable. So why is it that we are not disembodied brains?

I had two thoughts when I first heard about this paradox. The first was: how do you know you’re not a disembodied brain? The second was that it is not necessarily true that the brain is simpler than the whole universe. What the cosmologists seem to be ignoring or discounting is nonlinear dynamics and computation. The fact that the brain is contained in the universe doesn’t mean it must be simpler. They don’t take into account the possibility that the Kolmogorov complexity of the universe, i.e. the length of its shortest description, is smaller than that of the brain. So although the universe is much bigger than the brain and contains many brains among other things, it may in fact be less complex. Personally, I happen to like the spontaneous fluctuation idea for the origin of our universe.
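To make this concrete, here is a toy illustration in Python (my own, not from any of the papers above). Kolmogorov complexity is uncomputable, so this can only gesture at the idea: a law-governed "universe" is stood in for by a rule-110 cellular automaton whose entire million-cell history is pinned down by a program a few hundred characters long, while a "brain" assembled by pure chance is stood in for by random bits, which with overwhelming probability have no description much shorter than themselves.

```python
import inspect
import zlib
import numpy as np

def universe(width=1000, steps=1000):
    """A deterministic toy universe: the rule-110 cellular automaton grown
    from a single live cell.  The whole history is fixed by these few lines,
    so its Kolmogorov complexity is bounded by the length of this program."""
    rule = np.array([0, 1, 1, 1, 0, 1, 1, 0], dtype=np.uint8)  # rule 110 lookup table
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1
    history = [row]
    for _ in range(steps - 1):
        row = rule[4 * np.roll(row, 1) + 2 * row + np.roll(row, -1)]
        history.append(row)
    return np.array(history)

u = universe()                                             # 10^6 cells from ~10 lines of code
brain = np.random.default_rng(0).integers(0, 2, 50_000).astype(np.uint8)
# the "brain": 50,000 bits of structure produced by chance, with no generating rule

def compressed_bytes(bits):
    """Generic compression, used only as a crude upper bound on description length."""
    return len(zlib.compress(np.packbits(bits).tobytes()))

print("upper bound on K(universe): about",
      len(inspect.getsource(universe)), "characters (the generating program)")
print("K(random 'brain') is essentially its own length:",
      compressed_bytes(brain), "bytes (random bits do not compress)")
print("zlib bound on the universe:", compressed_bytes(u.ravel()),
      "bytes -- a generic compressor cannot discover the rule, but the program can")
```

The punchline is in the bounds: the big, intricate object has a tiny description because it is generated by a simple rule, while the small chance-assembled object does not, which is exactly the possibility the paradox argument discounts.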


Cost of health care

The New York Times had a nice summary of what is known as Baumol’s cost disease as an explanation for why health care costs will always rise faster than inflation. The explanation is quite elegant in my opinion and can also explain why costs for education and the arts keep increasing at a rapid rate. The example Baumol (and his colleague Bowen) used is that it takes the same number of people to play a Mozart string quartet as it did in the 18th century, and yet musicians are paid so much more now. Hence, the cost of music has increased with no corresponding increase in productivity. Generally, wages should only increase because of a net gain in productivity. Hence, a manufacturing plant today has far fewer people than a century ago but they get paid more and produce more. However, a violinist today is exactly as efficient as she was a century ago. Baumol argued that it is competition with other sectors of the economy that allows the wages of artists to go up. If you didn’t give musicians a living wage then there would be no musicians.
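A back-of-the-envelope sketch makes the mechanism plain. The numbers below are made up for illustration: wages in both sectors track economy-wide productivity growth, but only the "widget" sector actually becomes more productive, so the real unit cost of a string quartet climbs steadily while the widget’s stays flat.

```python
# Toy illustration of Baumol's cost disease (all numbers are invented).
years = 50
productivity_growth = 0.02        # 2% per year in the progressive sector
wage = 1.0                        # common wage, normalized
widgets_per_worker = 1.0          # progressive sector: output per worker grows
quartets_per_musician = 1.0       # stagnant sector: fixed forever

for year in range(years + 1):
    widget_cost = wage / widgets_per_worker
    quartet_cost = wage / quartets_per_musician
    if year % 10 == 0:
        print(f"year {year:2d}: widget cost {widget_cost:.2f}, "
              f"quartet cost {quartet_cost:.2f}")
    wage *= 1 + productivity_growth             # wages rise with overall productivity
    widgets_per_worker *= 1 + productivity_growth
```

After fifty years the widget still costs 1.00 in wage-adjusted terms, while the quartet costs about 2.7 times as much, even though nothing about how quartets are produced has changed.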

Applied to the health care industry, the implication is that medicine is just as labour intensive and no more productive than it was before, yet salaries keep going up. I think this is not quite correct, and that it is the complement or corollary of cost disease, which I’ll call productivity disease, that is the culprit for health care cost increases. Health care is substantially more productive and efficient than before, but this increase in productivity does not decrease cost, it increases it. For example, consider the victims of a car crash. Fifty years ago, they would probably just die and there would be no health care costs. Now, they are evacuated by emergency personnel who resuscitate them on the way to the hospital, where they are given blood transfusions, undergo surgery, and so on. If they survive, they may then require months or years of physical therapy, expensive medication and so forth. The increase in productivity leads to more health care and an increase in cost. Hence, the better the health care industry gets at keeping you alive, the more expensive it becomes.

I feel that the panic over the rapid increase in health care costs is misplaced. Why shouldn’t a civilized society spend 90% of its GDP on health care? After all, what is more important, being healthy or having a flat screen TV? I do think that eventually cost disease, or productivity disease, will saturate. Perhaps we are at the steepest part of the cost curve right now because our health care system is good at keeping people alive but does not make them well enough that they don’t need extended and expensive care. If technology advanced to the point that illness and injury could be treated instantly, then costs would surely level off or even decrease. For example, a minor cut a century ago could lead to a severe infection requiring hospitalization and expensive treatment, and could result in death. Now, you can treat a cut at home with some antibiotic ointment and a bandage. We can certainly try to restrain some abuses and overuse of the health care system by changing the incentive structure, but perhaps we should also accept that a continuous increase in health care costs is inevitable and even desirable.

Wall Street Compensation

The popular zeitgeist seems to be that we need to put a cap on Wall Street bonuses.  However, I believe that everyone is missing the point.  The real question is why banks are so profitable that they can give out huge bonuses at all.  What should a company do with huge profits?  Isn’t it more egalitarian in some sense to give it to the employees rather than the shareholders?

So why are banks so profitable? I can see at least three possible reasons. The first is that banks have very large fluctuations in revenue, so in up years they reap huge profits but then they lose a lot in down years. However, since the government bails them out when they go down, they have an effective ratchet so that they only ever make money. The second is that the people on Wall Street are just so much smarter and more talented that they simply create more wealth. The third reason is that Wall Street banks have an effective monopoly.

So let’s break down these hypotheses. The first one probably has some merit because these banks take highly leveraged positions so that they can make a lot of money. If you borrow and bet big then you’ll win or lose really big. Getting bailed out whenever you lose sure comes in handy as well. I can’t fully buy the second explanation. I’m sure the people on Wall Street are very smart, but I doubt that they’re a lot smarter than people in Silicon Valley, big pharma or academia, which are all less profitable (especially academia). A physicist on Wall Street could easily make (at least in the heyday) ten to a hundred times more than a full professor at a prestigious university, but there is no way she is ten times smarter. Now, some would argue that the professor may lack personality traits that are necessary for success on the street (or may be unaware that she could be making more money), and that may be partially true, but I think there are plenty of aggressive, smart, quantitative people who are not taking in huge bonuses. While the likes of Warren Buffett and George Soros may simply be better than everyone else, their talents are nonalgorithmic and not what the big banks use to make money.

That then brings us to explanation three. There must clearly be barriers to entry and advantages to being big that the banks enjoy. If there were no such advantages then there would be more firms and more people on the street eroding the profitability. Microsoft does so well because it is a monopoly. The big three US car companies only did really well when they were an oligopoly. Google has an effective monopoly (except in China).

So what exactly are these barriers? Well, I think one is that the fixed costs of doing business on the street are pretty high, so being big gives you a definite edge. Small entities simply can’t compete because they lack the infrastructure, personnel and capital. Now, if the advantage increases as you get larger, then you end up with a classic winner-take-all network: the firms that have a slight edge get bigger, which amplifies their advantage and crowds out the weaker firms. Being big also allows you to make bigger bets. The repeal of the Glass-Steagall Act, which had separated commercial banking from investment banking, also gave an advantage to the banks, because they had access to even more capital to leverage into even bigger bets. There is probably some collusion between the big banks as well to keep other firms out. Hence, it seems to me that if we want to curb Wall Street excesses then reducing bonuses is not the answer at all. That may be the only fair thing taking place on Wall Street. What we really should do is re-institute Glass-Steagall and eliminate the monopoly power the banks have.
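The winner-take-all dynamic is easy to see in a toy model (my own sketch, not anything specific to banking): twenty nearly identical firms grow each round at a rate proportional to their current market share, so a tiny initial edge compounds until a handful of firms dominate.

```python
import numpy as np

# Minimal rich-get-richer sketch: growth rate rises with market share.
rng = np.random.default_rng(1)
size = rng.uniform(0.9, 1.1, 20)     # 20 firms, almost the same size at the start

for _ in range(300):
    share = size / size.sum()
    size = size * (1 + share)        # bigger share -> faster growth

share = np.sort(size / size.sum())
print(f"largest firm's share: {share[-1]:.0%}, "
      f"top 3 firms' share: {share[-3:].sum():.0%}")
```

With no feedback the shares would stay roughly equal forever; with even this mild feedback, one firm ends up with essentially the whole market.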

Connecting the dots

As I posted previously, I am highly skeptical that any foolproof system can be developed to screen for potential threats. My previous argument was that in order to have a zero false negative rate for identifying terrorists, it is impossible to avoid a fairly significant false positive rate. In other words, the only way to guarantee that a terrorist doesn’t board a plane is to not let anyone board. A way to visualize this graphically is with what is called a receiver operating characteristic or ROC curve, which is a plot of the true positive rate versus the false positive rate for a binary classification test as some parameter, usually a discrimination threshold, is changed. Ideally, one would like a curve that jumps to a true positive rate of 1 at a false positive rate of zero. The area under the ROC curve (AROC) is the usual measure of how good a discriminator is, so a perfect discriminator has AROC = 1. In my experience with biological systems, it is pretty difficult to make a test with an AROC greater than 0.9. Additionally, ROC curves are usually somewhat smooth, so they only reach a true positive rate of 1 at a false positive rate of 1.
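Here is a minimal sketch of such an ROC curve, with made-up numbers: "threat" and "non-threat" screening scores are drawn from two overlapping normal distributions, and sweeping the decision threshold traces out the true positive rate against the false positive rate.

```python
import numpy as np

rng = np.random.default_rng(0)
innocent = rng.normal(0.0, 1.0, 100_000)   # scores for the innocent majority
threat = rng.normal(1.5, 1.0, 100)         # scores for the rare true threats

thresholds = np.linspace(-5, 7, 500)
tpr = np.array([(threat >= t).mean() for t in thresholds])    # true positive rate
fpr = np.array([(innocent >= t).mean() for t in thresholds])  # false positive rate

# Area under the ROC curve: the probability that a random threat scores
# higher than a random innocent traveler.  With this much overlap it comes
# out in the mid-0.8s, in line with the "hard to beat 0.9" remark above.
aroc = np.trapz(tpr[::-1], fpr[::-1])
print(f"AROC = {aroc:.2f}")

# To catch every threat (TPR = 1) the threshold must drop below the lowest
# threat score, and a large fraction of innocent travelers get flagged too.
idx = np.where(tpr >= 1.0)[0][-1]          # highest threshold that still catches everyone
print(f"FPR required for TPR = 1: {fpr[idx]:.2f}")
```

The second printout is the point of the post: the smooth curve only reaches a perfect catch rate by flagging a substantial fraction of the innocent.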

Practicalities aside, is there any mathematical reason why a perfect or near perfect discriminator couldn’t be designed? This to me is the more interesting question. My guess is that deciding whether a person is a terrorist is an NP-hard question, which is why it is so insidious. For a problem in NP, it is easy to verify a proposed answer but it can be very hard to find one. Connecting all the dots to show that someone is a terrorist is a straightforward matter if you already know that they are a terrorist. This is also true of proving the Riemann Hypothesis or solving the 3D Ising model: the solution is easy to check once you know the answer. If terrorist finding is NP-hard, then for a large enough population, and I think 5 billion qualifies, no method and no achievable amount of computational power would be sufficient to do the job perfectly.
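Whether terrorist finding really is NP-hard is of course my conjecture, but the verify-versus-find asymmetry itself is easy to demonstrate with a standard NP-complete stand-in such as subset sum. In the sketch below (the numbers and names are mine, purely for illustration), checking a proposed "certificate" takes a single pass, while finding one by brute force means searching through up to 2^n subsets.

```python
from itertools import combinations

numbers = [31, 44, 7, 92, 15, 23, 61, 5, 38, 77, 12, 50, 29, 84, 3, 68,
           41, 19, 55, 90, 26, 73, 9, 47]
target = 200        # a solution happens to exist for this instance

def verify(subset, target):
    """Polynomial-time check: connecting the dots once you already have them."""
    return sum(subset) == target

def brute_force(numbers, target):
    """Exhaustive search: up to 2^n candidate subsets in the worst case."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if verify(subset, target):
                return subset
    return None

certificate = brute_force(numbers, target)
print("found:", certificate, "-> verifies:", verify(certificate, target))
```

Doubling the length of the list squares the number of subsets the search may have to examine, which is the sense in which, for a large enough population, no achievable amount of computation keeps up, even though any particular dossier is trivial to check after the fact.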