One of the nice consequences of the finiteness of human existence is that there can exist complete solutions to some of our problems. For example, I used to leave the gasoline (petrol for non-Americans) cap of my car on top of the gas pump every once in a while. This has now been completely solved by the ludicrously simple solution of tethering the cap to the car. I could still drive off with the gas cap dangling but I wouldn’t lose it. The same goes for locking myself out of my car. The advent of remote control locks has eliminated this problem as well. Because human reaction time is finite, there is also an absolute threshold for internet bandwidth above which the web browser will seem instantaneous for loading pages and simple computations. Given our finite lifespan, there is also a threshold for the amount of disk space required to store every document, video, and photo we will ever want. The converse is that there are also more books in existence than we can possibly read in a lifetime, although there will always be just a finite number of books by specific authors that we may enjoy. I think one strategy for life is to make finite as many things as possible because then there is a chance for a complete solution.
Don’t miss Steve Strogatz’s new series on math in the New York Times. Once again Steve manages to make math both interesting and understandable.
In an effort to make published science less wrong, psychologist Brian Nosek and collaborators have started what is called the Open Science Framework. The idea is that all results from experiments can be openly documented for everyone to see. This way, negative results that would otherwise be locked away in the proverbial “file drawer” will be available. In light of the fact that many high impact results turn out to be wrong (e.g., see here and here), we definitely needed to do something, and I think this is a good start. You can hear Nosek talk about this on Econtalk here.
The game theory world was stunned recently when Bill Press and Freeman Dyson found a new strategy for the iterated prisoner’s dilemma (IPD) game. They show how you can extort an opponent such that the only way they can maximize their payoff is to give you an even higher payoff. The paper, published in PNAS (link here) with a commentary (link here), is so clever and brilliant that I thought it would be worthwhile to write a pedagogical summary for those who are unfamiliar with some of the methods and concepts they use. This paper shows how knowing a little bit of linear algebra can go a really long way towards exploring deep ideas.
In the classic prisoner’s dilemma, two prisoners are interrogated separately. They have two choices. If they both stay silent (cooperate) they each get a year in prison. If one confesses (defects) while the other stays silent then the defector is released while the cooperator gets 5 years. If both defect then they both get 3 years in prison. Hence, even though the highest utility for both of them is to cooperate, the only logical thing to do is to defect. You can watch this played out on the British television show Golden Balls (see example here). Usually the payout is expressed as a reward: if they both cooperate they both get 3 points, if one defects and the other cooperates then the defector gets 5 points and the cooperator gets zero, and if they both defect they both get 1 point each. Thus, the combined reward is higher if they both cooperate, but since they can’t trust their opponent it is only logical to defect and get at least 1 point.
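The dominance argument above can be checked mechanically. Here is a minimal sketch in Python (the `PAYOFF` table and `best_response` helper are my own illustrative names, not anything from the paper) showing that defection is the best response to either opponent move:

```python
# Reward-form payoff matrix for the one-shot prisoner's dilemma.
# PAYOFF[(my_move, their_move)] = (my_points, their_points); "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda mine: PAYOFF[(mine, their_move)][0])

# Defection dominates: whatever the opponent does, "D" pays more.
print(best_response("C"))  # D  (5 > 3)
print(best_response("D"))  # D  (1 > 0)
```

Since "D" wins against both possible opponent moves, a rational one-shot player always defects, even though mutual cooperation would give the pair 6 combined points instead of 2.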
The prisoner’s dilemma changes if you play the game repeatedly because you can now adjust to your opponent, and it is not immediately obvious what the best strategy is. Robert Axelrod brought the IPD to public attention when he organized a tournament three decades ago. The results are published in his 1984 book The Evolution of Cooperation. I first learned about the results in Douglas Hofstadter’s Metamagical Themas column in Scientific American in the early 1980s. Axelrod invited a number of game theorists to submit strategies to play IPD, and the winning strategy, submitted by Anatol Rapoport, was called tit-for-tat, where you always cooperate first and then do whatever your opponent did on the previous round. Since this was a cooperative strategy with retribution, people have been using it as an example of how cooperation could evolve ever since. Press and Dyson now show that you can win by being nasty. Details of the calculations are below the fold.
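Tit-for-tat is simple enough to state in a few lines of code. Here is an illustrative sketch (the `play` harness and strategy names are my own, not from Axelrod's tournament code) that pits it against itself and against an always-defect opponent:

```python
# Reward-form payoff matrix, as in the one-shot game above.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    # Cooperate on the first round, then copy the opponent's last move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; history is a list of (a_move, b_move) pairs."""
    history, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        a = strategy_a(history)
        # b sees the history from its own perspective, with moves swapped.
        b = strategy_b([(y, x) for x, y in history])
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history.append((a, b))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one exploited round, then mutual defection
```

Against itself, tit-for-tat locks into cooperation and both sides collect the full 3 points per round; against a defector it loses only the first round and then retaliates, which is what made it so robust in Axelrod's tournament.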