Altruism and Tribalism

There has always been a puzzle in evolutionary biology as to how altruism arose. At first blush, sacrificing oneself for another would seem detrimental to passing on the genes that foster altruism. However, Darwin himself thought that altruism could arise if humans were organized into hostile tribes. In The Descent of Man he notes that the tribes with more “courageous, sympathetic and faithful members who were always ready to…aid and defend each other… would spread and be victorious over other tribes.” A recent paper in Science by Samuel Bowles presents a calculation that supports Darwin’s hypothesis.

If this hypothesis is correct, then altruism required lethal hostility in order to flourish and survive. Our capacity for great acts of sacrifice and empathy may go hand in hand with our capacity for brutality and selfishness. It may be why a person can simultaneously be a racist and a humanist. It may also mean that the sectarian violence we are currently witnessing, and have witnessed throughout history, is as much a part of being human as caring for an ailing neighbor or taking a bullet for a friend. It may even be that the more homogeneous we become, the less altruistic we will be. Perhaps there is an important societal role for spectator sports. Cheering for the home team may give us the sense of tribalism and triumph that we need. Maybe, just maybe, hating that cross-town rival makes us kinder in the office and on the roads. What irony that would be.

The Hopfield Hypothesis

In 2000, John Hopfield and Carlos Brody put out an interesting challenge to the neuroscience community. They came up with a neural network, constructed out of simple, well-known neural elements, that could do a simple speech recognition task. The network was robust both to noise and to the speed at which the sentences were spoken. They conducted some numerical experiments on the network and provided the “data” to anyone interested. People were encouraged to submit solutions explaining how the network worked, and Jeff Hawkins of Palm Pilot fame kicked in a small prize for the best answer. The initial challenge with the mock data and the implementation details were published separately in PNAS. Our computational neuroscience journal club at Pitt worked on the problem for a few weeks. We came pretty close to getting the correct answer but missed one crucial element.

Hopfield presented the model as a challenge to serve as an example that sometimes more data won’t help you understand a problem. I’ve extrapolated this thought into the statement that perhaps we already know all the neurophysiology we need to understand the brain but just haven’t put the pieces together in the right way yet. I call this the Hopfield Hypothesis. I think many neuroscientists believe that there are still many unknown physiological mechanisms to be discovered, and hence that what we need are not more theories but more experiments and data. Even some theorists hold this view. I personally know one very prominent computational neuroscientist who believes there may be some as-yet-undiscovered mechanism that is essential for understanding the brain.

Currently, I’m a proponent of the Hopfield Hypothesis. That is not to say I don’t think there are mechanisms, and important ones at that, yet to be discovered. I’m sure there are, but I do think that much of how the brain functions could be understood with what we already know: the brain is composed of populations of excitatory and inhibitory neurons whose connections obey synaptic plasticity rules such as long-term potentiation and spike-timing-dependent plasticity, along with adaptation mechanisms such as synaptic facilitation, synaptic depression, and spike frequency adaptation that operate on multiple time scales. Thus far, using these mechanisms we can construct models of working memory, synchronous neural firing, perceptual rivalry, decision making, and so forth. However, we still don’t have the big picture. My sense is that neural systems are highly scale dependent, so as we begin to analyze and simulate larger and more complex networks, we will find new, unexpected properties and get closer to figuring out the brain.
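As a toy illustration of the ingredients listed above, the sketch below couples an excitatory and an inhibitory rate population with a slow adaptation variable. To be clear, this is not the Hopfield–Brody network; the coupling weights and time constants are invented for illustration, not fit to any data.

```python
import numpy as np

def f(x):
    """Sigmoidal firing-rate function, bounded between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate(T=2000.0, dt=0.1):
    """Euler-integrate a Wilson-Cowan-style excitatory (E) and
    inhibitory (I) pair with a slow adaptation variable a acting on E.
    All parameter values are illustrative."""
    E, I, a = 0.1, 0.1, 0.0
    tau_e, tau_i, tau_a = 10.0, 10.0, 200.0  # adaptation is the slow variable
    history = []
    for _ in range(int(T / dt)):
        # E excites both populations; I inhibits E; a subtracts from E's drive
        dE = (-E + f(12 * E - 10 * I - 4 * a + 1.5)) / tau_e
        dI = (-I + f(10 * E - 2 * I)) / tau_i
        da = (-a + E) / tau_a  # adaptation slowly tracks E activity
        E += dt * dE
        I += dt * dI
        a += dt * da
        history.append(E)
    return np.array(history)

rates = simulate()
print(len(rates), rates.min(), rates.max())
```

Even this minimal circuit shows how a slow negative-feedback variable can reshape population dynamics, which is the kind of building block the mechanisms above provide.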

Wealth and Taxes

Milton Friedman, the renowned Chicago economist, died yesterday at the age of 94. His monetary and free-market ideas have strongly influenced US and world economic policy for the past quarter century. However, the most recent US elections suggest that there may be a mood shift taking place.

Currently, the US government spends about 25% of the gross domestic product (GDP), funding most of this through taxes and a not-insignificant portion through borrowing. The current cost of Social Security is about 4.2% of GDP and is projected to rise to 6.3% by 2030. Medicare’s annual cost currently represents 2.5% of GDP but is rising very rapidly; it is projected to pass Social Security expenditures in 20 years and to reach 11% of GDP by 2080. You can find all of this information and more at http://www.socialsecurity.gov. Thus, unless the US curtails benefits and spending or increases taxes, it is headed for a budget crisis.

The argument of the free-marketeers is that we need to move to some form of personal savings accounts: instead of contributing to the government’s Social Security system, you save for your retirement yourself. The program would be modeled after the 401(k) tax-deferred retirement plan, in which companies, instead of providing a defined-benefit pension plan, match contributions to the employee’s own 401(k). This transfers the risk from the employer and the government onto the individual.

The idea of Milton Friedman and his followers is that the government should reduce both spending and taxes, and the result will be higher economic growth and prosperity for all. It is probably true that lowering taxes helps to increase the wealth of those already well off. However, there is a huge dissipative drag on wealth creation: unless you are above some income threshold, extra income probably just goes toward expenses or pays down debt.

For most of the population, the largest expense is housing. House prices and rents are mostly set by the market, so if the market is tight, any increase in income probably just gets capitalized into higher real estate prices. There is an argument that one consequence of women entering the workforce was that houses became more expensive. While everyone seems to think that real estate is a great investment (and maybe it would be if you bought rental property), you cannot realize any financial gain until you sell, and unless you plan on downsizing or moving someplace where real estate is cheaper, you won’t see the returns as extra income for wealth generation.

The two other major and rapidly growing expenses for the average person are healthcare and college tuition. There is a lot of talk about how to reduce costs, but I think the increase in cost is real. Thirty years ago, there were limited medical tests and treatments: no MRIs, no PET scans, no costly medications for chronic conditions, and so forth. For those who have access to good health care, the increased cost is probably worth it. Likewise, in the past universities needed little more than books, blackboards, and chalk. Now there are computers, wireless internet, shuttle buses to drive students home, extra security for dormitories, 24-hour gyms, and so forth.

The commonality between healthcare and education is that they are necessarily collectively run institutions. The choice is whether to run them privately or publicly. If the choice is to go private, then public funding can be reduced and the savings returned to the taxpayers, who must then pay for these services themselves. However, the likely result is that you only get the service you can afford. A tax cut that increases income by 10% or 20% makes a huge difference if you have a large income but very little if you don’t.
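The arithmetic behind that last point is worth making explicit. The sketch below deliberately simplifies by treating the cut as a flat percentage of gross income; real tax cuts act on marginal rates, but the absolute-dollar asymmetry is the same.

```python
def after_tax_gain(income, cut=0.10):
    """Extra dollars in hand if a tax cut raises take-home pay by
    `cut` of gross income. A deliberate simplification: real cuts
    apply to marginal tax rates, not to gross income."""
    return income * cut

# The same 10% boost is worth very different amounts in dollars:
low = after_tax_gain(30_000)    # $3,000 for a $30,000 earner
high = after_tax_gain(500_000)  # $50,000 for a $500,000 earner
print(low, high)
```

Three thousand dollars is real money, but fifty thousand is enough to buy healthcare and tuition outright, which is the crux of the private-versus-public choice.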

If we decide to fund these and other services publicly, then we’ll need to raise taxes. The problem is that there is a ratchet effect. With the real estate bubble of the past five years, half the population has bought houses they can barely afford and the other half has cashed out the increased value of their houses in home equity loans and spent it. The result is that even a small increase in taxes could hurt a lot of families. So if there is a tax increase, the only viable way to do it is to tax only those who can afford it.

War on Obesity

There was a humorous and somewhat sad article in the New York Times last Sunday on the stigmatization of the obese. The article points to a recent research paper calculating that, because of Americans’ increasing girth, a billion extra gallons of gasoline (petrol for you Europeans) are burned each year. That means an extra 3.8 million tons of carbon dioxide. So, yes, obesity is now linked to climate change.

There is a lot of talk these days about the obesity epidemic and what to do about it. Many people still believe it is a lifestyle choice. The molecular biologists in the field believe that it is a genetic problem that can only be solved pharmaceutically. Not surprisingly, those most vocal about the magic-pill fix also seem to have the most patents and biotech ventures on the side. While both of these points of view are probably true in some sense, they both miss the point. I think the main reason people are gaining weight is that, in our current environment, it is the natural thing to do.

We live in a world where food is extremely cheap and plentiful and exercise is optional. The most logical thing to do, it seems, is to gain weight, and plenty of it. The health consequences of this extra fat will likely not affect most people for many years. Although the incidence of insulin resistance and diabetes is increasing, it is still not clear whether moderate weight gain is really all that bad. To quote Katherine Flegal of the Centers for Disease Control and Prevention from the Times article: “Yes, obesity is to blame for all the evils of modern life, except somehow, weirdly, it is not killing people enough. In fact that’s why there are all these fat people around. They just won’t die.”

So what should we do about it? After three years in this field, I’ve come to the conclusion that there really isn’t much we can do on the individual level. Our metabolic systems are so geared toward acquiring calories that I believe any pharmaceutical option will likely not be effective in the long run and/or will have many side effects. From studies our lab has done on food records, it is quite clear that people generally have no idea how much they eat. I doubt people can will themselves to lose weight. I think the only thing that would work is a wholesale change in our society that increases the cost or reduces the availability of food and motorized transportation. This is definitely not going to happen by choice or design. So barring a great depression or massive crop failure (which could happen), I think we’re just going to have to live with all the extra weight.
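The food-record point can be made with a back-of-envelope calculation. Using the traditional rule of thumb of roughly 3500 kcal per pound of body fat, a daily reporting error far too small to notice adds up over a year. This static model ignores metabolic compensation, which makes real weight change smaller, so treat it as a rough upper bound.

```python
KCAL_PER_LB_FAT = 3500  # traditional rule of thumb; real energy balance is dynamic

def annual_gain_lbs(daily_surplus_kcal):
    """Crude static estimate of yearly weight gain from a constant
    daily calorie surplus. Ignores metabolic compensation, so it
    overestimates real-world weight change."""
    return daily_surplus_kcal * 365 / KCAL_PER_LB_FAT

# A 50 kcal/day error (well under one cookie) is invisible in a food
# record, yet the static estimate is over 5 lb per year.
print(round(annual_gain_lbs(50), 1))
```

If people can’t even perceive an error of that size, willpower-based calorie counting is fighting blind, which is part of why I’m pessimistic about individual-level fixes.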

Strong AI

In the late eighties and early nineties, Roger Penrose, in two books (The Emperor’s New Mind and Shadows of the Mind), presented an argument that the brain cannot be algorithmic and thus that the AI program is doomed to failure. Unfortunately, he also proposed that a new theory of quantum gravity may be necessary to understand the brain and consciousness, and so his ideas were largely ignored by the neuroscience community. However, I think his argument for the noncomputational aspect of brain function was actually well thought out and deserved more attention. I personally believe the argument is flawed, but it does stir up some interesting questions.

Penrose’s argument is essentially based on the theorems of Gödel, Turing, and Church. Gödel showed that for any consistent formal system rich enough to express arithmetic, there will be statements that are true but not provable within that system. Hence, such formal systems are incomplete: there will always be undecidable statements. Turing then showed that no computer (or any algorithmic system) can decide, for every program, whether it will halt; in particular, there exist programs that we can prove will never stop, yet no computation on that computer can ever determine this fact. Penrose then argued that since we (at least Turing and Gödel) can determine the truth of such undecidable statements, we (they) could not be doing so computationally or algorithmically.
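Turing’s result can be sketched with the classic diagonal argument. The code below is a proof sketch rather than a working decider: `halts` stands in for the hypothetical oracle that the theorem shows cannot exist, so here it simply raises.

```python
def halts(program, data):
    """Hypothetical oracle: would return True iff program(data) halts.
    Turing's theorem says no total, correct implementation can exist,
    so this placeholder just raises."""
    raise NotImplementedError("no algorithm can decide halting in general")

def diagonal(program):
    """If halts() existed, this function would contradict it:
    diagonal(diagonal) would halt exactly when halts() says it doesn't."""
    if halts(program, program):
        while True:  # loop forever if the oracle says we halt
            pass
    return  # halt if the oracle says we loop

# Feeding diagonal to itself is the contradiction: whatever answer
# halts(diagonal, diagonal) gave would be wrong. With our placeholder
# oracle, the call simply raises NotImplementedError.
```

The self-referential trick is the same one driving Gödel’s theorem, which is why the two results sit side by side in Penrose’s argument.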

The implications are quite profound. It means more than just the futility of traditional AI. It also means the brain cannot even be simulated on a computer, because any simulation on an algorithmic machine implies the outputs are also algorithmic. Pushing it further, if the brain is based on physical principles, then physics itself (or at least aspects of it) can’t be simulated on a computer either. This is why Penrose was led to postulate that there must be some new physics out there that is beyond computation. The idea is really not that crazy if you think about it. However, it is definitely not airtight.

I think the hole in Penrose’s argument is his belief that we actually can circumvent Gödel’s theorem and decide undecidable problems. I don’t think this is necessarily true. We don’t know what formal system our brain happens to be using, so we don’t know which statements are true but unprovable for us. The ability to prove Gödel’s theorem, and to decide truths for other formal systems that are not our own, could itself be implemented computationally. So the existence of Gödel’s and Turing’s theorems does not necessarily imply that the brain is noncomputational.

Furthermore, it is doubtful that the formal system of our brains is even constant in time or conserved between individuals. More likely, our brain, and hence our formal system, is constantly changing because of random environmental inputs. Thus, Penrose’s argument for the futility of traditional AI may be correct: a truly human-like intelligent machine couldn’t be built from a fixed, knowable formal system. It may need to arise from a massively parallel learning system that constantly changes its axioms, so that even if you could measure the formal system at some point in time, it would have changed before you could use that knowledge. This would be the equivalent of an uncertainty principle for the brain.

Penrose also rules out randomness as a way of breaking algorithmicity. He argues that randomness can be mimicked by an algorithmic pseudo-random number generator. I don’t see why this must be the case. Perhaps true randomness is beyond computation. This then leads to the question of where randomness actually comes from. Perhaps it is a vestige of the initial conditions of the universe. And where did those come from? Well, we may need a theory of quantum gravity to figure that one out. Hmm, maybe Penrose was right after all. :)

Fractional Reserve Banking and Inflation

In a comment to a previous post, the question of why there is inflation arose. Being a complete neophyte in economics, I began to think about this question. Along the way, I discovered some very interesting things about how the monetary system works. I’m not sure I can answer the question correctly, but here is my unqualified answer.

The main mechanism behind inflation seems to be what is known as fractional reserve banking. By mechanism, I don’t mean the economic factors that drive inflation, but simply how extra money gets into the economy. When you get a bank loan, the bank doesn’t dig into its vault and hand you the money. Instead, it simply credits those dollars to your bank account. The money is basically created out of thin air. All the bank is required to do is make sure it has enough reserves to cover some fraction of its deposits. It’s a complicated formula, but it amounts to something like ten to fifteen percent. Each night, the banks must balance their books, and they do this partly by borrowing, either from each other at the federal funds rate or directly from the US Federal Reserve at the discount rate. In that way the Fed can influence the money supply in the economy. The amazing thing about this system is that in principle the money supply could be almost any size: when money is lent to you and you buy something from someone else, they can deposit that money back into the bank, which can then lend it out again while keeping only ten percent in reserve.
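The re-deposit cycle described above is just a geometric series, which is easy to check numerically. The sketch below uses the same round 10% reserve ratio mentioned above.

```python
def total_money_created(initial_deposit, reserve_ratio, rounds=100):
    """Iterate the re-deposit cycle: each round the bank keeps
    reserve_ratio of the incoming deposit as reserves and lends out
    the rest, which comes back as a new deposit. The running total
    converges to the geometric-series limit
    initial_deposit / reserve_ratio."""
    total = 0.0
    deposit = float(initial_deposit)
    for _ in range(rounds):
        total += deposit
        deposit *= (1.0 - reserve_ratio)  # loanable fraction gets re-deposited
    return total

# A $100 deposit at a 10% reserve ratio approaches $1000 of total
# deposits, i.e., a money multiplier of 1 / 0.10 = 10.
print(total_money_created(100, 0.10))
```

So the reserve ratio caps the multiplier, but within that cap the banking system as a whole manufactures most of the money supply, which is why lending conditions matter so much for inflation.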

So when interest rates are low, the money supply expands and we get inflation or a bubble. When interest rates rise, the money supply can shrink, and then we can have a slowdown in the economy, a recession, or the bursting of a bubble (as we are experiencing now in real estate). If the money supply were completely static, then a growing economy would experience deflation. (This is happening in some sectors, like electronics and food, where the cost of production is decreasing faster than inflation.) The problem with deflation is that people tend to wait before buying things, which can retard economic growth. So the Fed tries to engineer a small amount of inflation to keep things going. When the economy gets overheated, it raises rates slightly to keep things in check, which is what the Fed has done for the past two years.

I think one of the reasons why inflation has been relatively benign these past few years even with low interest rates is that the extra cash has been used to fuel the internet bubble followed by the real estate bubble and also our savings rate is so low that banks don’t have enough reserve to further inflate the money supply. However, just to make sure I end on a gloomy note, Nouriel Roubini is predicting a recession in 2007 triggered by the bursting of the housing bubble. So interest rates may actually be coming down again.