Altruism and Tribalism

There has always been a puzzle in evolutionary biology as to how altruism arose. At first blush, it would seem that sacrificing oneself for another would be detrimental to passing on genes that foster altruism. However, Darwin himself thought that altruism could arise if humans were organized into hostile tribes. In The Descent of Man, he notes that the tribes that had more “courageous, sympathetic and faithful members who were always ready to…aid and defend each other… would spread and be victorious over other tribes.” A recent paper in Science by Samuel Bowles presents a calculation that supports Darwin’s hypothesis.

If this hypothesis is correct, then altruism required lethal hostility to flourish and survive. Our capacity for great acts of sacrifice and empathy may go hand in hand with our capacity for brutality and selfishness. It may be why a person can simultaneously be a racist and a humanist. It may also mean that the sectarian violence we are currently witnessing and have witnessed throughout history is as much a part of being human as caring for an ailing neighbor or taking a bullet for a friend. Our propensity for kindness may go hand in hand with our propensity for bigotry and violence. It may be that the more homogeneous we become, the less altruistic we become. Perhaps there is an important societal role for spectator sports. Cheering for the home team may give us the sense of tribalism and triumph that we need. Maybe, just maybe, hating that cross-town rival makes us kinder in the office and on the roads. What irony that would be.

The Hopfield Hypothesis

In 2000, John Hopfield and Carlos Brody put out an interesting challenge to the neuroscience community. They came up with a neural network, constructed out of simple, well-known neural elements, that could do a simple speech recognition task. The network was robust both to noise and to the speed at which the sentences were spoken. They conducted some numerical experiments on the network and provided the “data” to anyone interested. People were encouraged to submit solutions for how the network worked, and Jeff Hawkins of Palm Pilot fame kicked in a small prize for the best answer. The initial challenge with the mock data and the implementation details were published separately in PNAS. Our computational neuroscience journal club at Pitt worked on the problem for a few weeks. We came pretty close to getting the correct answer but missed one crucial element.

Hopfield wanted to present the model as a challenge to serve as an example that sometimes more data won’t help you understand a problem. I’ve extrapolated this thought into the statement that perhaps we already know all the neurophysiology we need to understand the brain but just haven’t put the pieces together in the right way yet. I call this the Hopfield Hypothesis. I think many neuroscientists believe that there are still many unknown physiological mechanisms that need to be discovered and so what we need are not more theories but more experiments and data. Even some theorists believe this notion. I personally know one very prominent computational neuroscientist who believes that there may be some mechanism that we have not yet discovered that is essential for understanding the brain.

Currently, I’m a proponent of the Hopfield Hypothesis. That is not to say I don’t think there are mechanisms, and important ones at that, yet to be discovered. I’m sure there are, but I do think that much of how the brain functions could be understood with what we already know: the brain is composed of populations of excitatory and inhibitory neurons, with connections that obey synaptic plasticity rules such as long-term potentiation and spike-timing-dependent plasticity, and with adaptation mechanisms such as synaptic facilitation, synaptic depression, and spike frequency adaptation that operate on multiple time scales. Thus far, using these mechanisms we can construct models of working memory, synchronous neural firing, perceptual rivalry, decision making, and so forth. However, we still don’t have the big picture. My sense is that neural systems are highly scale dependent, so as we begin to analyze and simulate larger and more complex networks, we will find new unexpected properties and get closer to figuring out the brain.
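To make this concrete, here is a minimal sketch of the kind of model these known ingredients support: a Wilson-Cowan-style excitatory-inhibitory rate pair. The weights, drives, and time constants are illustrative values of my own choosing, not fit to any data and not the Hopfield-Brody network.

```python
import math

def f(x):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + math.exp(-x))

def simulate(steps=2000, dt=0.1, tau_e=1.0, tau_i=2.0):
    """Euler integration of a Wilson-Cowan-style E-I rate pair."""
    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 9.0, 3.0  # illustrative synaptic weights
    I_e, I_i = 1.0, 0.5                            # external drive to each population
    E, I = 0.1, 0.1                                # initial firing rates
    traj = []
    for _ in range(steps):
        dE = (-E + f(w_ee * E - w_ei * I + I_e)) / tau_e
        dI = (-I + f(w_ie * E - w_ii * I + I_i)) / tau_i
        E += dt * dE
        I += dt * dI
        traj.append((E, I))
    return traj
```

Adding plasticity to the weights, adaptation to the drives, and scaling from two populations to many is, roughly, how models of working memory, rivalry, and decision making get built from these same pieces.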

Wealth and Taxes

Milton Friedman, the renowned Chicago economist, died yesterday at the age of 94. His monetary and free-market ideas have strongly influenced US and world economic policy for the past quarter century. However, the most recent US elections suggest that there may be a mood shift taking place.

Currently, the US government spends about 25% of the gross domestic product (GDP), obtaining most of it through taxes and a not insignificant portion through borrowing. The current cost of Social Security is about 4.2% of GDP and is projected to rise to 6.3% by 2030. Medicare’s annual cost currently represents 2.5% of GDP but is rising very rapidly; it is projected to pass Social Security expenditures in 20 years and reach 11% by 2080. You can find all of this information and more at http://www.socialsecurity.gov. Thus, unless the US starts to curtail benefits and spending or increases taxes, it is headed for a budget crisis.

The argument by the free-marketeers is that we need to go to some form of personal savings accounts, so that instead of contributing to the government’s Social Security system, you save for your retirement yourself. The program would be modeled after the 401(k) tax-deferred retirement plan, in which companies, instead of providing a defined-benefit pension plan, match contributions to the employee’s own 401(k) plan. This transfers the risk from the employer and government onto the individual.

The idea of Milton Friedman and his followers is that the government should reduce both spending and taxes, and the result will be higher economic growth and prosperity for all. It is probably true that lowering taxes helps increase the wealth of those already well off. However, there is a huge dissipative drag on wealth creation, and unless you are above some threshold, extra income probably just goes into expenses or pays down some debt.

For most of the population, the largest expense is housing. House prices and rents are mostly set by the market, so if the market is tight, any increase in income probably just gets manifested in higher real estate prices. There is an argument that one of the consequences of women entering the workforce was that houses became more expensive. While everyone seems to think that real estate is a great investment (and maybe it would be if you bought rental property), you cannot realize any financial gain until you sell, and unless you plan on downsizing or moving someplace where real estate values are lower, you won’t see the returns as extra income for wealth generation.

The two other major and rapidly growing expenses for the average person are healthcare and college tuition. There is lots of talk about how to reduce costs, but I think the increase in cost is real. Thirty years ago, there were limited medical tests and treatments. There were no MRIs, PET scans, costly medications (especially for chronic conditions), and so forth. For those who have access to good health care, the increased cost is probably worth it. Likewise, in the past universities needed little more than books, blackboards, and chalk. Now there are computers, wireless internet, shuttle buses to drive students home, extra security for dormitories, 24-hour gyms, and so forth.

The commonality between healthcare and education is that they are necessarily collectively run institutions. The choice is whether to run them privately or publicly. If the choice is to go private, then public funding can be reduced and the cost savings can be returned to the taxpayers, who must then pay for these services themselves. However, the likely result is that you only get the service you can afford. A tax cut leading to an increase of 10% or 20% in income makes a huge difference if you have a large income but very little if you don’t.

If we decide to fund these and other services publicly, then we’ll need to raise taxes. The problem is that there is a ratchet effect. With the real estate bubble of the past five years, half the population has bought houses they can barely afford and the other half has cashed out the increased value of their houses in home equity loans and spent it. The result is that even a small increase in taxes could hurt a lot of families. So if there is a tax increase, the only viable way of doing it is to tax only those who can afford it.

War on Obesity

There was a humorous and somewhat sad article in the New York Times last Sunday on the stigmatization of the obese. The article points to a recent research paper that calculates that because of Americans’ increasing girth, a billion extra gallons of gasoline (petrol for you Europeans) are burned each year. That means an extra 3.8 million tons of carbon dioxide. So, yes, obesity is now linked to climate change.

There is a lot of talk these days about the obesity epidemic and what to do about it. Many people still believe it is a lifestyle choice. The molecular biologists in the field believe that it is a genetic problem and can only be solved pharmaceutically. Not surprisingly, those most vocal about the magic pill fix also seem to have the most patents and biotech ventures on the side. While both of these points of view are probably true in some sense, they both kind of miss the point. I think that the main reason people are gaining weight is that for our current environment, it is the natural thing to do.

We live in a world where food is extremely cheap and plentiful and exercise is optional. The most logical thing to do, it seems, is to gain weight, and plenty of it. The health consequences of this extra fat will likely not affect most people for many years. Although the incidence of insulin resistance and diabetes is increasing, it is still not clear if moderate weight gain is really all that bad. To quote Katherine Flegal of the Centers for Disease Control and Prevention from the Times article: “Yes, obesity is to blame for all the evils of modern life, except somehow, weirdly, it is not killing people enough. In fact that’s why there are all these fat people around. They just won’t die.”

So what should we do about it? After three years in this field, I’ve come to the conclusion that there really isn’t much we can do about it on the individual level. Our metabolic systems are so geared to acquiring calories that I believe any pharmaceutical option will likely not be effective in the long run and/or will have many side effects. From studies our lab has done on food records, it is quite clear that people generally have no idea how much they eat. I doubt people can will themselves to lose weight. I think the only thing that would work is a wholesale change in our society that would increase the cost or reduce the availability of food and motorized transportation. This is definitely not going to happen by choice or design. So barring a great depression or massive crop failure (which could happen), I think we’re just going to have to live with all the extra weight.

Strong AI

In the late eighties and early nineties, Roger Penrose, in two books, presented an argument that the brain cannot be algorithmic and thus the AI program is doomed to failure. Unfortunately, he also proposed that a new theory of quantum gravity may be necessary to understand the brain and consciousness, and so his ideas were largely ignored by the neuroscience community. However, I think his argument for the noncomputational aspect of brain function was actually well thought out and deserved more attention. I personally believe the argument is flawed, but it does stir up some interesting questions.

Penrose’s argument is essentially based on the theorems of Gödel, Turing, and Church. Gödel showed that for any consistent formal system rich enough to express arithmetic, there will be statements that are true but not provable within that system. Hence, such formal systems are incomplete, in that there will always be undecidable statements. Turing then showed that for any computer (or any algorithmic system), there exist programs that we can prove will never stop, but no computation on that computer can ever determine this fact. Penrose then argued that since we (at least Turing and Gödel) can determine the truth of such undecidable statements, we (they) could not be doing so computationally or algorithmically.
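Turing's construction can be paraphrased in a few lines of Python. The `halts` oracle below is hypothetical; the diagonal construction shows why no correct implementation of it can exist.

```python
def halts(program, data):
    """Hypothetical halting oracle: True iff program(data) eventually halts.
    Turing's theorem says no total, correct implementation exists."""
    raise NotImplementedError("no such oracle can be written")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about us."""
    if halts(program, program):
        while True:      # oracle says we halt, so loop forever
            pass
    else:
        return           # oracle says we loop, so halt

# Asking halts(diagonal, diagonal) is contradictory either way:
# if it returned True, diagonal would loop; if False, diagonal would halt.
```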

The implications are quite profound. It means more than just the futility of traditional AI. It also means the brain cannot even be simulated on a computer, because any simulation on an algorithmic machine implies the outputs are also algorithmic. Pushing it further, if the brain is based on physical principles, then this implies that physics itself (or at least aspects of it) can’t be simulated on a computer either. This is why Penrose was led to postulate that there must be some new physics out there that is beyond computation. The idea is really not that crazy if you think about it. However, it is definitely not airtight.

I think the hole in Penrose’s argument is his belief that we actually can circumvent Gödel’s theorem and decide undecidable problems. However, I don’t think that this is necessarily true. We don’t know what formal system our brain happens to be using, so we don’t know which undecidable statements happen to be true for us but unprovable by us. The ability to prove Gödel’s theorem, and to decide truths for other formal systems that are not ours, could be implemented computationally. So the existence of Gödel’s and Turing’s theorems does not necessarily imply that the brain is noncomputational.

Furthermore, it is doubtful that the formal system of our brains is even constant in time or conserved between individuals. More likely, our brain, and hence our formal system, is constantly changing because of random environmental inputs. Thus, Penrose’s argument for the futility of traditional AI may be correct. A truly human-like intelligent machine couldn’t be built from a fixed formal system that is knowable. It may need to arise from a massively parallel learning system that constantly changes its axioms. Thus even if you could measure the formal system at some point in time, it would have changed before you could use this knowledge. This would be the equivalent of an uncertainty principle for the brain.

Penrose also rules out a role for randomness in breaking algorithmicity. He argues that randomness can be mimicked by an algorithmic pseudo-random number generator. I don’t see why this is the case. Perhaps true randomness is beyond computation. This then leads to the question of where randomness actually comes from. Perhaps it is a vestige of the initial conditions of the universe. And where did those come from? Well, we may need a theory of quantum gravity to figure that one out. Hmm, maybe Penrose was right after all. :)

Fractional Reserve Banking and Inflation

In a comment to a previous post, the question of why there is inflation arose. Being a complete neophyte in economics, I began to think about this question. Along the way, I discovered some very interesting things about how the monetary system works. I’m not sure if I can answer the question correctly but here is my unqualified answer.

The main mechanism behind inflation seems to be what is known as fractional reserve banking. By mechanism, I don’t mean what economic factors drive inflation, but simply how extra money gets into the economy. When you get a bank loan, the bank doesn’t dig into its vault and hand you the money. Instead, it simply puts those dollars into your bank account. The money is basically created out of thin air. All the bank is required to do is make sure that it has enough reserves to cover some fraction of its loans. It’s a complicated formula, but it amounts to something like ten to fifteen percent. Each night, the banks must balance their books, and they partially do this by borrowing money from the US Federal Reserve, which lends at the Fed rate. In that way the Fed can influence the money supply in the economy. The amazing thing about this system is that in principle the money supply could be any size. When money is lent to you and you buy something from someone else, they can deposit that money back into the bank, which can then lend it out again while keeping only ten percent in reserve.
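The relending arithmetic can be sketched directly. This is the textbook money-multiplier toy model, not a description of actual Fed accounting:

```python
def deposits_created(base_money, reserve_ratio, rounds=1000):
    """Iterate the deposit -> loan -> redeposit cycle.
    Each round the bank keeps reserve_ratio of the deposit as reserves
    and lends out the rest, which is spent and redeposited elsewhere."""
    total = 0.0
    deposit = base_money
    for _ in range(rounds):
        total += deposit
        deposit *= (1.0 - reserve_ratio)  # loaned fraction returns as a new deposit
    return total

# The geometric series sums to base_money / reserve_ratio, so a 10%
# reserve requirement lets $100 of base money support about $1000 of deposits.
total = deposits_created(100.0, 0.10)
```

Lowering the effective reserve ratio, or the cost of reserves via the Fed rate, raises the ceiling on the money supply, which is the channel to inflation.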

So, when interest rates are low, the money supply expands and we get inflation or a bubble. When interest rates increase, the money supply can shrink, and then we can have a slowdown in the economy, a recession, or the bursting of a bubble (as we are experiencing now in real estate). If the money supply were completely static, then a growing economy would experience deflation. (This is happening in some sectors like electronics and food, where the cost of production is decreasing faster than inflation.) The problem with deflation is that people then tend to wait before they buy things, and that can retard economic growth. So the Fed tries to engineer a small amount of inflation to keep things going. When the economy is too heated, it raises the rate slightly to keep it in check, which is what the Fed has done for the past two years.

I think one of the reasons why inflation has been relatively benign these past few years even with low interest rates is that the extra cash has been used to fuel the internet bubble followed by the real estate bubble and also our savings rate is so low that banks don’t have enough reserve to further inflate the money supply. However, just to make sure I end on a gloomy note, Nouriel Roubini is predicting a recession in 2007 triggered by the bursting of the housing bubble. So interest rates may actually be coming down again.

Rotten Eggs

There is a terrifying article by Peter Ward in this month’s Scientific American. There may be strong evidence that several of the past great extinctions were due in part to global warming. There is a clear geochemical signature that the most recent one, 65 million years ago, which wiped out the dinosaurs, was caused by an asteroid strike in the Yucatan peninsula, but the Great Dying at the end of the Permian 250 million years ago, for example (see my previous post here), looks completely different.

As I wrote before, this extinction was marked by anoxia in the oceans. What I didn’t write was that biomarkers such as certain lipids have been found in the ancient strata that indicate the presence of lots of photosynthetic green sulfur bacteria. For energy, they oxidize hydrogen sulfide (H2S) and convert it into sulfur. This means that the oceans were enriched with H2S at that time. If the oxygen level is sufficiently high, then the H2S can be confined to the deep ocean by oxygen diffusing downwards. However, if the oxygen level drops enough, the H2S will bubble to the surface. In addition to being foul smelling and poisonous, H2S can also destroy the ozone layer, increasing UV radiation.

The circumstances that led to this outcome could have been triggered by global warming from massive volcanic activity that spewed tons of CO2 into the atmosphere. This heated the oceans, making it harder for them to absorb oxygen. The extinction would begin in the ocean and then spread to land. A less intense version of this scenario may have taken place as recently as 54 million years ago at the end of the Paleocene epoch. During that time the concentration of CO2 was about 1000 parts per million. We are currently at 385 and at current rates could reach 1000 by the end of the next century. So, if you start smelling rotten eggs on your stroll along the beach…

A new global fixed point

James Lovelock’s most recent book, The Ages of Gaia, argues that the earth is headed for a new fixed point at a much elevated temperature. He cites several mechanisms that are providing positive feedback to rising temperatures. One example is the emission of dimethyl sulfide (DMS) by ocean phytoplankton into the atmosphere. DMS is what makes the ocean smell like, well, the ocean. DMS also contributes to cloud cover, which increases the albedo of the earth. For small increases in temperature and UV radiation, phytoplankton upregulate DMS release and provide negative feedback. However, recent evidence (I must admit that I haven’t checked the primary source) suggests that for a very large increase in temperature, DMS release may actually decrease and lead to positive feedback.

Lovelock also believes that the increased global temperatures will turn the Amazon rain forest into a savannah, leading to even less carbon sequestration. Studies in the UK have found that as temperatures increase, CO2 is being released from the ground at a higher rate. Aquatic cyanobacteria (also known as blue-green algae), which are photosynthetic and may be the largest carbon sink we have, may also downregulate CO2 intake with increasing temperatures. The bottom line is that relying on Gaia to provide negative feedback to our fossil fuel use may not be a viable option. The earth may transition to a new fixed point where the temperature may be as much as 10 degrees warmer. This could turn much of the currently temperate zones into deserts. Lovelock believes it will end civilization as we know it. Even I think this is probably an overly bleak prediction, but it is definitely something to think about.
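The fixed-point language can be made concrete with a toy energy-balance model. This is my own illustration of bistability under positive feedback, not Lovelock's model: linear radiative relaxation toward a cool temperature, plus a sigmoidal warming feedback (standing in for DMS shutdown, soil CO2 release, and the like) that switches on at higher temperatures. All numbers are arbitrary.

```python
import math

def dTdt(T, forcing=0.0):
    """Toy energy-balance model: relaxation toward 15 C plus a
    sigmoidal warming feedback that switches on near 20 C."""
    relaxation = -(T - 15.0)
    feedback = 8.0 / (1.0 + math.exp(-(T - 20.0)))
    return relaxation + feedback + forcing

def fixed_points(lo=0.0, hi=40.0, n=4000, forcing=0.0):
    """Locate sign changes of dT/dt on a grid of n intervals."""
    h = (hi - lo) / n
    roots = []
    prev = dTdt(lo, forcing)
    for i in range(1, n + 1):
        cur = dTdt(lo + i * h, forcing)
        if prev == 0.0 or prev * cur < 0.0:
            roots.append(lo + (i - 0.5) * h)
        prev = cur
    return roots
```

With zero forcing the model has a cool stable state, a warm stable state, and an unstable threshold between them. Raising `forcing` (extra CO2) eventually annihilates the cool fixed point, leaving only the warm one, which is the kind of transition to a new fixed point that Lovelock worries about.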

Breast Milk

Last week in the New York Times Science section, there was an article reporting on a national breast-feeding awareness program that suggests that not breast-feeding your children is tantamount to negligence. Now, I’m all for breast feeding; I think it probably is the best food for an infant. However, I think there is also lots of misinformation about why breast feeding is good.

One of the reasons I often hear is that breast milk confers extra immunity to the infant. I get this from everyone, including pediatricians and scientists, and it also appeared in that Times article. The argument stems from the fact that breast milk contains immunoglobulins and lymphocytes. Well, let’s think about this more clearly. Immunoglobulins, or antibodies, are proteins, and digestion breaks proteins down into amino acids. That is why pharma always tries to develop small-molecule drugs: protein-based drugs must be injected; you can’t take them orally. Now, in certain mammal species, neonates possess pathways to transport proteins in breast milk directly into the bloodstream. The jury is still out on humans, but it looks like we’re not in that category. So, I’m sorry to say, your baby is probably not getting extra antibodies from breast milk.

So why should a baby be breast fed? One thing that has been found is that the microflora of breast-fed babies differ significantly from those of formula-fed babies. Your body has more bacterial cells than its own cells, so having the right mix of micro-organisms is very important. I think this is the main reason we should push breast milk. Establishing the correct microflora environment is probably crucial for digestion and fending off infections. Additionally, the contents of breast milk will vary depending on what the mother eats; formula always tastes the same. Breast-fed babies have been known to prefer what their mothers eat and to have more diversity in their food preferences in general. Cultural tastes may partially be propagated through breast feeding. Also, babies seem to control the amount they eat much better with breast feeding than with a bottle. That may partially be because parents encourage their babies to finish each bottle even if the baby is already full. The bottom line is that there are many benefits to breastfeeding, but obtaining antibodies is not one of them.

Hybrid fallacy and car seats

I think the Toyota Prius is a great car. It’s efficient and looks cool. However, hybrids are not the panacea for our energy problems they are made out to be. For one, the actual gains in fuel efficiency over a well-designed gasoline car are not as great as presumed. In fact, if you mostly do highway driving, a hybrid may actually do worse because of the extra weight. A diesel-powered car is likely to do better. A more insidious problem is that, for the most part, car companies are not producing hybrids to make them more fuel efficient but because hybrids can make for a more powerful engine. Ford and Lexus both have SUV hybrids that don’t have much higher fuel efficiency than their conventional counterparts but do have a lot more horsepower. Despite this shortcoming, these cars still get to go in the HOV lanes. Plug-in hybrids have more promise if they are used around town but won’t be of much help for long-haul travel.

I don’t fully blame the car companies, because the public likes and wants big powerful cars. I think one reason is the infant car seat. We can barely fit our Britax Roundabout seat into our Honda CR-V (yes, we own an SUV, but it is mostly a tall Civic). We probably couldn’t get two in, and three is definitely out. In the old days, people would just stuff their kids into the back seat. Now, if you have three kids under eight, you need a minivan or equivalent. I’m pretty sure there are ways to engineer a fuel-efficient car that can safely transport three kids, but it would require a collaboration between auto manufacturers and car seat companies. My guess is that gas would have to hit four or five dollars a gallon before such a thing happened. Ultimately, we have to change the way we live and work. The demise of the personal car could be the best thing to happen to cities since the invention of the subterranean sewer.

Corn-ethanol

Given the recent high price of gasoline, one of the proposed replacements is ethanol derived from corn. The problem with this idea is that it may take almost as much fossil fuel to make the stuff. Today in the New York Times, Michael Pollan, author of The Omnivore’s Dilemma: A Natural History of Four Meals, has an op-ed piece arguing against corn-based ethanol. Future Pundit also has a recent post with links to economic and thermodynamic analyses of ethanol production. Currently, the federal government offers a tax break of 54 cents for every gallon of ethanol produced and levies a tariff of 54 cents a gallon on imported ethanol. We also subsidize the farming of corn, which takes a lot of fertilizer, pesticides, and tractors, all of which use fossil fuels of some sort. Depending on how you do the estimate, it may even take more than a gallon of fossil fuel to produce one gallon of ethanol from corn. Ethanol could make sense if it were derived from a more efficient crop like switchgrass or sugar cane. But with the strength of the corn lobby, those other options may never get a chance. Maybe it’s time we stopped subsidizing the growth of all that corn. We produce way more than we can eat, and high-fructose corn syrup may be part of why we’re getting fat and diabetic.
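The disagreement among those analyses comes down to one number: how much fossil energy goes into a gallon of ethanol. A quick sketch, using round illustrative figures of my own (ethanol's heating value is roughly 76,000 BTU per gallon; the two fossil-input figures are hypothetical stand-ins for the optimistic and pessimistic ends of the published debate):

```python
ETHANOL_BTU_PER_GAL = 76_000  # approximate lower heating value of ethanol

def energy_return(fossil_btu_in):
    """BTUs of ethanol out per BTU of fossil energy in.
    fossil_btu_in covers fertilizer, pesticides, tractors, and distilling."""
    return ETHANOL_BTU_PER_GAL / fossil_btu_in

optimistic = energy_return(57_000)    # favorable input estimate: ratio above 1
pessimistic = energy_return(98_000)   # unfavorable estimate: ratio below 1, a net loss
```

The whole debate fits in that denominator, which is why different accounting choices flip the conclusion from modest gain to outright loss.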

ILC or OWL?

In today’s New York Times, Dennis Overbye writes that a National Academy of Sciences panel recommends that the United States spend up to half a billion dollars in the next five years to ensure that the International Linear Collider (ILC) is built on American soil. Their study says that America will lose its leadership in particle physics, which “would erode the base of science and technology that has fueled innovation, provided intellectual and cultural inspiration and bolstered national security over the last century.” I must confess that I became interested in physics primarily because of the exciting developments in particle physics in the early seventies. It led me to a degree in physics and ultimately, through a very circuitous route, into theoretical biology. Although I had a lot of catching up to do in biology, I think my physics training was excellent preparation for what I do day to day.

That being said, in our current financial climate where the science budget is being cut in real terms, I doubt that if I had half a billion dollars to spend, I would put it into particle physics. I think I would rather spend it on a technological push towards alternative energy sources, such as the International Thermonuclear Experimental Reactor (ITER), or on a really big telescope. The European Southern Observatory currently has a proposal to build a 100 m diameter Overwhelmingly Large Telescope (OWL). This telescope could detect images 1000 times fainter than the Hubble Space Telescope can.

I’m all for promoting projects that may inspire the next generation of physics students but I’m also for spending money on something that I know will guarantee interesting results. My knowledge of the current status of high energy physics is admittedly low but I think the ILC will still be orders of magnitude away from testing string theory for example. Wasn’t the scuttled Superconducting Super Collider dubbed the Desertron because it may not find anything at all? Maybe the ILC could shed light on the nature of dark matter and dark energy, which I believe is a pressing problem, but I think a better telescope has a higher chance of providing us with more insights in those areas. The pictures will also be a lot cooler!

Economics and Cosmology

On Monday of this week, Paul Krugman wrote in the New York Times that the income disparity we currently see is due not to the increasing leverage of education, as had been suggested by new Fed chair Ben Bernanke, but rather to the recent rise of a narrow oligarchy. He points to data showing that only those with incomes in the 99th percentile were really reaping the benefits of increased productivity. College-educated people in fact had lost real income in recent years. In an earlier post, I pointed out that the incomes of the top 400 richest Americans account for 1% of the entire U.S. GDP.

The prevailing supply side mantra is that “a rising tide lifts all boats.” The theory is that increasing wealth at the top will lead to greater investment and higher productivity, from which the entire nation will benefit. Unfortunately, this is thinking linearly and not exponentially. When I briefly flirted with Wall Street a decade ago, I was amused to learn that finance was similar to cosmology in that both fields involve stochastic fluctuations on an exponentially expanding manifold. The implication is that small differences will eventually lead to huge differences, and this could be an explanation of both galaxy clustering and the growing income disparity.

Even small inhomogeneities in the initial conditions and growth rates can lead to wide disparities in wealth. This is especially true because we must compare all rates against the inflation rate: if your growth rate is below inflation, then your wealth is essentially heading towards zero. Taxes can serve as a means to slow down the growth rate and nonlinearly saturate it for those with great wealth. Cutting taxes on income from capital gains and dividends, which mostly accrue to those with disposable income, will only further accelerate the disparity between rich and poor. The only solution is to try to keep individual income growth rates as homogeneous as possible and above inflation. Trying to equalize initial conditions (i.e. current wealth) may be harder to achieve politically.
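To make the exponential point concrete, here is a toy sketch (my own illustration, not a model from Krugman or anyone cited here) of how small differences between growth rate and inflation compound over a working lifetime. The function name and the specific rates are arbitrary choices for the example.

```python
# Toy illustration: two savers start with identical wealth but compound
# at slightly different nominal rates against the same inflation rate.
def real_wealth(initial, nominal_rate, inflation, years):
    """Wealth in today's dollars after compounding against inflation."""
    return initial * ((1 + nominal_rate) / (1 + inflation)) ** years

# Same $100k start, 3% inflation: one grows 2 points above inflation,
# the other 1 point below it.
ahead = real_wealth(100_000, 0.05, 0.03, 40)   # roughly doubles in real terms
behind = real_wealth(100_000, 0.02, 0.03, 40)  # erodes to about two-thirds
print(round(ahead), round(behind))
```

A one-percentage-point gap on either side of inflation, held for forty years, already produces a threefold disparity from identical starting conditions; larger gaps diverge far faster.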

The Greenspan strategy was simply to try to keep inflation in check. However, I think we need to manipulate our current tax system to alleviate the problem. Perhaps we could have a floating tax rate that is calculated on the fly to partially homogenize everyone’s growth rate. Another option we could explore is to tax wealth directly rather than income. The one thing we cannot do is to stay the current course.

Patent Law

For all those addicted to blackberry wireless email, U.S. District Judge James Spencer ruled out an immediate injunction on the service. This case shows how dysfunctional the whole patent system has become. A Virginia-based patent holding company, NTP Inc, claims that Canadian-based RIM has infringed on five patents in operating the blackberry system. The judge has ruled that he accepts this claim, although he’s not quite ready to pull the plug today. A collective sigh of relief could be heard across the country from millions of “crackberry” users, including many in U.S. federal departments like the CIA. RIM has launched a counteroffensive and challenged the validity of the patents. The U.S. Patent and Trademark Office has thrown out one NTP patent already, and RIM claims it has now issued rulings invalidating the other four as well.

I can see how a patent is essential to allow a small fledgling company with a great new idea to get a foothold on the market before another bigger company can step in. However, it is another matter to take out a patent on an obvious idea and then sit on it so you can later sue some other company that takes the time and investment to make it commercially viable. The whole concept of a patent holding company is repulsive to me. Now, every software and biotech company is looking over its shoulder to see where it may be blindsided by some overly general patent issued years earlier. Instead of encouraging innovation and development, the current laws discourage it.

The entire U.S. patent system needs to be overhauled. It is plainly ridiculous to allow patents on DNA sequences or trivial ideas like ‘one click’ purchasing on a website. It doesn’t take a genius to think of wireless email. However, it was RIM that actually got it to work and made it universally adopted. I think software is intrinsically different from, say, mechanical devices in that the source code can be kept secret. It would be like someone patenting a lawn mower without disclosing the mechanism. The concept of a lawn mower should not be patentable, only the implementation. But that is exactly what is happening for software. I think software patents could have a place, but they must be held to a higher standard. A patent should only be protected if a company clearly has a head start using it over some other infringing company. If someone just sits on a patent, they should get no protection.

Free Speech

The current uproar over the Danish cartoons has gotten me to think about what free speech is. It certainly doesn’t mean you can say anything you want. Clearly, laws against slander and libel do not violate the Constitution, and some other categories of speech also fall outside First Amendment protection. The 1942 Supreme Court decision Chaplinsky v. New Hampshire held that “fighting words”, which tend to incite an immediate breach of the peace, “are no essential part of any exposition of ideas, and are of such slight social value as a step to the truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.”

If free speech doesn’t mean you can say anything you want with impunity, then what is it exactly? My personal view is that free speech is not a free pass to express any thought but a safeguard that protects those who lack power when they criticize those in power. So a government scientist should be able to complain that science is being distorted for political ends without fear of losing his job, and a newspaper should be able to claim that a certain politician is corrupt without fear of being shut down.

Should European newspapers be allowed to print cartoons that are inflammatory towards Muslims? Having not seen the cartoons, my assumption is that the intention was to criticize certain elements of the Muslim world for using religion to incite violence. However, given that images of the Prophet Mohammed are deemed sacrilegious, I think this could have been done in an editorial essay. The Muslim community is clearly marginalized in European society, so this is not the same as, say, burning the US flag. Also, given that many European countries have laws against denying the Holocaust, banning images considered sacrilegious to Muslims would not be inconsistent. I am a proponent of free speech, but I do believe there can be limits.

Balls and Brains

Biology is all about trade-offs, and it turns out that, at least in bats, there is a conflict between the size of sexual organs and the brain. In the current issue of the Proceedings of the Royal Society B, a paper reports on the effects of sexual selection on neocortex and testes sizes in bats. The authors find that the testes of males are much larger in species where the females are promiscuous than in species where the females exhibit fidelity. The inverse relationship held for brains. The theory is that both organs are metabolically expensive (especially since bats fly). If females have many partners, the crucial competition is between sperm, and the male that produces the most wins. On the other hand, if females are selective about their partners for a given breeding cycle, then the quantity of sperm produced by a given male is not so important.

It’s not so clear why female faithfulness should promote a larger brain in males. One argument is that there may be a genetic constraint in that genes for both organs are co-expressed. In an earlier post, I wrote about the hypothesis that if females select for a trait in males, then any genes for that trait residing on the X chromosome would be very effectively selected for since males only carry one X. However, this genetic constraint may be a product of other selection pressures. It could simply be that selective females prefer more intelligent mates.

Stardust

This coming Sunday, if all goes well, the Stardust probe will land in the Utah desert carrying microscopic dust gathered from Comet Wild 2 and interstellar space. This was a seven-year, 2.88-billion-mile, 200-million-dollar mission. You can follow the exciting progress at the official NASA link. The comet and interstellar dust are captured in an aerogel array mounted on the spacecraft.

It is expected that a few thousand cometary dust grains and just 45 submicroscopic interstellar grains will be captured by the collector. The grains embed at high speed into the gel and create tracks like subatomic particles in a bubble chamber. Digital images will be taken and must be analyzed by hand. It is expected that 30,000 person-hours will be required to examine 1.5 million images. NASA is asking for volunteers to take part through a Seti@home-like project called Stardust@home. Each volunteer must first pass a test in which they must find a few tracks on a sample image. If two of the four volunteers assigned to a given image find a track, it will then be subjected to the scrutiny of 100 more volunteers. If it still passes muster, it will then be examined by a crack team of Berkeley undergraduates. The dust grains will then be extracted by specially designed microtweezers, which weren’t even developed until after Stardust was launched. I’m dying to find out what sorts of things they’ll find. Who knows, maybe there’ll be some organic molecules, and life on earth really was seeded from space.
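The vetting workflow described above amounts to a simple threshold pipeline. Here is a hypothetical sketch of its logic; the stage structure follows the post, but the function name, parameters, and the majority threshold for the 100-volunteer stage are my illustrative assumptions, not Stardust@home’s actual code or criteria.

```python
# Hypothetical sketch of the multi-stage track vetting described above.
# Stage thresholds: 2 of 4 initial volunteers, then (assumed) a majority
# of 100 follow-up volunteers, then the Berkeley expert team.
def vet_image(first_pass_hits, confirm_hits, expert_confirms):
    """Return True if an image's candidate track survives all stages.

    first_pass_hits: how many of the initial 4 volunteers saw a track
    confirm_hits:    how many of the 100 follow-up volunteers agreed
    expert_confirms: verdict of the final expert review
    """
    if first_pass_hits < 2:   # stage 1: two of four must flag it
        return False
    if confirm_hits < 50:     # stage 2: majority of 100 (assumed threshold)
        return False
    return expert_confirms    # stage 3: expert review decides

print(vet_image(3, 72, True))   # survives all three stages
print(vet_image(1, 99, True))   # fails the first screen, never advances
```

The design point of such a pipeline is cheap early filtering: most of the 1.5 million images are discarded by the four-volunteer stage, so the expensive expert review only ever sees a tiny, pre-screened fraction.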

Humans In Space

In 2004, President Bush presented a new agenda for NASA that included human missions to the moon and Mars. The plan is laid out in the report of the President’s Commission on Moon, Mars and Beyond. I grew up during the space age, and one of the most inspirational moments of my life was witnessing Neil Armstrong walking on the moon in 1969. It certainly was a factor in my decision to pursue a career in science. However, I now firmly believe that manned space flight is intrinsically flawed and should not be supported by the government.

The reasons are twofold: first, there is no scientific purpose that requires humans, and second, humans are very badly adapted to space. Other than bringing back moon rocks (which could now be done with robots) and fixing the Hubble telescope (which could soon be done by robots), there has been no contribution to space science from human exploration. The only science that has been done is the study of the effects of space on humans, and the conclusion is that we don’t belong there. Weightlessness causes severe muscle atrophy and bone loss. Even more problematic are the high levels of ionizing radiation present in space. The extra cost required for human over robotic missions is astronomical.

The only reason we should go to Mars is the same reason we should climb Mt. Everest. I think this is perfectly fine and honorable, but I don’t think we should support such a junket with federal dollars. Let some maverick billionaire fund the operation. By the time we actually have the technology to colonize other planets, or even go beyond the solar system, our knowledge of biology and artificial intelligence will also have greatly advanced. Instead of sending people, we could send human embryos or, better yet, just the genetic codes. Once we arrive at our destination, we can simply grow our colonizers from available organic molecules. Robots could raise and educate the first generation. This seems far more sensible than sending people in a lead-lined cabin or putting them in suspended animation (at least until we get that Star Trek warp drive, force field and artificial gravity generator working).