Moore’s law and science

It’s been almost twenty years since I finished graduate school. According to Moore’s law (which I’ll take to be a doubling every two years), computer power should have increased by a factor of a thousand over that time. I remember in the early-to-mid nineties when I had a 95 MHz Pentium processor. I now have a two-year-old Mac with eight processors running at 3 GHz. Hence, depending on how you count, Moore’s law seems to have held over the last two decades. My question, then, is what progress in science has resulted from this vast increase in computer power. Even though we can all carry in our briefcases the equivalent of a Cray supercomputer from twenty years ago, it is not at all obvious to me what developments have been achieved as a result. This increase in power can be made particularly concrete if we consider that chess programs running on laptops today (e.g. here) can beat the IBM computer Deep Blue that defeated Garry Kasparov in 1997.
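As a rough sanity check on that factor of a thousand (a back-of-the-envelope sketch, nothing more, with an illustrative function name of my own), the doubling arithmetic works out as follows:

```python
# Moore's law taken as a doubling of computer power every two years.
def moores_law_factor(years, doubling_period=2.0):
    """Speedup factor accumulated over the given number of years."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(20))  # 2**10 = 1024, i.e. roughly a factor of a thousand
```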

One thing that comes to mind is that the human genome project may not have been completed as quickly without fast computers to piece together all the overlapping short reads required by the shotgun sequencing method. However, I think being stuck with 1990 computing power would have delayed the project by only months. Perhaps weather modeling has improved greatly; I don’t hear people complaining as much about forecasts these days. Computing power will probably play a major role if we start to fully sequence large numbers of people. In computational neuroscience, it has allowed projects such as Blue Brain to proceed, although it is not clear what will be achieved by it. Otherwise, I can’t really think of a major breakthrough that occurred just because of an increase in computing power. We haven’t cured cancer. We still can’t predict earthquakes. We still don’t understand the brain.

In my own work, I’ve only fully exploited Moore’s law twice. The first was in doing large-scale simulations of the Kuramoto-Sivashinsky equation (see here and here), and the second is right now in a GWAS data analysis project where I need to manipulate very large matrices. It is true that we can now do Bayesian inference and model comparison on larger models. However, the curse of dimensionality works strongly against us here. If you wanted to sample each parameter at just ten values (which is extremely conservative), then you would add a factor of ten in computer time for each new parameter. With a current desktop computer, I feel confident that we can test a model of, say, fifteen parameters with poor prior information about where the parameters should be. Going to twenty-four parameters would be an increase of 10^9, or a billion, and going to a hundred parameters would require Moore’s law to hold for another 500 or so years. This is why I’m skeptical about realistic modeling: it can only work if we have a very good idea of what the parameter values should be, and getting all that data does not follow directly from Moore’s law. In many ways, the main impact of faster computers is to allow me to be lazier. Instead of working hard to optimize programs written in C or C++, I can now cobble together hastily written code in Matlab. Instead of thinking more clearly about what a model should do, I can just run multiple simulations and try it out. Hence, I think using computing power wisely is a great arbitrage opportunity.
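To make that scaling concrete, here is a minimal sketch (in Python, assuming the ten-values-per-parameter grid described above; the helper names are my own, purely illustrative) of how a brute-force parameter scan grows with model size:

```python
import math

def grid_evaluations(n_params, values_per_param=10):
    """Model evaluations needed for a brute-force grid over the parameters."""
    return values_per_param ** n_params

def years_of_moores_law(extra_factor, doubling_period=2.0):
    """Years of doubling needed to gain a given extra speedup factor."""
    return doubling_period * math.log2(extra_factor)

base = grid_evaluations(15)          # feasible on a desktop, per the argument above
print(grid_evaluations(24) // base)  # 10**9: a billion times more work
print(years_of_moores_law(grid_evaluations(100) / base))  # roughly 565 years of Moore's law
```

On this crude estimate, a hundred-parameter grid scan needs Moore’s law to run for another five centuries or so, which is the point: brute force does not scale, so good prior information has to do the work instead.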

One thought on “Moore’s law and science”

  1. I agree; it certainly allows more exploratory modeling and analysis, and the computational complexity of algorithms is still very important even when computing power increases exponentially over time.
