# The Matrix and consciousness

The philosopher David Chalmers has an interesting article on the philosophy of the Matrix, which you can download here.  In case you haven’t seen the movie, the premise is that in the future machines have taken over the earth and use humans as an energy source.  The humans are housed in pods and, to keep them entertained, their minds are connected to “the Matrix,” where they believe they live in a turn of the (21st) century world.  My very first blog post was on how the energy source premise was inconsistent thermodynamically.

Chalmers points out an interesting conundrum for people living in the Matrix.  Given that the simulation is completely realistic, the simulated bodies in the Matrix should function like real bodies.  If that is the case, then what do the simulated brains do?  Are they an exact replica of the real brains in the pods?  If the simulated brains control the simulated bodies, then what do the pod brains do?  Are they just spectators?  Chalmers posits that the outputs of the two brains could be averaged to control the Matrix bodies.  It seems to me that if the pod brains have some control of the simulated body, then it would be possible for the simulated humans to do an experiment showing that their brains in the simulation are not completely responsible for controlling their bodies or for their conscious experience.  They could then empirically deduce that they live in a dualistic world where mind is not exactly the same as brain.  This is unlike a self-contained simulation where the simulated brains control simulated bodies entirely.  In that situation, you could not tell if you lived in a simulation.

# Science Museums

When I was a child, I lived across the street from the Ontario Science Centre.  I loved the place and would go quite often.  When it first opened 40 years ago, the Science Centre was quite innovative in  its use of interactive exhibits and demonstrations as well as its architecture.  It drapes over  the side of a valley.  I still remember the excitement of riding down the escalators to the lowest levels where my favourite exhibits were.

I went back to the Science Centre this past weekend for the first time in several decades.  It has changed quite a bit but some of the old exhibits still exist in a room called the Science Arcade.  The architecture looks a little dated on the outside but holds up fairly well on the inside.  As I walked around, I wondered whether people actually learn anything at these museums.  There are lots of neat things to play with but do they actually get it?  An example is an exhibit of a Cartesian Diver, which consists of a small glass fish inside a cylinder of water.  The fish is partially filled with water.  The visitor pushes a button that pumps air into the cylinder and the fish sinks to the bottom.  However, there wasn’t a detailed explanation of how it works.  The write-up basically said that as air is pumped into the cylinder the pressure rises and squeezes the air inside the fish.  It didn’t say explicitly that the fish has a hole in it so that water can move in and out; as the increased pressure squeezes the air pocket in the fish, water moves in to replace it, the fish becomes less buoyant, and thus it sinks.  I saw a boy watch the fish sink and say, “How did that happen?”  Perhaps the exhibit will spur his curiosity to learn more about it.
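The physics of the diver can be sketched with a few lines of Python.  This is my own illustration, not anything from the exhibit: the numbers for the fish’s mass and volumes are hypothetical, and the model is just Boyle’s law plus Archimedes’ principle.  The fish sinks once the compressed air pocket is small enough that the weight of the glass exceeds the weight of the water it displaces.

```python
# Sketch of the Cartesian diver, assuming hypothetical dimensions.
# Boyle's law: at fixed temperature, P * V_air is constant, so raising the
# cylinder pressure shrinks the air pocket inside the fish.  The water that
# enters through the hole is neutrally buoyant, so the diver floats only if
# the water displaced by the glass plus the air pocket outweighs the glass.

RHO_W = 1000.0    # density of water, kg/m^3
P0 = 101325.0     # ambient pressure, Pa
G = 9.81          # gravitational acceleration, m/s^2

m_glass = 5.0e-3  # mass of the glass fish, kg (hypothetical)
V_glass = 2.0e-6  # volume of the glass itself, m^3 (hypothetical)
V_air0 = 3.2e-6   # air pocket volume at ambient pressure, m^3 (hypothetical)

def floats(p):
    """True if the diver floats at cylinder pressure p (in Pa)."""
    v_air = P0 * V_air0 / p                    # Boyle's law
    buoyancy = RHO_W * G * (V_glass + v_air)   # weight of displaced water
    weight = m_glass * G                       # internal water is neutral
    return buoyancy > weight

# step up the pressure until the fish just starts to sink
p = P0
while floats(p):
    p += 100.0
print(f"sinks at about {p / P0:.2f} atm")
```

With these made-up numbers the fish sinks after only a few percent increase in pressure, which matches the everyday version of the demo done by squeezing a plastic bottle.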

I believe the current idea of curators who design science museums and exhibits is that science museums should try to make science fun and cool.  Thus the exhibits need to be highly interactive and entertaining.  Maybe this is the right strategy and people do get a lot out of visiting science museums.  I really don’t know.  The National Academy of Sciences has a report, which I haven’t read, on this very issue.  I think having a science-literate public is more important now than ever.  Do science museums play an important role in educating the public?

# Economics of science funding

This month’s issue of Technology Review has a story about economists studying how best to fund science to maximize productivity.  One of the points in the article, which comes from this working paper, is that researchers who receive long-term funding that rewards risk-taking, like those from the Howard Hughes Medical Institute (HHMI), produce more high-impact papers than their counterparts who receive approximately equivalent funding from the NIH with more strings attached.  From the article:

Natural experiments also allow economists to study how different types of grants affect scientists. For instance, it turns out that scientists whose funding affords them unusual long-term freedom in the lab are more likely to generate breakthroughs, according to a November 2009 working paper by Azoulay, Graff Zivin, and Gustavo Manso, an assistant professor at Sloan.

To reach this conclusion, they compared the productivity of two groups of scientists from 1998 through 2006: investigators at the Howard Hughes Medical Institute (HHMI) in Maryland and researchers given NIH grants. The HHMI scientists were encouraged to take risks and received five years of financial support, with a two-year grace period after funding was terminated. The standard NIH grants, known as R01 grants, lasted three to five years, and recipients were monitored more closely; funding ceased immediately if the grant was not renewed. The researchers found that papers by the HHMI scientists were far more likely to be heavily cited and covered a broader range of subjects. Those scientists also mentored more young colleagues who went on to win prizes.

# The P!=NP buzz

In case you haven’t been following the buzz, Vinay Deolalikar of HP Labs posted a proof that $P\ne NP$ on his web page last Friday.  This is one of the Clay Millennium Math Prize Problems.  The math and computer science community immediately mobilized after the announcement to check the proof.  A summary of the events can be found here.  Many of the world’s foremost mathematicians and computer scientists spent several intense days discussing the proof.  At this point, they have yet to reach a conclusion, although the paper has already been revised twice in response to the comments.  I was quite impressed at how cordially the whole process went.  Although many of the checkers were skeptical, everyone, including Fields medalists, was always supportive and polite.  Their attitude was that even if the proof was wrong, they were thankful to Deolalikar for bringing the community together and getting everyone excited.  I find that the camaraderie seen in mathematics is often lacking in other fields.  It seems like the nastiness in a discipline is inversely proportional to the clarity of the progress.

# Lanier on the technological religion

Computer scientist and visionary Jaron Lanier has written an interesting opinion piece in the New York Times today.  His thesis is that the hype surrounding recent advances in computation and robotics is approaching religious zealotry, where artificial intelligence is made out to be much more advanced and scary than it really is.  His argument is that even though he is not sitting around waiting for the Singularity, he can still make progress in his work.  Whereas a belief in quantum mechanics (or at least band theory) is essential for work on electronics, a belief in strong artificial intelligence is not essential for work in computer science.  I saw Lanier speak at the Computational Neuroscience meeting a decade ago.  I found it ironic that although he was a leader in computer vision, his talk was mostly low tech and devoid of cool visuals.  Below is an excerpt from his piece.

WHEN we think of computers as inert, passive tools instead of people, we are rewarded with a clearer, less ideological view of what is going on — with the machines and with ourselves. So, why, aside from the theatrical appeal to consumers and reporters, must engineering results so often be presented in Frankensteinian light?

The answer is simply that computer scientists are human, and are as terrified by the human condition as anyone else. We, the technical elite, seek some way of thinking that gives us an answer to death, for instance. This helps explain the allure of a place like the Singularity University. The influential Silicon Valley institution preaches a story that goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent A.I., infinitely smarter than any of us individually and all of us combined; it will become alive in the blink of an eye, and take over the world before humans even realize what’s happening.

Some think the newly sentient Internet would then choose to kill us; others think it would be generous and digitize us the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain. Yes, this sounds like many different science fiction movies. Yes, it sounds nutty when stated so bluntly. But these are ideas with tremendous currency in Silicon Valley; these are guiding principles, not just amusements, for many of the most influential technologists.

It should go without saying that we can’t count on the appearance of a soul-detecting sensor that will verify that a person’s consciousness has been virtualized and immortalized. There is certainly no such sensor with us today to confirm metaphysical ideas about people, or even to recognize the contents of the human brain. All thoughts about consciousness, souls and the like are bound up equally in faith, which suggests something remarkable: What we are seeing is a new religion, expressed through an engineering culture.

# Bayesian model comparison

This is the follow-up post to my earlier one on the Markov Chain Monte Carlo (MCMC) method for fitting models to data.  I really should have covered Bayesian parameter estimation before this post but, as an inadvertent demonstration of the simplicity of the Bayesian approach, I’ll present these ideas in random order.  Although it won’t be immediately apparent in this post, MCMC is the only computational method you ever really need to learn to do all Bayesian inference.
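To make that claim concrete, here is a minimal Metropolis sampler, written as my own illustration rather than anything from the earlier post.  The example, with made-up synthetic data, samples the posterior of a Gaussian mean with known unit variance and a flat prior, so the posterior is proportional to the likelihood.

```python
# Minimal Metropolis MCMC sketch (illustrative, with synthetic data):
# sample the posterior of a Gaussian mean mu, known unit variance, flat prior.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, 100)    # synthetic data, true mean = 3

def log_post(mu):
    # flat prior, so the log posterior is the Gaussian log likelihood
    # (up to an additive constant, which Metropolis never needs)
    return -0.5 * np.sum((data - mu) ** 2)

samples = []
mu = 0.0                             # arbitrary starting point
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.5)     # symmetric random-walk proposal
    # accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    samples.append(mu)

burned = np.array(samples[5000:])    # discard burn-in
print(f"posterior mean estimate: {burned.mean():.2f}")
```

Because only the ratio of posteriors appears in the acceptance step, the normalization constant never needs to be computed, which is exactly why MCMC is so broadly useful for Bayesian inference.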

It often comes up in biology and other fields that you have several possible models that could explain some data and you want to know which model is best.  The first thing you could do is to see which model fits the data best.  However, if a model has more parameters then it will certainly fit the data better.  You could always have a model with as many parameters as data points and it would fit the data perfectly.  So it is necessary to balance how well you fit the data against the complexity of the model.  There have been several proposals to address this issue, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC).  However, as I will show here, these criteria are just approximations to Bayesian model comparison.