Archive for January, 2009

Negative feedback

January 30, 2009

Each day we hear more and more bad news about global warming. Just in the past week alone, we learned that old growth forests are thinning, perhaps leading to more CO2 in the atmosphere (van Mantgem et al., Widespread Increase of Tree Mortality Rates in the Western United States, Science 323, 521-524, 23 January 2009), and that the damage currently done by CO2 emissions is largely irreversible for at least 1000 years (Solomon et al., Irreversible climate change due to carbon dioxide emissions, PNAS).  Now, I certainly do believe that the effects of added greenhouse gases are real and that we are seeing evidence of global warming, but the high profile reports always seem to feature positive feedback (i.e. warming leads to more warming).  It makes me wonder about the negative feedback.

If we really are in a situation with only positive feedback then either 1) we are already past the point of no return and headed to some new fixed point, so we may as well forget about it and live it up while we can, or 2) there never was a stable fixed point to begin with, so we have been lucky to be around at all.  I doubt either possibility is true.  The fact that CO2 levels and temperatures were confined to a fairly narrow band for most of the geological record suggests to me that the climate was near a stable fixed point, or at least regulated to some degree.  Sure, there have been warmer periods in the past and periodic ice ages, but for the most part temperatures have been remarkably stable. After all, we could be like Venus, with an atmosphere of about 96% CO2 and a surface temperature above 450^\circ C, or like Mars, with a very thin atmosphere that is cold with large fluctuations in temperature.  We do benefit from a magnetic field that shields us from the solar wind and keeps our atmosphere from being blown away (losing our magnetic field is what really keeps me up at night), but I think our biosphere is in some sort of homeostasis.

However, if we are (were) at a stable fixed point, then the first response to warming should be negative feedback opposing an increase in temperature.  Let’s consider a naive toy model of the temperature deviation from the mean that behaves like \dot{T}\simeq -a T + bT^2+\cdots.  For 0<T<a/b, the temperature is stabilized by negative feedback proportional to the deviation from the fixed point.  Beyond this temperature, we enter an unstable regime where there is only positive nonlinear feedback.  There can also be neutral or marginal stability, where a and b are both zero, in which case we would see neither negative nor positive feedback.
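A minimal numerical sketch of this toy model makes the two regimes explicit (the values a = 1 and b = 0.5 are arbitrary illustrations, putting the unstable threshold at T = a/b = 2):

```python
def simulate(T0, a=1.0, b=0.5, dt=0.01, steps=2000):
    """Euler-integrate dT/dt = -a*T + b*T**2 from initial deviation T0."""
    T = T0
    for _ in range(steps):
        T += dt * (-a * T + b * T ** 2)
        if abs(T) > 1e6:                 # runaway: positive feedback has won
            return float('inf')
    return T

print(simulate(1.9))  # below threshold: negative feedback pulls T back toward 0
print(simulate(2.1))  # above threshold: inf (finite-time runaway)
```

Even this crude caricature shows the key point: the same equation looks self-correcting or catastrophic depending on how far the deviation has already gone.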

Now, obviously this model is wrong; the climate is much more complex and negative and positive feedback can coexist.   However, if we are in a regime of pure positive feedback then there probably is nothing we can do about climate change, save drastic geo-engineering methods like spreading sulfur in the atmosphere to block sunlight or using giant pumps to suck CO2 out of the air.  If this is true then we should know about it.  However, it could also be that instances of negative feedback are simply not reported or investigated.  I don’t think there is a conspiracy, but it probably does a scientist no good to find good news right now, when careers are made by finding the worst possible news: glaciers in Greenland and the Antarctic are melting, rivers are drying up, the oceans are rising.  This is what gets published in the high profile journals and written about in the New York Times.

If this is even partially true, it means we really don’t know what will happen. Unless we have a full accounting of all the forcings, we can’t know whether the climate is stable, unstable or marginal.   I do recall a time when the reporting in the popular press was more balanced and people talked about possibilities such as increasing albedo due to clouds as a negative feedback mechanism. Nowadays it’s just doom and gloom.  This is probably overcompensation for the fact that climate change was not taken seriously over the past decade, along with the concern that any good news would diminish the urgency to do something.  However, given that almost everyone believes in global warming now, it might be time to devote some attention to possible mitigating responses to increasing CO2 and temperatures.

Humans and machines

January 23, 2009

One of the consequences of computer science theory  is that humans will likely never be completely replaceable by machines.   They will certainly be (and already have been) exceeded by machines in many facets but there will always remain instances where machines will have no real advantage or even a disadvantage.  My argument is not based on any sentiment that humans are not machines.  In fact it is just the opposite.  Given that we are nothing more than computers, there are some problems that are too hard to be solved by any machine so the best they can do is to come up with heuristic approaches and this may not be any different from what humans do.  There will also be things that humans do that are not worth reproducing with machines.

Computational complexity theory is about understanding the difficulty of solving problems algorithmically.  In particular, it avoids questions of implementation and focuses on asymptotic relationships between the size of the problem and the resources (i.e. time and memory) needed to solve it.  The only problems computers can reasonably solve are those where the time and memory required grow only weakly with the size of the problem, where weakly means bounded by a polynomial in the size.   Problems are sorted by difficulty into complexity classes.  P is the set of problems solvable in polynomial time.  For example, matrix multiplication is in P.  Naively it takes n^3 steps for an n\times n matrix, but in fact it can be done even faster using Strassen’s algorithm.  Machines are basically limited to solving problems in class P (if you want to be a stickler, they are limited to a related class called BPP, which contains P).
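To make the polynomial scaling concrete, here is the naive cubic-time multiply written out in plain Python; Strassen's recursive trick lowers the exponent from 3 to about 2.81:

```python
def matmul(A, B):
    """Naive matrix multiply: n^3 scalar multiplications for n x n inputs."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

Cubic, quadratic, or 2.81 - the point is that the cost is polynomial, so the problem stays feasible as n grows.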

Now, there is also a class of problems whose solutions can be verified in polynomial time, called NP.  An example is the traveling salesman decision problem, which asks whether a tour that takes a salesman through a set of cities exactly once is shorter than some given distance.  While a proposed solution is easily verified, there is no known algorithm in P for solving this problem.  Another example of an NP problem is theorem proving.  Anyone can verify a proof in polynomial time but there is no known surefire way to generate proofs.  The most famous problem in computer science is whether or not P is equal to NP.  That is, if a problem can be verified in polynomial time, can it also be solved in general in polynomial time? Most computer scientists believe that P is not equal to NP, and there are strong reasons why.  There is a famous result by Razborov and Rudich (A. A. Razborov and S. Rudich, “Natural proofs”, Journal of Computer and System Sciences 55, 24-35, 1997) that essentially says that one reason it is so hard to prove that P is not equal to NP is that P is not equal to NP.
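The verify/solve asymmetry is easy to see in code. In this sketch (the four-city distance matrix is a made-up toy), checking a proposed tour is a linear-time sum, while the obvious exact solver enumerates all (n-1)! tours:

```python
from itertools import permutations

def tour_length(dist, tour):
    """Verification step: summing a proposed tour's edges is O(n)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def shortest_tour(dist):
    """Solution step: brute force over all (n-1)! tours -- exponential time."""
    n = len(dist)
    return min((tour_length(dist, (0,) + p), (0,) + p)
               for p in permutations(range(1, n)))

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
best_len, best = shortest_tour(dist)
print(best_len)  # 18 for this toy instance
```

Four cities means only six tours, but at forty cities the enumeration already exceeds 10^46 tours, while verifying any single proposed tour still takes forty additions.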

So if indeed P is not equal to NP, then there is a whole class of everyday problems, like scheduling airlines or proving theorems, that cannot be solved efficiently by computers.  Solving such problems requires ad hoc approximate methods, analogies, or plain luck, which are things that humans are good at.  Even if we could design a super duper fast machine that thinks exactly like humans, there may still be problems that some person sitting in a cafe will solve that the machine will not.   Being faster and bigger helps but it is not always enough.  This is one reason why computers will not completely replace humans.
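The classic nearest-neighbor heuristic for the traveling salesman problem illustrates the kind of ad hoc method I mean: it runs in polynomial time but carries no optimality guarantee (the four-city distance matrix is a made-up toy):

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy heuristic: always hop to the closest unvisited city.
    Runs in O(n^2) time, with no guarantee of finding the shortest tour."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbor_tour(dist))  # [0, 1, 3, 2]
```

On this tiny instance the greedy tour happens to be optimal, but it is easy to build instances where the heuristic does arbitrarily badly; trading guarantees for speed is exactly the bargain humans strike all the time.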

The second reason is that there are some things that humans engage in that are unique to humans, like culture, art and humour.  That is not to say that a machine couldn’t also do those things, but it would have to learn to be human first and there is no algorithm for that.  The only way machines could learn to be human would be to live as humans on human time and length scales.  They would have to start as little things and grow bigger, have family and friends, go to school, eat food, smell flowers, watch bad movies, have their hearts broken, drive a car, and so forth.  These experiences form our priors, from which spring our capabilities and creativity.  Perhaps there are shortcuts for machines to learn to be human, but I doubt that you could capture everything without going through it.  There are some things that are hard to simulate, like what it feels like to take a hot shower, the smell of an old house, or the pain of a shoulder dislocation.  Capturing these aspects of life is intrinsic to something like comedy.  My guess is that when we do start to have intelligent machines they will develop their own culture, art and humour, which we won’t comprehend.  I can see humans eventually being supplanted by machines, but replaced, no.

Kinetic theory of coupled oscillators

January 17, 2009

Last week, I gave a physics colloquium at the Catholic University of America about recent work on using kinetic theory and field theory approaches to analyze finite-size corrections to networks of coupled oscillators.  My slides are here although they are converted from Keynote so the movies don’t work.   Coupled oscillators arise in  contexts as diverse as the brain, synchronized flashing of fireflies, coupled Josephson junctions, or unstable modes of the Millennium bridge in London.  Steve Strogatz’s book Sync gives a popular account of the field.  My talk considers the Kuramoto model

\dot{\theta}_i = \omega_i+\frac{K}{N}\sum_j\sin(\theta_j-\theta_i)   (1)

where the frequencies \omega_i are drawn from a fixed distribution g(\omega). The model describes the dynamics of the phases \theta_i of an all-to-all connected network of oscillators.  It can be considered the weak-coupling limit of a set of nonlinear oscillators with different natural frequencies and a synchronizing phase response curve.
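As an illustrative sketch (this is direct simulation, not the kinetic theory calculation in the talk), one can integrate (1) and track the order parameter r = |\frac{1}{N}\sum_j e^{i\theta_j}|. With a Lorentzian g(\omega) of unit half-width, the classic result is an onset of synchrony at K_c = 2; the parameter values below are otherwise arbitrary:

```python
import numpy as np

def kuramoto_r(K, N=500, dt=0.01, steps=5000, seed=0):
    """Euler-integrate the Kuramoto model (1) and return the final order
    parameter r = |mean(exp(i*theta))|, a standard measure of synchrony."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_cauchy(N)       # g(omega): Lorentzian, half-width 1
    theta = rng.uniform(0.0, 2 * np.pi, N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()    # complex order parameter
        # mean-field identity: K|z| sin(arg z - theta_i) = (K/N) sum_j sin(theta_j - theta_i)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.exp(1j * theta).mean()))

print(kuramoto_r(0.5))  # below K_c = 2: incoherent, r is small, O(1/sqrt(N))
print(kuramoto_r(4.0))  # above K_c = 2: partially synchronized, r of order one
```

The residual O(1/\sqrt{N}) value of r in the incoherent state is precisely the kind of finite-size effect the kinetic theory approach is built to handle.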


Cortical dynamics of visual competition

January 9, 2009

In December I gave a talk at the new and beautiful Howard Hughes  Janelia Farm Research Campus in Virginia.  I talked about my work on how we resolve ambiguous or multiple stimuli.  My slides are here although the movies don’t work.   The talk is based mostly on work in two papers (C.R. Laing and C.C. Chow. `A spiking neuron model for binocular rivalry’, J. Comp. Neurosci. 12, 39-53 (2002) and S. Moldakarimov, J.E. Rollenhagen, C.R. Olson, and C.C. Chow, ‘ Competitive dynamics in cortical responses to visual stimuli’, Journal of Neurophysiology 94, 3388-3396 (2005), both of which can be downloaded from here) with some new stuff that Hedi Soula and I have been working on (and mostly off) for the past four years.  I’m hoping that we’ll finally finish the paper this year.

When the eyes are presented with multiple or ambiguous stimuli, several things can happen.  You can perceive multiple images, which is what you usually do when you look out into the natural world.  You can resolve an ambiguity: for example, you could be presented with a dark and shadowy image and your brain decides what is figure and what is ground.   However, if you are presented with something truly ambiguous, like the Necker cube, then your perception will be multistable.  You’ll see one thing, then another, and back again.  The most striking form of multistable perception is binocular rivalry, which occurs when each eye is presented with a completely unrelated image, like horizontal stripes to the left eye and vertical stripes to the right eye.  For a range of contrasts, your perception will alternate between vertical and horizontal stripes.  The alternations are stochastic, with a gamma-like distribution of dominance times whose mean is a second or so, varying from person to person.
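A generic reduced sketch of rivalry dynamics (this is not the spiking model from the papers, and the parameters are made up): two rate units coupled by mutual inhibition, each with a slow adaptation variable that gradually undermines the dominant unit until the suppressed one escapes:

```python
def rivalry(T=60.0, dt=0.001, I=1.2, beta=2.0, g=3.0, tau_u=0.1, tau_a=1.0):
    """Two rate units u1, u2 with mutual inhibition (beta) and slow
    adaptation (a1, a2); count dominance switches over T seconds."""
    f = lambda x: x if x > 0 else 0.0    # threshold-linear gain
    u1, u2, a1, a2 = 0.8, 0.0, 0.0, 0.0
    dominant, switches = 1, 0
    for _ in range(int(T / dt)):
        u1 += dt / tau_u * (-u1 + f(I - beta * u2 - g * a1))
        u2 += dt / tau_u * (-u2 + f(I - beta * u1 - g * a2))
        a1 += dt / tau_a * (-a1 + u1)    # adaptation slowly tracks activity
        a2 += dt / tau_a * (-a2 + u2)
        now = 1 if u1 > u2 else 2
        if now != dominant:
            switches += 1
            dominant = now
    return switches

print(rivalry())  # many alternations over 60 seconds
```

In this deterministic caricature the alternations are periodic; adding noise to the inputs is what would give the stochastic, gamma-like dominance times.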


