Selection of the week

November 21, 2014

Here is violinist Itzhak Perlman playing Belgian composer Joseph-Hector Fiocco’s Allegro from the late Baroque period.  If you like speed, this is as fast as any rock guitar solo.

Race against the machine

November 11, 2014

One of my favourite museums is the National Palace Museum (Gu Gong) in Taipei, Taiwan. It houses part of the Chinese imperial collection, which Chiang Kai-shek took to Taiwan in 1948 during the Chinese civil war. Beijing has its own version but Chiang took the good stuff. He wasn’t much of a leader or military mind but he did know good art. When I view the incredible objects in that museum and others, I am somewhat saddened that the skill and know-how required to make such beautiful things either no longer exist or are rapidly vanishing. This loss of skill is apparent just walking around American cities, much less those of Europe and Asia. The stonemasons who carved the wonderful details on the Wrigley Building in Chicago are all gone, which brings me to this moving story about passing the exceedingly stringent test to become a London cabbie (story here).

In order to be an official London black cab driver, you must know how to get between any two points in London in as efficient a manner as possible. Aspiring cabbies often take years to attain the mastery required to pass their test. Neural imaging has found that their hippocampus, where memories are thought to be formed, is larger than normal and it even gets larger as they study. The man profiled in the story quit his job and studied full-time for three years to pass! They’ll ride around London on a scooter memorizing every possible landmark that a person may ask to be dropped off at. Currently, cabbies can outperform GPS and Google Maps (I’ve been led astray many a time by Google Maps) but it’s only a matter of time. I hope that the cabbie tradition lives on after that day just as I hope that stone masons make a comeback.

Crawford Prize

November 8, 2014

The SIAM activity group on dynamical systems is seeking nominations for the J.D. Crawford Prize. J.D. was a marvelous applied mathematician/theoretical physicist who tragically died in his forties the day before I started my job at the University of Pittsburgh. The deadline is November 15. Thus far, we (I’m on the committee) haven’t received many nominations so the odds are good. So please give us more work and send in your nominations. The information for where to send it is here.

Selection of the week

November 7, 2014

Kurt Masur conducting the Leipzig Gewandhaus Orchestra in a rendition of the first movement of Nikolai Rimsky-Korsakov’s Scheherazade (The Sea and Sinbad’s Ship).

How does the cortex compute?

November 5, 2014

Gary Marcus, Adam Marblestone, and Thomas Dean have an opinion piece in Science this week challenging the notion of the “canonical cortical circuit”. They have a longer and open version here. Their claim is that the cortex is probably doing a variety of different computations, which they list in their longer paper. The piece has prompted responses by a number of people including Terry Sejnowski and Stephen Grossberg on the connectionist listserv (Check the November archive here).

What’s wrong with neuroscience

November 2, 2014

Here is a cute parable in Frontiers in Neuroscience from cognitive scientist Joshua Brown at Indiana University. It mirrors a lot of what I’ve been saying for the past few years:

The tale of the neuroscientists and the computer:  Why mechanistic theory matters

http://journal.frontiersin.org/Journal/10.3389/fnins.2014.00349/full

A little over a decade ago, a biologist asked the question “Can a biologist fix a radio?” (Lazebnik, 2002). That question framed an amusing yet profound discussion of which methods are most appropriate to understand the inner workings of a system, such as a radio. For the engineer, the answer is straightforward: you trace out the transistors, resistors, capacitors etc., and then draw an electrical circuit diagram. At that point you have understood how the radio works and have sufficient information to reproduce its function. For the biologist, as Lazebnik suggests, the answer is more complicated. You first get a hundred radios, snip out one transistor in each, and observe what happens. Perhaps the radio will make a peculiar buzzing noise that is statistically significant across the population of radios, which indicates that the transistor is necessary to make the sound normal. Or perhaps we should snip out a resistor, and then homogenize it to find out the relative composition of silicon, carbon, etc. We might find that certain compositions correlate with louder volumes, for example, or that if we modify the composition, the radio volume decreases. In the end, we might draw a kind of neat box-and-arrow diagram, in which the antenna feeds to the circuit board, and the circuit board feeds to the speaker, and the microphone feeds to the recording circuit, and so on, based on these empirical studies. The only problem is that this does not actually show how the radio works, at least not in any way that would allow us to reproduce the function of the radio given the diagram. As Lazebnik argues, even though we could multiply experiments to add pieces of the diagram, we still won’t really understand how the radio works. To paraphrase Feynman, if we cannot recreate it, then perhaps we have not understood it (Eliasmith and Trujillo, 2014; Hawking, 2001).

Lazebnik’s argument should not be construed to disparage biological research in general. There are abundant examples of how molecular biology has led to breakthroughs, including many if not all of the pharmaceuticals currently on the market. Likewise, research in psychology has provided countless insights that have led to useful interventions, for instance in cognitive behavioral therapy (Rothbaum et al., 2000). These are valuable ends in and of themselves. Still, are we missing greater breakthroughs by not asking the right questions that would illuminate the larger picture? Within the fields of systems, cognitive, and behavioral neuroscience in particular, I fear we are in danger of losing the meaning of the Question “how does it work”? As the saying goes, if you have a hammer, everything starts to look like a nail. Having been trained in engineering as well as neuroscience and psychology, I find all of the methods of these disciplines useful. Still, many researchers are especially well-trained in psychology, and so the research questions focus predominantly on understanding which brain regions carry out which psychological or cognitive functions, following the established paradigms of psychological research. This has resulted in the question being often reframed as “what brain regions are active during what psychological processes”, or the more sophisticated “what networks are active”, instead of “what mechanisms are necessary to reproduce the essential cognitive functions and activity patterns in the system.” To illustrate the significance of this difference, consider a computer. How does it work?

The Tale

Once upon a time, a group of neuroscientists happened upon a computer (Carandini, 2012). Not knowing how it worked, they each decided to find out how it sensed a variety of inputs and generated the sophisticated output seen on its display. The EEG researcher quickly went to work, putting an EEG cap on the motherboard and measuring voltages at various points all over it, including on the outer case for a reference point. She found that when the hard disk was accessed, the disk controller showed higher voltages on average, and especially more power in the higher frequency bands. When there was a lot of computation, a lot of activity was seen around the CPU. Furthermore, the CPU showed increased activity in a way that is time-locked to computational demands. “See here,” the researcher declared, “we now have a fairly temporally precise picture of which regions are active, and with what frequency spectra.” But has she really understood how the computer works?

Next, the enterprising physicist and cognitive neuroscientist came along. “We don’t have enough spatial resolution to see inside the computer,” they said. So they developed a new imaging technique by which activity can be measured, called the Metabolic Radiation Imaging (MRI) camera, which now measures the heat (infrared) given off by each part of the computer in the course of its operations. At first, they found simply that lots of math operations lead to heat given off by certain parts of the CPU, and that memory storage involved the RAM, and that file operations engaged the hard disk. A flurry of papers followed, showing that the CPU and other areas are activated by a variety of applications such as word-processing, speech recognition, game play, display updating, storing new memories, retrieving from memory, etc.

Eventually, the MRI researchers gained a crucial insight, namely that none of these components can be understood properly in isolation; they must understand the network. Now the field shifts, and they begin to look at interactions among regions. Before long, a series of high profile papers emerge showing that file access does not just involve the disks. It involves a network of regions including the CPU, the RAM, the disk controller, and the disk. They know this because when they experimentally increase the file access, all of these regions show correlated increases in activity. Next, they find that the CPU is a kind of hub region, because its activity at various times correlates with activity in other regions, such as the display adapter, the disk controller, the RAM, and the USB ports, depending on what task they require the computer to perform.

Next, one of the MRI researchers has the further insight to study the computer while it is idle. He finds that there is a network involving the CPU, the memory, and the hard disk, as (unbeknownst to them) the idle computer occasionally swaps virtual memory on and off of the disk and monitors its internal temperature. This resting network is slightly different across different computers in a way that correlates with their processor speed, memory capacity, etc., and thus it is possible to predict various capacities and properties of a given computer by measuring its activity pattern when idle. Another flurry of publications results. In this way, the neuroscientists continue to refine their understanding of the network interactions among parts of the computer. They can in fact use these developments to diagnose computer problems. After studying 25 normal computers and comparing them against 25 computers with broken disk controllers, they find that the connectivity between the CPU and the disk controller is reduced in those with broken disk controllers. This allows them to use MRI to diagnose other computers with broken disk controllers. They conclude that the disk controller plays a key role in mediating disk access, and this is confirmed with a statistical mediation analysis. Someone even develops the technique of Directional Trunk Imaging (DTI) to characterize the structure of the ribbon cables (fiber tract) from the disk controller to the hard disk, and the results match the functional correlations between the hard disk and disk controller. But for all this, have they really understood how the computer works?

The neurophysiologist spoke up. “Listen here”, he said. “You have found the larger patterns, but you don’t know what the individual circuits are doing.” He then probes individual circuit points within the computer, measuring the time course of the voltage. After meticulously advancing a very fine electrode in 10 micron increments through the hard material (dura mater) covering the CPU, he finds a voltage. The particular region shows brief “bursts” of positive voltage when the CPU is carrying out math operations. As this is the math co-processor unit (unbeknownst to the neurophysiologist), the particular circuit path is only active when a certain bit of a floating point representation is active. With careful observation, the neurophysiologist identifies this “cell” as responding stochastically when certain numbers are presented for computation. The cell therefore has a relatively broad but weak receptive field for certain numbers. Similar investigations of nearby regions of the CPU yield similar results, while antidromic stimulation reveals inputs from related number-representing regions. In the end, the neurophysiologist concludes that the cells in this particular CPU region have receptive fields that respond to different kinds of numbers, so this must be a number representation area.

Finally the neuropsychologist comes along. She argues (quite reasonably) that despite all of these findings of network interactions and voltage signals, we cannot infer that a given region is necessary without lesion studies. The neuropsychologist then gathers a hundred computers that have had hammer blows to various parts of the motherboard, extension cards, and disks. After testing their abilities extensively, she carefully selects just the few that have a specific problem with the video output. She finds that among computers that don’t display video properly, there is an overlapping area of damage to the video card. This means of course that the video card is necessary for proper video monitor functioning. Other similar discoveries follow regarding the hard disks and the USB ports, and now we have a map of which regions are necessary for various functions. But for all of this, have the neuroscientists really understood how the computer works?

The Moral

As the above tale illustrates, despite all of our current sophisticated methods, we in neuroscience are still in a kind of early stage of scientific endeavor; we continue to discover many effects but lack a proportionally strong standard model for understanding how they all derive from mechanistic principles. There are nonetheless many individual mathematical and computational neural models. The Hodgkin-Huxley equations (Hodgkin and Huxley, 1952), Integrate-and-fire model (Izhikevich, 2003), Genesis (Bower and Beeman, 1994), SPAUN (Eliasmith et al., 2012), and Blue Brain project (Markram, 2006) are only a few examples of the models, modeling toolkits, and frameworks available, besides many others more focused on particular phenomena. Still, there are many different kinds of neuroscience models, and even many different frameworks for modeling. This means that there is no one theoretical lingua franca against which to evaluate empirical results, or to generate new predictions. Instead, there is a patchwork of models that treat some phenomena, and large gaps where there are no models relevant to existing phenomena. The moral of the story is not that the brain is a computer. The moral of the story is twofold: first, that we sorely need a foundational mechanistic, computational framework to understand how the elements of the brain work together to form functional units and ultimately generate the complex cognitive behaviors we study. Second, it is not enough for models to exist—their premises and implications must be understood by those on the front lines of empirical research.

The Path Forward

A more unified model shared by the community is not out of reach for neuroscience. Such exists in physics (e.g. the standard model), engineering (e.g. circuit theory), and chemistry. To move forward, we need to consider placing a similar level of value on theoretical neuroscience as for example the field of physics places on theoretical physics. We need to train neuroscientists and psychologists early in their careers in not just statistics, but also in mathematical and computational modeling, as well as dynamical systems theory and even engineering. Computational theories exist (Marr, 1982), and empirical neuroscience is advancing, but we need to develop the relationships between them. This is not to say that all neuroscientists should spend their time building computational models. Rather, every neuroscientist should at least possess literacy in modeling as no less important than, for example, anatomy. Our graduate programs generally need improvement on this front. For faculty, if one is in a soft money position or on the tenure clock and cannot afford the time to learn or develop theories, then why not collaborate with someone who can? If we really care about the question of how the brain works, we must not delude ourselves into thinking that simply collecting more empirical results will automatically tell us how the brain works any more than measuring the heat coming from computer parts will tell us how the computer works. Instead, our experiments should address the questions of what mechanisms might account for an effect, and how to test and falsify specific mechanistic hypotheses (Platt, 1964).

Selection of the week

October 31, 2014

The legendary twentieth-century violinist David Oistrakh playing Claude Debussy’s Clair de Lune, recorded over 50 years ago. Unfortunately, I can’t listen to this piece without being reminded of the 2001 film Ocean’s Eleven. It wasn’t a bad film per se, but I certainly don’t want it to be associated with one of the iconic pieces of the classical repertoire.

How we got to now

October 30, 2014

I’ve been watching the highly entertaining PBS series “How we got to now” based on the book and hosted by Steven Johnson.  You can catch the episodes here for a limited time.  Hurry though because the first two episodes expire tomorrow.

Robert Solow

October 29, 2014

Don’t miss Nobel Laureate Robert Solow on EconTalk. Even at age 90, he’s still as sharp as ever.

http://www.econtalk.org/archives/2014/10/robert_solow_on.html

The Ebola response

October 25, 2014

The real failure of the Ebola response is not that a physician went bowling after returning from West Africa but that there are not more doctors over there containing the epidemic where it is needed. Infected patients do not shed virus particles until they become symptomatic, and the virus is transmitted only through bodily fluids. The New York physician monitored his temperature daily and reported immediately to a designated Ebola hospital the moment he detected a high fever. We should not be scapegoating physicians who are trying to make a real difference in containing this outbreak and thereby protecting the rest of the world. The current outbreak was identified in the spring of 2014 but there was no international response until late summer. We know how to contain Ebola: identify patients and isolate them. That is what we should be doing instead of making emotional and unhelpful policy decisions.

Selection of the week

October 24, 2014

How about some tango? Here is Argentine composer Astor Piazzolla performing his composition Adios Nonino on the bandoneon.

Selection of the week

October 17, 2014

The utterly unique Canadian Brass performing Rimsky-Korsakov’s “Flight of the Bumblebee”.

It takes a team

October 15, 2014

Here is a letter (reposted with permission) from Michael Gottesman, Deputy Director for Intramural Research of the NIH, telling the story of how the NIH intramural research program was instrumental in helping Eric Betzig win this year’s Nobel Prize in Chemistry. I think it once again shows how great breakthroughs rarely occur in isolation.

Dear colleagues,

The NIH intramural program has placed its mark on another Nobel Prize. You likely heard last week that Eric Betzig of HHMI’s Janelia Farm Research Campus will share the 2014 Nobel Prize in Chemistry “for the development of super-resolved fluorescence microscopy.”  Eric’s key experiment came to life right here at the NIH, in the lab of Jennifer Lippincott-Schwartz.

In fact, Eric’s story is quite remarkable and highlights the key strengths of our intramural program: freedom to pursue high-risk research, opportunities to collaborate, and availability of funds to kick-start such a project.

Eric was “homeless” from a scientist’s viewpoint. He was unemployed and working out of a cottage in rural Michigan with no way of turning his theory into reality.  He had a brilliant idea to isolate individual fluorescent molecules by a unique optical feature to overcome the diffraction limit of light microscopes, which is about 0.2 microns. He thought that if green fluorescent proteins (GFPs) could be switched on and off a few molecules at a time, it might be possible using Gaussian fitting to synthesize a series of images based on point localization that, when stacked, provide extraordinary resolution.

Eric chanced to meet Jennifer, who heads the NICHD’s Section on Organelle Biology. She and George Patterson, then a postdoc in Jennifer’s lab and now a PI in NIBIB, had developed a photoactivatable version of GFP with these capabilities, which they were already applying to the study of organelles. Jennifer latched on to Eric’s idea immediately; she was among the first to understand its significance and saw that her laboratory had just the tool that Eric needed.

So, in mid-2005, Jennifer offered to host Eric and his friend and colleague, Harald Hess, to collaborate on building a super-resolution microscope based on the use of photoactivatable GFP. The two had constructed key elements of this microscope in Harald’s living room out of their personal funds.

Jennifer located a small space in her lab in Building 32. She and Juan Bonifacino, also in NICHD, then secured some centralized IATAP funds for microscope parts to supplement the resources that Eric and Harald brought to the lab.  Owen Rennert, then the NICHD scientific director, provided matching funds. By October 2005, Eric and Harald became affiliated with HHMI, which also contributed funds to the project.

Eric and Harald quickly got to work with their new NICHD colleagues in their adopted NIH home.  The end result was a fully operational microscope married to GFP technology capable of producing super-resolution images of intact cells for the first time. Called photoactivated localization microscopy (PALM), the new technique provided 10 times the resolution of conventional light microscopy.

Another postdoc in Jennifer’s lab, Rachid Sougrat, now at King Abdullah University of Science and Technology in Saudi Arabia, correlated the PALM images of cell organelles to electron micrographs to validate the new technique, yet another important contribution.

Upon hearing of Eric’s Nobel Prize, Jennifer told me: “We didn’t imagine at the time how quickly the point localization imaging would become such an amazing enabling technology; but it caught on like wildfire, expanding throughout many fields of biology.”

That it did! PALM and all its manifestations are at the heart of extraordinary discoveries.  We think this is a quintessential intramural story. We see the elements of high-risk/high-reward research and the importance of collaboration and the freedom to pursue ideas, as well as NIH scientists with the vision to encourage and support this research.

Read the landmark 2006 Science article by Eric, Harald, and the NICHD team, “Imaging Intracellular Fluorescent Proteins at Nanometer Resolution,” at http://www.sciencemag.org/content/313/5793/1642.long.

The story of the origins of Eric Betzig’s Nobel Prize in Jennifer Lippincott-Schwartz’s lab is one that needs to be told. I feel proud to work for an organization that can attract such talent and enable such remarkable science to happen.

Kudos to Eric and to Jennifer and her crew.

Michael M. Gottesman

Deputy Director for Intramural Research

The perils of Word

October 14, 2014

Many biology journals insist on receiving manuscripts in Microsoft Word prior to publication. Even though this probably violates some anti-trust law, I generally comply, to the point of painfully converting LaTeX manuscripts into Word on more than one occasion. Word is particularly unpleasant when writing papers with equations. Although the newer versions have a new equation editing system, I don’t use it because a journal once forced me to convert all the equations in a past submission to the old Equation Editor system (the poor person’s version of MathType). It worked reasonably well in Word versions before 2008 but has now become very buggy in Word 2011. For instance, when I double-click on an equation to edit it, a blank Equation Editor window also pops up, which I have to close before the one I want will work. Additionally, when I reopen papers saved in the .docx format, the equations lose their alignment. Instead of the equation being centered on the text line, its base is aligned to the line, making inline equations float above it. Finally, a big problem is equation numbering. LaTeX has a nice system where you give each equation a label and it assigns the numbers automatically when you compile, so you can insert and delete equations without having to renumber them all. Is there a way to do this in Word? Am I the only one with these problems? Are there workarounds?
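For concreteness, here is a minimal sketch of the LaTeX labeling scheme I mean (the label name eq:pyth is arbitrary, and \eqref comes from the amsmath package):

\begin{equation}
a^2 + b^2 = c^2
\label{eq:pyth}
\end{equation}
Equation \eqref{eq:pyth} then picks up whatever number it currently has.

Insert or delete equations anywhere in the document, recompile, and every number and every cross-reference updates itself.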

Incompetence is the norm

October 11, 2014

People have been justly anguished by the recent gross mishandling of the Ebola patients in Texas and Spain and the risible lapse in security at the White House. The conventional wisdom is that these demonstrations of incompetence are a recent phenomenon signifying a breakdown in governmental competence. However, I think that incompetence has always been the norm; any semblance of competence in the past is due mostly to luck and to the fact that people do not exploit incompetent governance, because of a general tendency towards docile cooperativity (as well as the incompetence of bad actors). In many ways, it is quite amazing how reliably citizens of the US and other OECD members respect traffic laws, pay their bills, and service their debts on time. This is a huge boon to an economy since excessive resources do not need to be spent on enforcing rules. This does not hold in some if not many developing nations, where corruption is a major problem (cf. this op-ed in the Times today). In fact, it is still an evolutionary puzzle why agents cooperate for the benefit of the group even though it is an advantage for an individual to defect. Cooperativity is also not likely to be all genetic, since immigrants tend to follow the social norms of their adopted country, although there could be a self-selection effect here. However, the social pressure to cooperate could evaporate quickly if there is a perceived lack of enforcement, as evidenced by looting after natural disasters or the abundance of insider trading in the finance industry. Perhaps, as suggested by the work of Karl Sigmund and other evolutionary theorists, cooperativity is a transient phenomenon that will eventually be replaced by the evolutionarily more stable state of noncooperativity. In that sense, perceived incompetence could be rising not because we are less able but because we are less cooperative.

Selection of the week

October 10, 2014

Here is the iconic soprano Maria Callas singing Puccini’s aria “O mio babbino caro” from the opera Gianni Schicchi. It was also used in the 1985 film “A Room with a View.”

Nobel Prize in Physiology or Medicine

October 6, 2014

The Nobel Prize in Physiology or Medicine was awarded this morning to John O’Keefe, May-Britt Moser, and Edvard Moser for the discovery of place cells and grid cells, respectively. O’Keefe discovered in 1971 that there were cells in the hippocampus that fired when a rat was in a certain location. He called these place cells, and a whole generation of scientists, including the Mosers, has been studying them ever since. In 2005, the Mosers discovered grid cells in the entorhinal cortex, which feeds into the hippocampus. Grid cells fire whenever a rat passes through periodically spaced locations in a given area such as a room, dividing the room into a triangular lattice. Different grid cells have different frequencies, phases and orientations.
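For those who like formulas, a common mathematical idealization (my sketch, not anything from the prize citation) writes a grid cell’s firing rate as a sum of three cosine gratings whose wave vectors sit 60 degrees apart, which produces exactly this triangular lattice of firing fields:

g(\mathbf{r}) \propto \sum_{i=1}^{3} \cos\left(\mathbf{k}_i \cdot (\mathbf{r} - \mathbf{r}_0)\right), \qquad \mathbf{k}_i = k\,(\cos\theta_i, \sin\theta_i), \quad \theta_i = \theta_0 + 60^\circ (i-1)

The wave number k sets the grid spacing (frequency), \theta_0 the orientation, and \mathbf{r}_0 the phase, which are precisely the parameters that vary from one grid cell to the next.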

For humans, the hippocampus is an area of the brain known to be associated with memory formation. Much of what we know about the hippocampus in humans was learned by studying Henry Molaison, known as H.M. in the scientific literature, who had both of his hippocampi removed as a young man because of severe epileptic fits. H.M. could carry on a conversation but could not remember any of it if he was distracted. He had to be re-introduced to the medical staff that treated and observed him every day. H.M. showed us that memory comes in at least three forms. There is very short term or working memory, necessary to carry on a conversation or remember a phone number long enough to dial it. Then there is long term explicit or declarative memory, for which the hippocampus is essential. This is the memory of episodic events in your life and random learned facts about the world. People without a hippocampus, as depicted in the film Memento, cannot form explicit memories. Finally, there is implicit long term memory, such as how to ride a bicycle or use a pencil. This type of memory does not seem to require the hippocampus, as evidenced by the fact that H.M. could become more skilled at certain games that he was taught to play daily even though each time he professed never to have played them before. The implication of the hippocampus in spatial navigation in humans is more recent. There was the famous study showing that London cab drivers have an enlarged hippocampus compared to controls, and neural imaging has now revealed something akin to place fields in humans.

While the three new laureates are all excellent scientists and deserving of the prize, this is still another example of how the Nobel Prize singles out individuals at the expense of other important contributors. O’Keefe’s coauthor on the 1971 paper, Jonathan Dostrovsky, was not included. I’ve also been told that my former colleague at the University of Pittsburgh, Bill Skaggs, was the one who pointed out to the Mosers that the patterns in their data corresponded to grid cells. Bill was one of the most brilliant scientists I have known, but he did not secure tenure and, as far as I know, is no longer directly involved in academic research. The academic system should find a way to make better use of the skills of people like Bill and Douglas Prasher.

Finally, the hype surrounding the prize announcement is that the research could be important for treating Alzheimer’s disease, which is associated with a loss of episodic memory and navigational ability. However, if we accept the premise that there must be a neural correlate of anything an animal can do, then place cells must necessarily exist, given that rats can discern spatial location. What we did not know was where these cells are; O’Keefe showed us that they are in the hippocampus, although we could also have linked the hippocampus to the memory loss of Alzheimer’s disease from H.M. alone. The existence of grid cells is perhaps less expected, since it is not inherently obvious that we naturally divide a room into a triangular lattice. It is plausible that grid cells do the computation giving rise to place cells, but we still need to understand the computation that gives rise to grid cells, and it is not obvious to me that grid cells are easier to compute than place cells.

Selection of the week

October 3, 2014

Here is a short snippet from a BBC show of fifty years ago featuring the great English guitarist Julian Bream playing two preludes by Brazilian composer Heitor Villa-Lobos.

Linear and nonlinear thinking

October 1, 2014

A linear system is one where the whole is precisely the sum of its parts. You can know how different parts will act together by simply knowing how they act in isolation. A nonlinear function lacks this nice property. For example, consider a linear function f(x). It satisfies the property that f(a x + b y) = a f(x) + b f(y). The function of the sum is the sum of the functions. One important point to note is that what is considered to be the paragon of linearity, namely a line on a graph, i.e. f(x) = mx + b is not linear since f(x + y) = m (x + y) + b \ne f(x)+ f(y). The y-intercept b destroys the linearity of the line. A line is instead affine, which is to say a linear function shifted by a constant. A linear differential equation has the form

\frac{dx}{dt} = M x

where x can be a vector of any dimension and M is a matrix. Solutions of a linear differential equation can be multiplied by any constant and added together to give new solutions.
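To spell out that superposition property in symbols (just a restatement of the sentence above): if x_1(t) and x_2(t) both satisfy the equation, then for any constants a and b

\frac{d}{dt}(a x_1 + b x_2) = a M x_1 + b M x_2 = M (a x_1 + b x_2)

so a x_1 + b x_2 is again a solution. Equivalently, every solution can be written as x(t) = e^{M t} x(0), where e^{M t} is the matrix exponential.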

Linearity is thus essential for engineering. If you are designing a bridge then you simply add as many struts as you need to support the predicted load. Electronic circuit design is also linear in the sense that you combine as many logic circuits as you need to achieve your end. Imagine if bridge mechanics were completely nonlinear so that you had no way to predict how a bunch of struts would behave when assembled together. You would then have to test each combination to see how they work. Now, real bridges are not entirely linear but the deviations from pure linearity are mild enough that you can make predictions or have rules of thumb of what will work and what will not.

Chemistry is an example of a system that is highly nonlinear. You can’t know how a compound will act just based on the properties of its components. For example, you can’t simply mix glass and steel together to get a strong and hard transparent material. You need to be clever in coming up with something like gorilla glass used in iPhones. This is why engineering new drugs is so hard. Although organic chemistry is quite sophisticated in its ability to synthesize various compounds there is no systematic way to generate molecules of a given shape or potency. We really don’t know how molecules will behave until we create them. Hence, what is usually done in drug discovery is to screen a large number of molecules against specific targets and hope. I was at a computer-aided drug design Gordon conference a few years ago and you could cut the despair and angst with a knife.

That is not to say that engineering is completely hopeless for nonlinear systems. Most nonlinear systems act linearly if you perturb them gently enough, which is why linear regression is so useful and prevalent. Hence, even though the global climate is a highly nonlinear system, it probably acts close to linearly for small changes. Thus I feel confident that we can predict the increase in temperature for a 5% or 10% change in the concentration of greenhouse gases, but I am much less confident about what will happen if we double or treble them. How linearly a system acts depends on how close it is to a critical or bifurcation point. If the climate is very far from a bifurcation then it could act linearly over a large range, but if we’re near a bifurcation then who knows what will happen if we cross it.
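The claim that gentle perturbations look linear is just the first-order Taylor expansion about an operating point x_0 (a generic sketch, not a climate model):

f(x_0 + \delta x) \approx f(x_0) + f'(x_0)\, \delta x

For small \delta x the response is proportional to the perturbation; the approximation fails once \delta x is large enough that the neglected higher-order terms matter, for instance near a bifurcation.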

I think biology is an example of a nonlinear system with a wide linear range. Recent research has found that many complex traits and diseases like height and type 2 diabetes depend on a large number of linearly acting genes (see here). Their genetic effects are additive. Any nonlinear interactions they have with other genes (i.e. epistasis) are tiny. That is not to say that there are no nonlinear interactions between genes. It only suggests that common variations are mostly linear. This makes sense from an engineering and evolutionary perspective. It is hard to do either in a highly nonlinear regime. You need some predictability if you make a small change. If changing an allele had completely different effects depending on what other genes were present then natural selection would be hard pressed to act on it.
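To make “linearly acting genes” concrete, the standard additive model (my notation, not taken from the study linked above) writes the trait as

y = \mu + \sum_i \beta_i x_i + \epsilon

where y is the trait value (say height), x_i counts the copies of variant i an individual carries (0, 1, or 2), \beta_i is that variant’s effect size, and \epsilon collects environmental and unmodeled effects. Epistasis would show up as cross terms like \beta_{ij} x_i x_j, and the finding is that for common variants such terms contribute very little.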

However, you also can’t have a perfectly linear system because you can’t make complex things. An exclusive OR logic circuit cannot be constructed without a threshold nonlinearity. Hence, biology and engineering must involve “the linear combination of nonlinear gadgets”. A bridge is the linear combination of highly nonlinear steel struts and cables. A computer is the linear combination of nonlinear logic gates. This occurs at all scales as well. In biology, you have nonlinear molecules forming a linear genetic code. Two nonlinear mitochondria may combine mostly linearly in a cell and two liver cells may combine mostly linearly in a liver.  This effective linearity is why organisms can have a wide range of scales. A mouse liver is thousands of times smaller than a human one but their functions are mostly the same. You also don’t need very many nonlinear gadgets to have extreme complexity. The genes between organisms can be mostly conserved while the phenotypes are widely divergent.
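As a concrete instance of a linear combination of nonlinear gadgets, XOR itself can be written as a weighted sum of two threshold units, where H is the Heaviside step function (one sketch among many equivalent constructions):

\mathrm{XOR}(x,y) = H(x + y - 0.5) - H(x + y - 1.5), \qquad x, y \in \{0,1\}

Each threshold unit is nonlinear, but the assembly is just their difference: the nonlinearity lives entirely in the gadgets.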

Selection of the week

September 26, 2014

Let’s bring in the fall with Vivaldi’s Autumn from the Four Seasons. Detroit is mostly known for cars and bankruptcy but it also has great culture.  Here is the Detroit Symphony Orchestra with Scottish violinist Nicola Benedetti.

