The perils of Word

Many biology journals insist on receiving manuscripts in Microsoft Word prior to publication. Even though this probably violates some antitrust law, I generally comply, to the point of painfully converting LaTeX manuscripts into Word on more than one occasion. Word is particularly unpleasant when writing papers with equations. Although the newer versions have a new equation editing system, I don’t use it because for a past submission a journal forced me to convert all the equations to the old equation editor system (the poor person’s version of MathType). It worked reasonably well in Word versions before 2008 but has become very buggy in Word 2011. For instance, when I double-click on an equation to edit it, a blank equation editor window also pops up, which I have to close before the one I wanted will work. Additionally, when I reopen papers saved in the .docx format, the equations lose their alignment. Instead of the center of the equation being aligned to the line, the base is aligned, making inline equations float above the line. Finally, a big problem is equation numbering. LaTeX has a nice system where you give each equation a label and it assigns numbers automatically when you compile. This way you can insert and delete equations without having to renumber them all. Is there a way you can do this in Word? Am I the only one with these problems? Are there workarounds?
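For anyone who has not used it, the LaTeX mechanism is just a label attached to each equation and a reference command wherever the number is needed; a minimal sketch (the label name eq:linear is arbitrary):

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The label travels with the equation; LaTeX assigns the number at compile time.
\begin{equation}\label{eq:linear}
  \frac{dx}{dt} = M x
\end{equation}
% References update automatically, so equations can be inserted or deleted
% without renumbering anything by hand.
As Eq.~\eqref{eq:linear} shows, \dots
\end{document}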

Incompetence is the norm

People have been justly anguished by the recent gross mishandling of the Ebola patients in Texas and Spain and the risible lapse in security at the White House. The conventional wisdom is that these demonstrations of incompetence are a recent phenomenon signifying a breakdown in governmental competence. However, I think that incompetence has always been the norm; any semblance of competence in the past is due mostly to luck and the fact that people do not exploit incompetent governance because of a general tendency towards docile cooperativity (as well as the incompetence of bad actors). In many ways, it is quite amazing how reliably citizens of the US and other OECD members respect traffic laws, pay their bills and service their debts on time. This is a huge boon to an economy since excessive resources do not need to be spent on enforcing rules. This does not hold in some if not many developing nations, where corruption is a major problem (cf. this op-ed in the Times today). In fact, it is still an evolutionary puzzle why agents cooperate for the benefit of the group even though it is advantageous for an individual to defect. Cooperativity is also not likely to be all genetic, since immigrants tend to follow the social norms of their adopted country, although there could be a self-selection effect here. However, the social pressure to cooperate could evaporate quickly if there is a perception that enforcement is lacking, as evidenced by looting following natural disasters or the abundance of insider trading in the finance industry. Perhaps, as suggested by the work of Karl Sigmund and other evolutionary theorists, cooperativity is a transient phenomenon and will eventually be replaced by the evolutionarily more stable state of noncooperativity. In that sense, perceived incompetence could be rising not because we are less able but because we are less cooperative.

Nobel Prize in Physiology or Medicine

The Nobel Prize for Physiology or Medicine was awarded this morning to John O’Keefe, May-Britt Moser and Edvard Moser for the discovery of place cells and grid cells, respectively. O’Keefe discovered in 1971 that there were cells in the hippocampus that fired when a rat was in a certain location. He called these place cells, and a whole generation of scientists, including the Mosers, has been studying them ever since. In 2005, the Mosers discovered grid cells in the entorhinal cortex, which feeds into the hippocampus. Grid cells fire whenever a rat passes through the vertices of a periodic triangular lattice spanning a given area such as a room. Different grid cells have different spatial frequencies (lattice spacings), phases and orientations.

For humans, the hippocampus is an area of the brain known to be associated with memory formation. Much of what we know about the human hippocampus was learned by studying Henry Molaison, known as H.M. in the scientific literature, who had both of his hippocampi removed as a young man because of severe epileptic fits. H.M. could carry on a conversation but could not remember any of it if he was distracted. He had to be re-introduced to the medical staff that treated and observed him every day. H.M. showed us that memory comes in at least three forms. There is very short term or working memory, necessary to carry on a conversation or remember a phone number long enough to dial it. Then there is long term explicit or declarative memory, for which the hippocampus is essential. This is the memory of episodic events in your life and random learned facts about the world. People without a hippocampus, as depicted in the film Memento, cannot form explicit memories. Finally, there is implicit long term memory, such as how to ride a bicycle or use a pencil. This type of memory does not seem to require the hippocampus, as evidenced by the fact that H.M. could become more skilled at certain games that he was taught to play daily even though he professed each time never to have played them before. The implication of the hippocampus in spatial location in humans is more recent. A famous study showed that London cab drivers have an enlarged hippocampus compared to controls, and neural imaging has now revealed something akin to place fields in humans.

While the three new laureates are all excellent scientists and deserving of the prize, this is still another example of how the Nobel prize singles out individuals at the expense of other important contributors. O’Keefe’s coauthor on the 1971 paper, Jonathan Dostrovsky, was not included. I’ve also been told that my former colleague at the University of Pittsburgh, Bill Skaggs, was the one who pointed out to the Mosers that the patterns in their data corresponded to grid cells. Bill was one of the most brilliant scientists I have known, but he did not secure tenure and is not directly involved in academic research anymore as far as I know. The academic system should find a way to make better use of the skills of people like Bill and Douglas Prasher.

Finally, the hype surrounding the prize announcement is that the research could be important for treating Alzheimer’s disease, which is associated with a loss of episodic memory and navigational ability. However, if we accept the premise that there must be a neural correlate of anything an animal can do, then place cells must necessarily exist given that rats can discern spatial location. What we did not know was where these cells are; O’Keefe showed us that they are in the hippocampus, although we could also have associated the hippocampus with the memory loss of Alzheimer’s disease from H.M. The existence of grid cells is perhaps less expected, since there is no obvious reason we should naturally divide a room into a triangular lattice. It is plausible that grid cells do the computation giving rise to place cells, but we still need to understand the computation that gives rise to grid cells. It is not obvious to me that grid cells are easier to compute than place cells.
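One common modeling idea along these lines (e.g. Solstad, Moser and Einevoll, 2006), not anything established in this post, is that a place field can arise as a thresholded linear sum of grid cells whose phases line up at one location. A minimal numpy sketch, in which the grid spacings, room size and threshold are all made-up illustrative values:

import numpy as np

def grid_cell(x, y, spacing, orientation=0.0, phase=(0.0, 0.0)):
    # Idealized grid cell firing map: the sum of three plane waves whose
    # wave vectors are 60 degrees apart, which produces a triangular lattice.
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for the chosen spacing
    rate = np.zeros_like(x, dtype=float)
    for i in range(3):
        theta = orientation + i * np.pi / 3
        rate += np.cos(k * ((x - phase[0]) * np.cos(theta) +
                            (y - phase[1]) * np.sin(theta)))
    return rate

# A 2 m x 2 m "room" sampled on a 200 x 200 set of points.
xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))

# Sum grid cells of different spacings whose phases all line up at (1.0, 1.0).
spacings = [0.3, 0.42, 0.59, 0.83]
summed = sum(grid_cell(xs, ys, s, phase=(1.0, 1.0)) for s in spacings)

# Thresholding the sum leaves a single localized bump: a crude place field.
place_field = np.maximum(summed - 0.8 * summed.max(), 0.0)
peak = np.unravel_index(place_field.argmax(), place_field.shape)
print("place field peaks near", xs[peak], ys[peak])   # approximately (1.0, 1.0)

This only illustrates the grid-to-place direction; the computation that produces the grid pattern itself is exactly the part we still need to understand.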

Linear and nonlinear thinking

A linear system is one where the whole is precisely the sum of its parts. You can know how different parts will act together by simply knowing how they act in isolation. A nonlinear system lacks this nice property. For example, a linear function f(x) satisfies the property that f(a x + b y) = a f(x) + b f(y). The function of a weighted sum is the weighted sum of the functions. One important point to note is that what is considered the paragon of linearity, namely a line on a graph, i.e. f(x) = mx + b, is not linear, since f(x + y) = m(x + y) + b while f(x) + f(y) = m(x + y) + 2b. The y-intercept b destroys the linearity of the line. A line is instead affine, which is to say a linear function shifted by a constant. A linear differential equation has the form

\frac{dx}{dt} = M x

where x is a vector of any dimension and M is a matrix. Solutions of a linear differential equation can be multiplied by any constant and added together to give new solutions; this is the principle of superposition.
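Superposition follows in one line from the definition: if x_1(t) and x_2(t) are both solutions, then for any constants a and b

\frac{d}{dt}\left(a x_1 + b x_2\right) = a \frac{dx_1}{dt} + b \frac{dx_2}{dt} = a M x_1 + b M x_2 = M\left(a x_1 + b x_2\right)

so the combination a x_1 + b x_2 is also a solution.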

Linearity is thus essential for engineering. If you are designing a bridge then you simply add as many struts as you need to support the predicted load. Electronic circuit design is also linear in the sense that you combine as many logic circuits as you need to achieve your end. Imagine if bridge mechanics were completely nonlinear, so that you had no way to predict how a bunch of struts would behave when assembled together. You would then have to test each combination to see how it works. Now, real bridges are not entirely linear, but the deviations from pure linearity are mild enough that you can make predictions or have rules of thumb for what will work and what will not.

Chemistry is an example of a system that is highly nonlinear. You can’t know how a compound will act just based on the properties of its components. For example, you can’t simply mix glass and steel together to get a strong, hard and transparent material. You need to be clever to come up with something like the Gorilla Glass used in iPhones. This is why engineering new drugs is so hard. Although organic chemistry is quite sophisticated in its ability to synthesize various compounds, there is no systematic way to generate molecules of a given shape or potency. We really don’t know how molecules will behave until we create them. Hence, what is usually done in drug discovery is to screen a large number of molecules against specific targets and hope. I was at a Gordon Conference on computer-aided drug design a few years ago, and you could cut the despair and angst with a knife.

That is not to say that engineering is completely hopeless for nonlinear systems. Most nonlinear systems act linearly if you perturb them gently enough. That is why linear regression is so useful and prevalent. Hence, even though the global climate system is highly nonlinear, it probably acts close to linearly for small changes. Thus I feel confident that we can predict the increase in temperature for a 5% or 10% change in the concentration of greenhouse gases, but I am much less confident in what will happen if we double or treble them. How linearly a system will act depends on how close it is to a critical or bifurcation point. If the climate is very far from a bifurcation then it could act linearly over a large range, but if we’re near a bifurcation then who knows what will happen if we cross it.
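A toy numerical illustration of this point (the saturating response curve below is entirely made up and has nothing to do with real climate dynamics): a linear fit to small perturbations around an operating point predicts nearby behavior well but fails badly when extrapolated to large changes.

import numpy as np

def response(forcing):
    # A made-up nonlinear response that saturates for large forcing.
    return np.tanh(forcing)

rng = np.random.default_rng(0)
base = 0.2                                      # arbitrary operating point

# Probe the system with small perturbations and fit a line (linear regression).
small = base + rng.uniform(-0.05, 0.05, 200)
slope, intercept = np.polyfit(small, response(small), 1)

for delta in (0.02, 0.1, 1.0, 3.0):             # small versus large changes
    linear_pred = slope * (base + delta) + intercept
    true_val = response(base + delta)
    print(f"delta={delta:4.2f}  linear={linear_pred:6.3f}  true={true_val:6.3f}")

For the small deltas the linear prediction is nearly exact; for the large ones it overshoots wildly because the fit knows nothing about the saturation.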

I think biology is an example of a nonlinear system with a wide linear range. Recent research has found that many complex traits and diseases, like height and type 2 diabetes, depend on a large number of linearly acting genes (see here). Their genetic effects are additive. Any nonlinear interactions they have with other genes (i.e. epistasis) are tiny. That is not to say that there are no nonlinear interactions between genes. It only suggests that common variants act mostly linearly. This makes sense from an engineering and evolutionary perspective. It is hard to do either in a highly nonlinear regime. You need some predictability when you make a small change. If changing an allele had completely different effects depending on what other genes were present, then natural selection would be hard pressed to act on it.
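Here is a toy simulation of what "mostly additive" means in practice (all the numbers are invented for illustration): if per-allele effects simply add, with only a tiny epistatic interaction on top, then a purely additive linear model captures nearly all of the trait variance.

import numpy as np

rng = np.random.default_rng(1)
n_people, n_genes = 2000, 200

# Genotypes coded as 0, 1 or 2 copies of the minor allele (toy data).
G = rng.integers(0, 3, size=(n_people, n_genes)).astype(float)

beta = rng.normal(0, 1, n_genes)            # additive per-allele effect sizes
additive = G @ beta

# A small pairwise (epistatic) interaction between two arbitrary genes, plus noise.
epistasis = 0.2 * G[:, 0] * G[:, 1]
trait = additive + epistasis + rng.normal(0, 1, n_people)

# Fit a purely additive linear model and ask how much variance it explains.
X = np.column_stack([G, np.ones(n_people)])
coef, *_ = np.linalg.lstsq(X, trait, rcond=None)
r2 = 1 - np.var(trait - X @ coef) / np.var(trait)
print(f"variance explained by the additive model: {r2:.3f}")

The additive fit explains essentially all of the variance here because the interaction term is small; crank up its coefficient and the linear description degrades.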

However, you also can’t have a perfectly linear system, because then you can’t make complex things. An exclusive OR (XOR) logic circuit cannot be constructed without a threshold nonlinearity. Hence, biology and engineering must involve “the linear combination of nonlinear gadgets”. A bridge is the linear combination of highly nonlinear steel struts and cables. A computer is the linear combination of nonlinear logic gates. This occurs at all scales as well. In biology, you have nonlinear molecules forming a linear genetic code. Two nonlinear mitochondria may combine mostly linearly in a cell, and two liver cells may combine mostly linearly in a liver. This effective linearity is why organisms can span such a wide range of scales. A mouse liver is thousands of times smaller than a human one but their functions are mostly the same. You also don’t need very many nonlinear gadgets to generate extreme complexity. The genes of different organisms can be mostly conserved while the phenotypes are widely divergent.
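To make the XOR point concrete, here is a minimal sketch of my own (not anything from the post): no single weighted sum of the two inputs can reproduce XOR, but composing a few threshold gadgets does, and the gadgets are combined by nothing more than adding their outputs before the next threshold.

def step(x):
    # The threshold nonlinearity: fire (1) if the weighted sum crosses zero.
    return 1 if x > 0 else 0

def xor(a, b):
    # Two nonlinear threshold gadgets ...
    or_gate = step(a + b - 0.5)       # ON if at least one input is on
    nand_gate = step(1.5 - a - b)     # ON unless both inputs are on
    # ... whose outputs are linearly summed and passed through one more threshold.
    return step(or_gate + nand_gate - 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # prints the XOR truth table: 0 1 1 0

Each gate on its own is a crude caricature of a neuron or a transistor circuit; the point is only that the complexity comes from the thresholds, while the wiring between them is a plain linear sum.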