What Uber doesn’t get

You may have heard that the ride-hailing services Uber and Lyft have pulled out of Austin, TX because they refuse to be regulated. You can read about the details here. The city wanted to fingerprint drivers, as it does for taxi drivers, but Uber and Lyft forced a referendum to exempt themselves, threatening to leave otherwise. The city voted against them. I personally use Uber and really like it, but what I like about Uber has nothing to do with Uber per se or with regulation. What I like is that 1) no money needs to be exchanged, especially the tip, and 2) the price is essentially fixed, so it is in the driver's interest to get me to my destination as fast as possible. I have been taken on joy rides far too many times by taxi drivers trying to maximize the fare, and I never know how much to tip. These are things that regulated taxis could and should implement. I do think it is extremely unfair that Uber can waltz into a city like New York and compete against highly regulated taxis, whose operators have paid as much as a million dollars for the right to operate. Uber and Lyft should collaborate with existing taxi companies rather than trying to put them out of business. There was a reason to regulate taxis (e.g. safety, traffic control, fraud protection), and that reason applies whether I hail a cab on the street or use a smartphone app.

The nature of evil

In our current angst over terrorism and extremism, I think it is important to understand the motivation of the agents behind evil acts if we are ever to remedy the situation. The observable element of evil (actus reus) is the harm done to innocent individuals. However, in order to prevent evil acts, we must understand the motivation behind them (mens rea). The Radiolab podcast “The Bad Show” gives an excellent survey of the possible varieties of evil. I will categorize evil into three types, each with increasing global impact. The first is the compulsion or desire within an individual to harm another. This is what motivates serial killers like the one described in the show. Generally, such evil is isolated and its impact limited, albeit grisly. The second is related to what philosopher Hannah Arendt called “The Banality of Evil.” This is an evil where the goal of the agent is not to inflict harm per se, as in the first case, but where, in the process of pursuing some other goal, there is no attempt to avoid possible harm to others. This type of sociopathic evil is much more dangerous and widespread, as seen most recently in Volkswagen’s fraudulent scheme to pass emissions standards. Although there are sociopathic individuals who really have no concern for others, I think many perpetrators in this category are swayed by cultural norms or pressure to conform. The third type of evil is when the perpetrator believes the act is not evil at all but a means to a just and noble end. This is the most pernicious form of evil, because when it is done by “your side” it is not considered evil. For example, the dropping of atomic bombs on Japan was considered a necessary sacrifice of a few hundred thousand lives to end WWII and save many more.

I think it is important to understand that the current wave of terrorism and unrest in the Middle East is motivated by the third type. Young people are joining ISIS not because they particularly enjoy inflicting harm on others or because they don’t care how their actions affect others, but because they are rallying to a cause they believe to be right and important. Many if not most suicide bombers come from middle-class families, and many are women. They are not merely motivated by the promise of a better afterlife or by a dire economic situation, as I once believed. They are doing this because they believe in the cause and feel that they are part of something bigger than themselves. The same unwavering belief and hubris that led people to Australia fifty thousand years ago is probably what motivates ISIS today. They are not nihilists, as many in the West believe. They have an entirely different value system, and they view the West as being as evil as the West sees them. Until we fully acknowledge this, we will not be able to end it.

Why science is hard to believe

Here is an excerpt from a well-written opinion piece by Washington Post columnist Joel Achenbach:

Washington Post: We live in an age when all manner of scientific knowledge — from the safety of fluoride and vaccines to the reality of climate change — faces organized and often furious opposition. Empowered by their own sources of information and their own interpretations of research, doubters have declared war on the consensus of experts. There are so many of these controversies these days, you’d think a diabolical agency had put something in the water to make people argumentative.

Science doubt has become a pop-culture meme. In the recent movie “Interstellar,” set in a futuristic, downtrodden America where NASA has been forced into hiding, school textbooks say the Apollo moon landings were faked.

I recommend reading the whole piece.

The demise of the American cappuccino

When I was a postdoc at BU in the nineties, I used to go to a cafe on Commonwealth Ave just down the street from my office on Cummington Street. I don’t remember the name of the place, but I do remember getting a cappuccino that looked something like this: [photo of a classic cappuccino]. Now, I usually get something that looks like this: [photo of a dry cappuccino]. Instead of a light, delicate layer of milk with a touch of foam floating on rich espresso, I get a lump of dry foam sitting on super-acidic, burnt quasi-espresso. How did this unfortunate circumstance occur? I’m not sure, but I think it was because of Starbucks. Scaling up massively means you get what the average customer wants, or what Starbucks thinks they want. This then sets a standard, and other cafes have to follow suit because of consumer expectations. Also, making a real cappuccino takes training and a lot of practice, and there is no way Starbucks could train enough baristas. Now, I’m not an anti-Starbucks person by any means. I think it is nice that there is always a fairly nice space with free wifi on every corner, but I do miss getting a real cappuccino. I believe there is a real business opportunity out there for cafes to start offering better espresso drinks.

What’s wrong with neuroscience

Here is a cute parable in Frontiers in Neuroscience from cognitive scientist Joshua Brown at Indiana University. It mirrors a lot of what I’ve been saying for the past few years:

The tale of the neuroscientists and the computer: Why mechanistic theory matters

http://journal.frontiersin.org/Journal/10.3389/fnins.2014.00349/full

A little over a decade ago, a biologist asked the question “Can a biologist fix a radio?” (Lazebnik, 2002). That question framed an amusing yet profound discussion of which methods are most appropriate to understand the inner workings of a system, such as a radio. For the engineer, the answer is straightforward: you trace out the transistors, resistors, capacitors etc., and then draw an electrical circuit diagram. At that point you have understood how the radio works and have sufficient information to reproduce its function. For the biologist, as Lazebnik suggests, the answer is more complicated. You first get a hundred radios, snip out one transistor in each, and observe what happens. Perhaps the radio will make a peculiar buzzing noise that is statistically significant across the population of radios, which indicates that the transistor is necessary to make the sound normal. Or perhaps we should snip out a resistor, and then homogenize it to find out the relative composition of silicon, carbon, etc. We might find that certain compositions correlate with louder volumes, for example, or that if we modify the composition, the radio volume decreases. In the end, we might draw a kind of neat box-and-arrow diagram, in which the antenna feeds to the circuit board, and the circuit board feeds to the speaker, and the microphone feeds to the recording circuit, and so on, based on these empirical studies. The only problem is that this does not actually show how the radio works, at least not in any way that would allow us to reproduce the function of the radio given the diagram. As Lazebnik argues, even though we could multiply experiments to add pieces of the diagram, we still won’t really understand how the radio works. To paraphrase Feynman, if we cannot recreate it, then perhaps we have not understood it (Eliasmith and Trujillo, 2014; Hawking, 2001).

Lazebnik’s argument should not be construed to disparage biological research in general. There are abundant examples of how molecular biology has led to breakthroughs, including many if not all of the pharmaceuticals currently on the market. Likewise, research in psychology has provided countless insights that have led to useful interventions, for instance in cognitive behavioral therapy (Rothbaum et al., 2000). These are valuable ends in and of themselves. Still, are we missing greater breakthroughs by not asking the right questions that would illuminate the larger picture? Within the fields of systems, cognitive, and behavioral neuroscience in particular, I fear we are in danger of losing the meaning of the question “How does it work?” As the saying goes, if you have a hammer, everything starts to look like a nail. Having been trained in engineering as well as neuroscience and psychology, I find all of the methods of these disciplines useful. Still, many researchers are especially well-trained in psychology, and so the research questions focus predominantly on understanding which brain regions carry out which psychological or cognitive functions, following the established paradigms of psychological research. This has resulted in the question often being reframed as “What brain regions are active during what psychological processes?”, or the more sophisticated “What networks are active?”, instead of “What mechanisms are necessary to reproduce the essential cognitive functions and activity patterns in the system?” To illustrate the significance of this difference, consider a computer. How does it work?

**The Tale**

Once upon a time, a group of neuroscientists happened upon a computer (Carandini, 2012). Not knowing how it worked, they each decided to find out how it sensed a variety of inputs and generated the sophisticated output seen on its display. The EEG researcher quickly went to work, putting an EEG cap on the motherboard and measuring voltages at various points all over it, including on the outer case for a reference point. She found that when the hard disk was accessed, the disk controller showed higher voltages on average, and especially more power in the higher frequency bands. When there was a lot of computation, a lot of activity was seen around the CPU. Furthermore, the CPU showed increased activity in a way that is time-locked to computational demands. “See here,” the researcher declared, “we now have a fairly temporally precise picture of which regions are active, and with what frequency spectra.” But has she really understood how the computer works?
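
To make the EEG researcher’s method concrete, here is a minimal sketch of a band-power analysis in Python; the signal, sampling rate, and band edges are all invented for illustration:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                               # sampling rate in Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)              # 10 s "recording"
# toy voltage trace: a slow hum, a high-frequency component, and noise
x = (np.sin(2 * np.pi * 5 * t)
     + 0.5 * np.sin(2 * np.pi * 80 * t)
     + 0.1 * np.random.randn(t.size))

f, pxx = welch(x, fs=fs, nperseg=1024)    # power spectral density estimate

def band_power(f, pxx, lo, hi):
    """Sum the PSD over the band [lo, hi) Hz."""
    mask = (f >= lo) & (f < hi)
    return pxx[mask].sum() * (f[1] - f[0])

print("low band (1-30 Hz) power:  ", band_power(f, pxx, 1, 30))
print("high band (30-120 Hz) power:", band_power(f, pxx, 30, 120))
```

The output is a spectrum per recording site, which is exactly what the researcher reports, and exactly what falls short of a mechanism.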

Next, the enterprising physicist and cognitive neuroscientist came along. “We don’t have enough spatial resolution to see inside the computer,” they said. So they developed a new imaging technique by which activity can be measured, called the Metabolic Radiation Imaging (MRI) camera, which now measures the heat (infrared) given off by each part of the computer in the course of its operations. At first, they found simply that lots of math operations lead to heat given off by certain parts of the CPU, and that memory storage involved the RAM, and that file operations engaged the hard disk. A flurry of papers followed, showing that the CPU and other areas are activated by a variety of applications such as word-processing, speech recognition, game play, display updating, storing new memories, retrieving from memory, etc.
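
The activation studies amount to a statistical contrast: is a region’s signal reliably higher during a task than at rest? A toy version with simulated data (all numbers are made up):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# simulated heat readings from one "region" across repeated trials
rest = rng.normal(loc=1.0, scale=0.3, size=50)   # baseline trials
task = rng.normal(loc=1.4, scale=0.3, size=50)   # math-heavy trials

t_stat, p_val = ttest_ind(task, rest)            # two-sample t-test
print(f"region 'activated' by task: t = {t_stat:.2f}, p = {p_val:.3g}")
```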

Eventually, the MRI researchers gained a crucial insight, namely that none of these components can be understood properly in isolation; they must understand the network. Now the field shifts, and they begin to look at interactions among regions. Before long, a series of high-profile papers emerges showing that file access does not just involve the disks. It involves a network of regions including the CPU, the RAM, the disk controller, and the disk. They know this because when they experimentally increase file access, all of these regions show correlated increases in activity. Next, they find that the CPU is a kind of hub region, because its activity at various times correlates with activity in other regions, such as the display adapter, the disk controller, the RAM, and the USB ports, depending on what task they require the computer to perform.

Next, one of the MRI researchers has the further insight to study the computer while it is idle. He finds that there is a network involving the CPU, the memory, and the hard disk, as (unbeknownst to him) the idle computer occasionally swaps virtual memory on and off of the disk and monitors its internal temperature. This resting network is slightly different across different computers in a way that correlates with their processor speed, memory capacity, etc., and thus it is possible to predict various capacities and properties of a given computer by measuring its activity pattern when idle. Another flurry of publications results. In this way, the neuroscientists continue to refine their understanding of the network interactions among parts of the computer. They can in fact use these developments to diagnose computer problems. After studying 25 normal computers and comparing them against 25 computers with broken disk controllers, they find that the connectivity between the CPU and the disk controller is reduced in those with broken disk controllers. This allows them to use MRI to diagnose other computers with broken disk controllers. They conclude that the disk controller plays a key role in mediating disk access, and this is confirmed with a statistical mediation analysis. Someone even develops the technique of Directional Trunk Imaging (DTI) to characterize the structure of the ribbon cables (fiber tract) from the disk controller to the hard disk, and the results match the functional correlations between the hard disk and disk controller. But for all this, have they really understood how the computer works?
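
The network analyses boil down to correlating time series from different regions. A minimal sketch with toy signals, in which disk activity drives both the CPU and the controller:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500  # number of time points

# toy activity traces for four "regions"
disk = rng.standard_normal(T)
cpu = 0.7 * disk + 0.3 * rng.standard_normal(T)         # driven by disk
controller = 0.8 * disk + 0.2 * rng.standard_normal(T)  # driven by disk
display = rng.standard_normal(T)                        # unrelated

regions = np.vstack([cpu, controller, disk, display])
connectivity = np.corrcoef(regions)   # 4x4 correlation "network"
print(np.round(connectivity, 2))
```

A group comparison would then test whether entries of this matrix differ between the 25 normal and 25 broken-controller machines.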

The neurophysiologist spoke up. “Listen here,” he said. “You have found the larger patterns, but you don’t know what the individual circuits are doing.” He then probes individual circuit points within the computer, measuring the time course of the voltage. After meticulously advancing a very fine electrode in 10 micron increments through the hard material (dura mater) covering the CPU, he finds a voltage. The particular region shows brief “bursts” of positive voltage when the CPU is carrying out math operations. As this is the math co-processor unit (unbeknownst to the neurophysiologist), the particular circuit path is only active when a certain bit of a floating point representation is active. With careful observation, the neurophysiologist identifies this “cell” as responding stochastically when certain numbers are presented for computation. The cell therefore has a relatively broad but weak receptive field for certain numbers. Similar investigations of nearby regions of the CPU yield similar results, while antidromic stimulation reveals inputs from related number-representing regions. In the end, the neurophysiologist concludes that the cells in this particular CPU region have receptive fields that respond to different kinds of numbers, so this must be a number representation area.
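
The neurophysiologist’s conclusion rests on estimating a tuning curve: average the stochastic responses as a function of the stimulus. A sketch with a hypothetical receptive field:

```python
import numpy as np

rng = np.random.default_rng(2)
stimuli = np.arange(10)   # the "numbers" presented for computation

def response_prob(s, preferred=6, width=2.0):
    """Hypothetical broad, weak receptive field centered on a preferred number."""
    return 0.1 + 0.4 * np.exp(-0.5 * ((s - preferred) / width) ** 2)

# present each stimulus many times and record stochastic binary responses
trials = 200
for s in stimuli:
    responses = rng.random(trials) < response_prob(s)
    print(f"stimulus {s}: response rate {responses.mean():.2f}")
```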

Finally the neuropsychologist comes along. She argues (quite reasonably) that despite all of these findings of network interactions and voltage signals, we cannot infer that a given region is necessary without lesion studies. The neuropsychologist then gathers a hundred computers that have had hammer blows to various parts of the motherboard, extension cards, and disks. After testing their abilities extensively, she carefully selects just the few that have a specific problem with the video output. She finds that among computers that don’t display video properly, there is an overlapping area of damage to the video card. This means of course that the video card is necessary for proper video monitor functioning. Other similar discoveries follow regarding the hard disks and the USB ports, and now we have a map of which regions are necessary for various functions. But for all of this, have the neuroscientists really understood how the computer works?
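
The lesion logic is an overlap analysis: intersect the damage maps of the machines that share a deficit. A toy version (the board layout and damage rates are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_machines, n_sites = 100, 64                 # 64 hypothetical board "sites"

damage = rng.random((n_machines, n_sites)) < 0.1   # random hammer damage
video_card = slice(40, 44)                         # sites forming the "video card"

# a machine loses video if any video-card site is damaged
video_broken = damage[:, video_card].any(axis=1)

# overlap map: how often each site is damaged among machines with no video
overlap = damage[video_broken].mean(axis=0)
print("most implicated sites:", np.argsort(overlap)[-4:])
```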

**The Moral**

As the above tale illustrates, despite all of our current sophisticated methods, we in neuroscience are still in a kind of early stage of scientific endeavor; we continue to discover many effects but lack a proportionally strong standard model for understanding how they all derive from mechanistic principles. There are nonetheless many individual mathematical and computational neural models. The Hodgkin-Huxley equations (Hodgkin and Huxley, 1952), Integrate-and-fire model (Izhikevich, 2003), Genesis (Bower and Beeman, 1994), SPAUN (Eliasmith et al., 2012), and Blue Brain project (Markram, 2006) are only a few examples of the models, modeling toolkits, and frameworks available, besides many others more focused on particular phenomena. Still, there are many different kinds of neuroscience models, and even many different frameworks for modeling. This means that there is no one theoretical lingua franca against which to evaluate empirical results, or to generate new predictions. Instead, there is a patchwork of models that treat some phenomena, and large gaps where there are no models relevant to existing phenomena. The moral of the story is not that the brain is a computer. The moral of the story is twofold: first, that we sorely need a foundational mechanistic, computational framework to understand how the elements of the brain work together to form functional units and ultimately generate the complex cognitive behaviors we study. Second, it is not enough for models to exist—their premises and implications must be understood by those on the front lines of empirical research.
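
To make “mechanistic model” concrete, here is a minimal leaky integrate-and-fire neuron, the simplest of the model families named above; the parameters are generic textbook-style values chosen for illustration, not fits to any data:

```python
# leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R * I
tau, v_rest, v_thresh, v_reset, r_m = 20.0, -65.0, -50.0, -70.0, 10.0  # ms, mV, MOhm
dt, duration = 0.1, 200.0    # time step and total time (ms)
current = 1.8                # constant injected current (nA)

v = v_rest
spike_times = []
for step in range(int(duration / dt)):
    v += dt / tau * (-(v - v_rest) + r_m * current)  # Euler step of membrane equation
    if v >= v_thresh:            # threshold crossing produces a spike
        spike_times.append(step * dt)
        v = v_reset              # reset the membrane potential
print(f"{len(spike_times)} spikes in {duration:.0f} ms")
```

Unlike a statistical map, this model can be wrong in a testable way: change a parameter and it predicts a different firing rate.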

**The Path Forward**

A more unified model shared by the community is not out of reach for neuroscience. Such models exist in physics (e.g., the standard model), engineering (e.g., circuit theory), and chemistry. To move forward, we need to place a similar level of value on theoretical neuroscience as, for example, the field of physics places on theoretical physics. We need to train neuroscientists and psychologists early in their careers in not just statistics, but also in mathematical and computational modeling, as well as dynamical systems theory and even engineering. Computational theories exist (Marr, 1982), and empirical neuroscience is advancing, but we need to develop the relationships between them. This is not to say that all neuroscientists should spend their time building computational models. Rather, every neuroscientist should at least possess literacy in modeling as no less important than, for example, anatomy. Our graduate programs generally need improvement on this front. For faculty, if one is in a soft-money position or on the tenure clock and cannot afford the time to learn or develop theories, then why not collaborate with someone who can? If we really care about the question of how the brain works, we must not delude ourselves into thinking that simply collecting more empirical results will automatically tell us how the brain works, any more than measuring the heat coming from computer parts will tell us how the computer works. Instead, our experiments should address the questions of what mechanisms might account for an effect, and how to test and falsify specific mechanistic hypotheses (Platt, 1964).

The Ebola response

The real failure of the Ebola response is not that a physician went bowling after returning from West Africa but that there are not more doctors over there containing the epidemic where it is needed. Infected patients do not shed virus particles until they become symptomatic, and even then the virus is transmitted only through bodily fluids. The New York physician monitored his temperature daily and reported immediately to a designated Ebola hospital the moment he detected a high fever. We should not be scapegoating physicians who are trying to make a real difference in containing this outbreak and thereby protecting the rest of the world. This outbreak was identified in the spring of 2014, but there was no international response until late summer. We know how to contain Ebola: identify patients and isolate them. That is what we should be doing instead of making emotional and unhelpful policy decisions.
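
To see why identify-and-isolate works, consider a toy SIR-style epidemic model in which infectious individuals are found and isolated at some rate, after which they no longer transmit; the rates below are purely illustrative, not fitted Ebola parameters:

```python
def peak_infected(beta=0.3, gamma=0.1, q=0.0, dt=0.1, days=365):
    """Euler-integrate a toy SIR model; isolation at rate q removes infectious cases."""
    s, i = 0.999, 0.001       # susceptible and infectious fractions
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i
        s -= dt * new_infections
        i += dt * (new_infections - (gamma + q) * i)
        peak = max(peak, i)
    return peak

print(f"no isolation:   peak infectious fraction = {peak_infected(q=0.0):.3f}")
print(f"with isolation: peak infectious fraction = {peak_infected(q=0.2):.4f}")
```

With these numbers, isolation cuts the effective reproduction number from 3 to 1 and the outbreak never takes off, which is the whole argument for putting resources into case identification rather than into punishing returning doctors.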