In many of my research projects, I spend a nontrivial amount of my time wondering if I am reinventing the wheel. I try to make sure that what I’m trying to do hasn’t already been done, but this is not always simple because a solution from another field may be hidden in an unfamiliar form, couched in different jargon and concepts. Hence, I think that there is a huge opportunity out there for scientific arbitrage, where people look for open problems that can be easily solved by adapting solutions from other areas. One could argue that my own research program is a form of arbitrage, since I use methods of applied mathematics and theoretical physics to tackle problems in biology. However, in my work the problem generally comes first and then I look for the best tool to use, rather than specifically seeking out problems that are open to arbitrage.
I’m certain that some fields will be more amenable to arbitrage than others. My guess is that very vertical fields like pure mathematics and theoretical physics will be less susceptible, because many people have thought about the same problems and have tried all of the available techniques. Breakthroughs in these fields generally require novel ideas that build upon previous ones, as in the recent proofs of the Poincaré conjecture and the Kepler sphere-packing problem. In the language of economics, these fields are efficient. Ironically, economics itself may be a field that is not as efficient and may be open to arbitrage, since many of its standard models, such as supply and demand, are based on reaching an equilibrium. It seems like a dose of dynamical systems may be in order.
I think a nice example of scientific arbitrage is the Hopfield network. John Hopfield was already a very famous physicist when he published his two famous papers on the topic in 1982 and 1984. People had been studying neural networks based on the McCulloch-Pitts neuron for several decades before Hopfield came along and changed the field. These networks consist of two-state neurons that can be either “on” or “off”. Each neuron is connected to the other neurons with some pair-dependent weight. A neuron turns on when the weighted sum of the active neurons connected to it exceeds a threshold, and turns off otherwise. The goal was to figure out how to make such networks perform functions such as pattern recognition, input classification, and associative memory.
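In code, that update rule is just a thresholded weighted sum. Here is a minimal sketch in Python with 0/1 neurons; the function names, the asynchronous sweep, and the numpy representation are my illustrative choices, not anything prescribed by the original papers:

```python
import numpy as np

def update_neuron(i, state, W, theta):
    # Neuron i switches on (1) if the weighted sum of the active
    # neurons feeding into it exceeds its threshold; off (0) otherwise.
    return 1 if W[i] @ state > theta[i] else 0

def sweep(state, W, theta, rng):
    # One asynchronous pass: update neurons one at a time in random
    # order, each update seeing the latest network state.
    state = state.copy()
    for i in rng.permutation(len(state)):
        state[i] = update_neuron(i, state, W, theta)
    return state
```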
Hopfield’s brilliant move was to recognize that a neural network looks very much like the Ising model of statistical mechanics. The Ising model is a simplified model of a magnetic system in which two-state spins (up or down) interact. The energy of the system is lower when the spins are aligned than when they are opposite. The global ground state of the system can then be found by finding the state of minimal energy. If the connections between the spins are symmetric, then the energy is bounded below and there is always a ground state (i.e. a global attractor). Hopfield realized that the neurons in a neural network were like spins in an Ising model, so a network with symmetric connections also had a bounded energy, or Lyapunov function as it is known in dynamical systems theory. The Hopfield network is then guaranteed to have stable attractors, and all initial conditions will flow to them. This is why it acts like a content-addressable or associative memory. Any initial state that is near an attractor (i.e. an incomplete or partial memory) will flow to the memory state, e.g. smelling apple pie makes you remember Grandma. Hopfield’s arbitrage revolutionized neuroscience and led to an influx of ideas and people from physics.
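To make the arbitrage concrete, here is a minimal sketch of a Hopfield network in Python, using the ±1 spin convention and the standard Hebbian (outer-product) storage rule; the network size, the three random patterns, and the 20% corruption level are illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of neurons/spins

# Store a few random +/-1 patterns with the Hebbian (outer-product) rule.
patterns = rng.choice([-1, 1], size=(3, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)  # symmetric weights, no self-coupling

def energy(s):
    # Ising-style energy; with symmetric W it never increases under
    # asynchronous threshold updates, i.e. it is a Lyapunov function.
    return -0.5 * s @ W @ s

def recall(s, sweeps=10):
    # Asynchronous updates flow a partial memory toward an attractor.
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 20% of a stored pattern, then let the network complete it.
cue = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
cue[flip] *= -1
out = recall(cue)
print("energy before/after:", energy(cue), energy(out))
print("overlap with stored pattern:", (out @ patterns[0]) / N)
```

Because the weights are symmetric, each asynchronous flip can only lower (or preserve) the energy, so the updates must terminate at a local minimum; that is why the corrupted cue flows back to the stored pattern.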