Are we in a fusion renaissance?

Fusion is a potentially unlimited source of non-carbon-emitting energy. It requires mashing together small nuclei, such as deuterium and tritium, to make another nucleus and a lot of leftover energy. The problem is that nuclei do not want to be mashed together, so to achieve fusion you need something that confines high-energy nuclei for a long enough time. To date, only two methods have successfully demonstrated fusion: 1) gravitational confinement, as in the center of a star, and 2) inertial confinement, as in a nuclear bomb. To get nuclei to energies high enough to overcome the barrier to a fusion reaction, electrons can no longer remain bound to nuclei as atoms. A quasi-neutral gas of hot nuclei and electrons is called a plasma and has often been dubbed the fourth state of matter. Hence, the physics of fusion is mostly the physics of plasmas.

My PhD work was in plasma physics, and although my thesis ultimately dealt with chaos in nonlinear partial differential equations, my early projects were tangentially related to fusion. At that time there were two approaches to attaining fusion: one was to attempt controlled inertial confinement by using massive lasers to implode a tiny pellet of fuel, and the other was to use magnetic confinement in a tokamak reactor. Government-sponsored research has focused almost exclusively on these two approaches for the past forty years. There is a huge laser fusion lab at Livermore and an even bigger global project for magnetic confinement fusion in Cadarache, France, called ITER. As of today, neither has proven that it will ever be a viable source of energy, although there is evidence of break-even, where the reactor produces more energy than is put in.

However, these approaches may not ultimately be viable, and there really has not been much research funding to pursue alternative strategies. This recent New York Times article reports on a set of privately funded efforts to achieve fusion, backed by some big names in technology including Paul Allen, Jeff Bezos, and Peter Thiel. Although there is well-deserved skepticism about the prospects of these companies (I'm sure my thesis advisor Abe Bers would have had some insightful things to say about them), the time may be ripe for new approaches. In an impressive talk I heard many years ago, roboticist Rodney Brooks remarked that Moore's Law has finally made robotics widely available because you can use software to compensate for hardware. Instead of requiring cost-prohibitive high-precision motors, you can use cheap ones and control them with software. The hybrid car is only possible because of the software that decides when to use the electric motor and when to use the gas engine. The same idea may also apply to fusion. Fusion is so difficult because plasmas are inherently unstable. Most of the past effort has gone into designing physical systems to contain them. However, I can now imagine using software instead.

Finally, government attempts have mostly focused on the deuterium-tritium fusion reaction because it has the highest yield. The problem with this reaction is that it produces a neutron, which then damages the reactor. However, there are reactions that do not produce neutrons (see here). Abe used to joke that we could mine the moon for helium-3 to use in a deuterium-helium-3 reactor. So, although we may never have viable fusion on earth, it could be a source of energy on Elon Musk's moon base, though solar would probably be a lot cheaper.

Proving I’m me

I have an extremely difficult time remembering the answers to my security questions for restoring forgotten passwords. I don't have an invariant favourite movie, or book, or colour. I have many best friends from childhood, and they have various permutations of their names. Did I use their first name, nickname, or full name? Even my mother's maiden name can be problematic because there are various ways to transliterate Chinese names and I don't always remember which I used. The city where I met my wife is ambiguous. Did I use the specific town per se or the major city the town is next to? Did I include the model of my first car or just the make? Before I can work my way through the various permutations, I'm usually locked out of my account forever.

As much as I appreciate and rely on computers, software, and the internet, objectively they all still suck. My iPhone is perhaps better than the alternatives, but it sucks. My laptop sucks. Apple makes awful products. Google, Amazon, Uber, and the rest are not so great either. I've lost track of all the times Google Maps has steered me wrong. The tech landscape may be saturated, but there is definitely room for something better.

Commentary on the Blue Brain Project

Definitely read Christof Koch and Michael Buice's commentary on the Blue Brain Project paper in Cell. They nicely summarize all the important points of the paper and propose a Turing Test for models: the performance of a model can be assessed by how long it would take an experimenter to figure out whether the data from proposed neurophysiological experiments were coming from the model or the real thing. I think this is a nice idea, but there is one big difference between a Turing Test for artificial intelligence and one for brain simulations: everyone has an innate sense of what it means to be human, but no one knows what a real brain should be doing. In that sense, it is not really a Turing Test per se but rather the replication of experiments in a more systematic way than is done now. You do an experiment on a real brain, then repeat it on the model, and see whether the results are comparable.

Big blue brain

Appearing in this week’s edition of Cell is a paper summarizing the current status of Henry Markram’s Blue Brain Project. You can download the paper for free until Oct 22 here. The paper reports on a statistically accurate morphological and electrophysiological reconstruction of a rat somatosensory cortex. I think it is a pretty impressive piece of work. They first did a survey of cortex (14 thousand recorded and labeled neurons) to get probability distributions for various types of neurons and their connectivities. The neurons are classified according to their morphology (55 m-types), electrophysiology (11 e-types), and synaptic dynamics (6 s-types). The neurons are connected according to an algorithm, outlined in a companion paper in Frontiers in Computational Neuroscience, that reproduces the measured connectivity distribution. They then created a massive computer simulation of the reconstructed circuit and showed that it has interesting dynamics and can reproduce some experimentally observed behaviour.
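
To make the reconstruction recipe concrete, here is a minimal sketch of the general idea in Python: sample neuron types from measured proportions and wire them up with a distance-dependent connection rule. The type names, fractions, and connection rule below are made-up placeholders for illustration, not the actual Blue Brain algorithm, which is detailed in the Frontiers companion paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholder types and fractions; the real reconstruction uses
# 55 m-types, 11 e-types, and 6 s-types estimated from ~14,000 labeled neurons.
m_types = ["L5_TTPC", "L4_SS", "L23_MC"]
m_fractions = [0.5, 0.3, 0.2]                    # assumed, for illustration only

n_neurons = 1000
types = rng.choice(m_types, size=n_neurons, p=m_fractions)
positions = rng.uniform(0.0, 500.0, size=(n_neurons, 3))   # microns, assumed volume

# Assumed distance-dependent connection rule (a stand-in for the published
# connectivity algorithm): connection probability decays with distance.
def connect_prob(d_um, p0=0.1, scale=150.0):
    return p0 * np.exp(-d_um / scale)

edges = []
for i in range(n_neurons):
    d = np.linalg.norm(positions - positions[i], axis=1)
    p = connect_prob(d)
    p[i] = 0.0                                   # no self-connections
    targets = np.nonzero(rng.random(n_neurons) < p)[0]
    edges.extend((i, int(j)) for j in targets)

counts = dict(zip(*np.unique(types, return_counts=True)))
print(f"sampled type counts: {counts}")
print(f"{len(edges)} directed connections among {n_neurons} neurons")
```

In the actual project, each sampled neuron also gets a detailed morphology and a calibrated electrical model before the circuit is simulated.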

Although much of the computational neuroscience community has not really rallied behind Markram’s mission, I’m actually more sanguine about it now. Whether the next project, to do the same for the human brain, is worth a billion dollars, especially if this is a zero-sum game, is another question. However, it is definitely a worthwhile pursuit to systematically catalogue and assess what we know now. Although IBM’s Watson did not really invent any new algorithms per se, it clearly changed how we perceive machine learning by showing what can be done if enough resources are put into it. One particularly nice thing the project has done is to provide a complete set of calibrated models for all types of cortical neurons. I will certainly be going to their database to get the equations for spiking neurons in all of my future models. I think one criticism they will face is that their model basically produced what they put in, but to me that is a feature, not a bug. A true complete description of the brain would be a joint probability distribution for everything in the brain. This is impossible to compute in the near future no matter what scale you choose to coarse-grain over. No one really believes that we need all this information, and thus the place to start is to assume that the distribution completely factorizes into a product of independent distributions. We should at least see if this is sufficient, and this work is a step in that direction.
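
In symbols (my shorthand, not notation from the paper), the starting approximation is to replace the full joint distribution over all the microscopic variables x_1, ..., x_N describing the brain with a product of independent marginals:

P(x_1, x_2, ..., x_N) ≈ P_1(x_1) P_2(x_2) ... P_N(x_N)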

However, the one glaring omission in the current rendition of this project is an attempt to incorporate genetic and developmental information. A major constraint on how much information is needed to characterize the brain is how much is contained in the genome. How much of what determines a neuron type and its location is genetically coded, determined by external inputs, or just random? When you see great diversity in something, there are two possible answers: 1) the details matter a lot, or 2) the details do not matter at all. I would want to know the answer to this question before I tried to reproduce the brain.

Probability of gun death

The tragedy in Oregon has reignited the gun debate. Gun control advocates argue that fewer guns mean fewer deaths, while gun supporters argue that if citizens were armed, shooters could be stopped through vigilante action. These arguments can be quantified in a simple model of the probability of gun death, p_d:

p_d = p_g p_u (1 - p_g p_v) + p_g p_a

where p_g is the probability of having a gun, p_u is the probability of being a criminal or mentally unstable enough to become a shooter, p_v is the probability of effective vigilante action, and p_a is the probability of accidental death or suicide. The probability of being killed by a gun is given by the probability of someone having a gun times the probability that they are unstable enough to use it. This is reduced by the probability of a potential victim having a gun times the probability of acting effectively to stop the shooter. Finally, there is also a probability of dying through an accident.
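
The model is simple enough to evaluate directly. Here is a minimal sketch in Python with made-up illustrative parameter values (none of these numbers are estimates from data):

```python
def p_death(p_g, p_u, p_v, p_a):
    """Probability of gun death: p_d = p_g*p_u*(1 - p_g*p_v) + p_g*p_a."""
    return p_g * p_u * (1.0 - p_g * p_v) + p_g * p_a

# Made-up illustrative numbers, not estimates from data.
p_u, p_v, p_a = 0.01, 0.1, 0.001

for p_g in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p_g = {p_g:.2f}  ->  p_d = {p_death(p_g, p_u, p_v, p_a):.5f}")
```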

The first derivative of p_d with respect to p_g is p_u - 2 p_u p_g p_v + p_a, and the second derivative is -2 p_u p_v, which is negative. Thus, p_d is concave in p_g, so its minimum cannot lie in the interior 0 < p_g < 1 and must be at a boundary. Given that p_d = 0 when p_g = 0 and p_d = p_u(1 - p_v) + p_a when p_g = 1, the absolute minimum is found when no one has a gun. Even if vigilante action were 100% effective, there would still be gun deaths due to accidents. Now, some would argue that zero guns is not possible, so we can ask whether it is better to have fewer guns or more guns. p_d is maximal at p_g = (p_u + p_a)/(2 p_u p_v). In the absence of accidents (p_a = 0), this critical point is 1/(2 p_v), which lies inside the interval only if p_v is greater than one half; otherwise p_d increases monotonically with p_g. Thus, unless vigilante action is effective more than half the time, there is no situation where increasing the number of guns makes us safer. The bottom line is that if we want to reduce gun deaths we should either reduce the number of guns or make sure everyone is armed and has military training.
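
The calculus can also be checked numerically. The sweep below, again with made-up parameters, confirms that p_d is minimized at p_g = 0 and that an interior maximum only appears when p_v exceeds one half:

```python
import numpy as np

def p_death(p_g, p_u, p_v, p_a):
    # p_d = p_g*p_u*(1 - p_g*p_v) + p_g*p_a
    return p_g * p_u * (1.0 - p_g * p_v) + p_g * p_a

p_u, p_a = 0.01, 0.0          # no accidents, to isolate the vigilante effect
p_g = np.linspace(0.0, 1.0, 1001)

for p_v in (0.3, 0.5, 0.9):   # vigilante effectiveness, made-up values
    p_d = p_death(p_g, p_u, p_v, p_a)
    g_min = p_g[np.argmin(p_d)]
    g_max = p_g[np.argmax(p_d)]
    critical_point = (p_u + p_a) / (2.0 * p_u * p_v)
    print(f"p_v = {p_v}: min at p_g = {g_min:.2f}, max at p_g = {g_max:.2f} "
          f"(critical point at p_g = {critical_point:.2f})")
```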