Are we in a fusion renaissance?

Fusion is a potentially unlimited source of carbon-free energy. It requires mashing together small nuclei, such as deuterium and tritium, to make another nucleus and a lot of leftover energy. The problem is that nuclei do not want to be mashed together, so to achieve fusion you need something to confine high-energy nuclei for a long enough time. Currently, only two methods have successfully demonstrated fusion: 1) gravitational confinement, as in the center of a star, and 2) inertial confinement, as in a nuclear bomb. In order to get nuclei to high enough energy to overcome the Coulomb barrier to a fusion reaction, electrons can no longer remain bound to nuclei to form atoms. A quasi-neutral gas of hot nuclei and electrons is called a plasma and has often been dubbed the fourth state of matter. Hence, the physics of fusion is mostly the physics of plasmas.

My PhD work was in plasma physics, and although my thesis ultimately dealt with chaos in nonlinear partial differential equations, my early projects were tangentially related to fusion. At that time there were two approaches to attaining fusion: one was controlled inertial confinement, using massive lasers to implode a tiny pellet of fuel, and the second was magnetic confinement in a tokamak reactor. Government-sponsored research has focused almost exclusively on these two approaches for the past forty years. There is a huge laser fusion lab at Livermore and an even bigger global project for magnetic confinement fusion in Cadarache, France, called ITER. As of today, neither has proven that it will ever be a viable source of energy, although there is evidence of breakeven, where the reactor produces more energy than is put in.

However, these approaches may not ultimately be viable, and there really has not been much research funding to pursue alternative strategies. This recent New York Times article reports on a set of privately funded efforts to achieve fusion backed by some big names in technology, including Paul Allen, Jeff Bezos, and Peter Thiel. Although there is well-deserved skepticism about these companies' chances of success (I'm sure my thesis advisor Abe Bers would have had some insightful things to say about them), the time may be ripe for new approaches. In an impressive talk I heard many years ago, roboticist Rodney Brooks remarked that Moore's Law has finally made robotics widely available because you can use software to compensate for hardware. Instead of requiring cost-prohibitive high-precision motors, you can use cheap ones and control them with software. The hybrid car is only possible because of the software that decides when to use the electric motor and when to use the gas engine. The same idea may also apply to fusion. Fusion is so difficult because plasmas are inherently unstable. Most of the past effort has gone toward designing physical systems to contain them. However, I can now imagine using software instead.

Finally, government attempts have mostly focused on the deuterium-tritium fusion reaction because it has the highest yield. The problem with this reaction is that it produces a neutron, which then damages the reactor. However, there are reactions that do not produce neutrons (see here). Abe used to joke that we could mine the moon for helium-3 to use in a deuterium-helium-3 reactor. So, although we may never have viable fusion on earth, it could be a source of energy on Elon Musk's moon base, although solar would probably be a lot cheaper.

Abraham Bers, 1930 – 2015

I was saddened to hear that my PhD thesis advisor at MIT, Professor Abraham Bers, passed away last week at the age of 85. Abe was a fantastic physicist and mentor. He will be dearly missed by his many students. I showed up at MIT in the fall of 1986 with the intent of doing experimental particle physics. I took Abe’s plasma physics course as a breadth requirement for my degree. When I began, I didn’t know what a plasma was but by the end of the term I had joined his group. Abe was one of the best teachers I have ever had. His lectures exemplified his extremely clear and insightful mind. I still consult the notes from his classes from time to time.

Abe also had a great skill in finding the right problem for students. I struggled to get started doing research, but one day Abe came to my desk with an old Russian book and showed me a figure. He said that it didn't make sense according to the current theory and asked me to see if I could understand it. Somehow, this lit a spark in me, and pursuing that little puzzle resulted in my first three papers. However, Abe also realized, even before I did, I think, that I actually liked applied math better than physics. Thus, after I finished these papers and built some command of the field, he suggested that I completely switch my focus to nonlinear dynamics and chaos, which was very hot at the time. This turned out to be the perfect thing for me, and it also made me realize that I could always change fields. I have never been afraid of going outside my comfort zone since. I am always thankful for the excellent training I received at MIT.

The most eventful experience of those days was our weekly group meetings. These were famous no-holds-barred affairs where the job of the audience was to try to tear down everything the presenter said. I would prepare for a week when it was my turn. I couldn't even get through the first slide my first time, but by the time I graduated, nothing could faze me. Although the arguments could get quite heated at times, Abe never lost his cool. He would also come to my office after a particularly bad presentation to cheer me up. I never have any stress when giving talks or speaking in public now because I know that there could never be a sharper or tougher audience than Abe.

To me, Abe will always represent the gentleman scholar I have always aspired to be. He was always impeccably dressed with his tweed jacket, Burberry trench coat, and trademark bow tie. Well before good coffee became de rigueur in the US, Abe was a connoisseur and kept his coffee in a freezer in his office. He led a balanced life. He took work very seriously but also made sure to have time for his family and other pursuits. I visited him at MIT a few years ago and he was just as excited about what he was doing then as he was when I was a graduate student. Although he is gone, he will not be forgotten. The book he had been working on, Plasma Waves and Fusion, will be published this fall. I will be sure to get a copy as soon as it comes out.

2015-9-16: Here is a link to his MIT obituary.

Hopfield on the difference between physics and biology

Here is a short essay by theoretical physicist John Hopfield of Hopfield net and kinetic proofreading fame, among many other things (hat tip to Steve Hsu). I think much of the hostility of biologists towards physicists and mathematicians that Hopfield talks about has dissipated over the past 40 years, especially amongst the younger set. In fact, these days a good share of Cell, Science, and Nature papers have some computational or mathematical component. However, the trend is towards brute-force big-data analysis rather than the simple, elegant conceptual advances that Hopfield was famous for. In the essay, Hopfield gives several anecdotes and summarizes them with pithy words of advice. The one that everyone should really heed, and one I always try to follow, is "Do your best to make falsifiable predictions. They are the distinction between physics and 'Just So Stories.'"

New paper on path integrals

Carson C. Chow and Michael A. Buice. Path Integral Methods for Stochastic Differential Equations. The Journal of Mathematical Neuroscience, 5:8, 2015.

Abstract: Stochastic differential equations (SDEs) have multiple applications in mathematical neuroscience and are notoriously difficult. Here, we give a self-contained pedagogical review of perturbative field theoretic and path integral methods to calculate moments of the probability density function of SDEs. The methods can be extended to high dimensional systems such as networks of coupled neurons and even deterministic systems with quenched disorder.

This paper is a modified version of our arXiv paper of the same title.  We added an example of the stochastically forced FitzHugh-Nagumo equation and fixed the typos.

Talk at Jackfest

I’m currently in Banff, Alberta for a Festschrift for Jack Cowan (webpage here). Jack is one of the founders of theoretical neuroscience and has infused many important ideas into the field. The Wilson-Cowan equations that he and Hugh Wilson developed in the early seventies form a foundation for both modeling neural systems and machine learning. My talk will summarize my work on deriving “generalized Wilson-Cowan equations” that include both neural activity and correlations. The slides can be found here. References and a summary of the work can be found here. All videos of the talks can be found here.


Addendum: 17:44. Some typos in the talk were fixed.

Addendum: 18:25. I just realized I said something silly in my talk.  The Legendre transform is an involution because the transform of the transform gives back the original function, i.e. the transform is its own inverse. I said something completely inane instead.
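To make the involution concrete, here is a quick numerical sketch (my addition, not from the talk) in Python. It computes the Legendre transform f*(p) = max_x (px - f(x)) on a grid for the convex function f(x) = ax^2/2 (a hypothetical example), whose transform is p^2/(2a), and checks that transforming twice recovers f:

```python
# Numerical check that the Legendre transform is an involution:
# applying it twice to a convex function returns the original function.

def legendre(g, grid):
    """Discrete Legendre transform: g*(p) = max_x (p*x - g(x)) over a grid."""
    return lambda p: max(p * x - g(x) for x in grid)

xs = [i / 40 for i in range(-200, 201)]  # grid on [-5, 5], step 0.025

a = 3.0
f = lambda x: 0.5 * a * x * x       # f(x) = a x^2 / 2
fstar = legendre(f, xs)             # f*(p) = p^2 / (2a), up to grid error
fstarstar = legendre(fstar, xs)     # involution: f** = f

# fstar(1.0) is close to 1/6; fstarstar(1.0) is close to f(1.0) = 1.5
```

The grid is finite, so the identity f** = f only holds up to discretization error and only where the maximizing point lies inside the grid, but the involution is clearly visible.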

Analytic continuation continued

As I promised in my previous post, here is a derivation of the analytic continuation of the Riemann zeta function to negative integer values. There are several ways of doing this but a particularly simple way is given by Graham Everest, Christian Rottger, and Tom Ward at this link. It starts with the observation that you can write

\int_1^\infty x^{-s} dx = \frac{1}{s-1}

if the real part of s is greater than 1. You can then break the integral into pieces with

\frac{1}{s-1}=\int_1^\infty x^{-s} dx =\sum_{n=1}^\infty\int_n^{n+1} x^{-s} dx

=\sum_{n=1}^\infty \int_0^1(n+x)^{-s} dx=\sum_{n=1}^\infty\int_0^1 \frac{1}{n^s}\left(1+\frac{x}{n}\right)^{-s} dx      (1)

For x\in [0,1], you can expand the integrand in a binomial expansion

\left(1+\frac{x}{n}\right)^{-s} = 1 -\frac{sx}{n}+sO\left(\frac{1}{n^2}\right)   (2)

Now substitute (2) into (1) to obtain

\frac{1}{s-1}=\zeta(s) -\frac{s}{2}\zeta(s+1) - sR(s)  (3)

which can be rearranged to

\zeta(s) =\frac{1}{s-1}+\frac{s}{2}\zeta(s+1) +sR(s)   (3′)

where the remainder R is an analytic function for Re s > -1 because the resulting series is absolutely convergent. Since the series defining the zeta function is analytic for Re s >1, the right-hand side of (3′) is a new definition of \zeta that is analytic for Re s >0, aside from a simple pole at s=1. Now multiply (3) by s-1 and take the limit as s\rightarrow 1 to obtain

\lim_{s\rightarrow 1} (s-1)\zeta(s)=1

which implies that

\lim_{s\rightarrow 0} s\zeta(s+1)=1     (4)

Taking the limit of (3′) as s goes to zero from the right and using (4) gives

\zeta(0) = -1 + \frac{1}{2} + 0 = -\frac{1}{2}

Hence, the analytic continuation of the zeta function to zero is -1/2.
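As a sanity check (my addition, not part of the original derivation), one can evaluate the right-hand side of (3′) numerically at small positive s, dropping the sR(s) term, which vanishes in the limit. The zeta function for s > 1 is approximated by a partial sum plus an integral tail:

```python
# Numerical check that (3') gives zeta(0) = -1/2.

def zeta_series(s, N=100000):
    """Approximate zeta(s) for real s > 1: partial sum plus tail N^(1-s)/(s-1)."""
    return sum(n ** (-s) for n in range(1, N)) + N ** (1 - s) / (s - 1)

# Right-hand side of (3') at s = eps, dropping the s*R(s) term, which is O(eps)
for eps in (1e-2, 1e-3, 1e-4):
    rhs = 1 / (eps - 1) + (eps / 2) * zeta_series(1 + eps)
    print(eps, rhs)  # approaches -0.5 as eps -> 0
```

The (eps/2) zeta(1 + eps) term supplies the 1/2, exactly as (4) predicts.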

The analytic domain of \zeta can be pushed further into the left hand plane by extending the binomial expansion in (2) to

\left(1+\frac{x}{n}\right)^{-s} = \sum_{r=0}^{k+1} \left(\begin{array}{c} -s\\r\end{array}\right)\left(\frac{x}{n}\right)^r + (s+k)O\left(\frac{1}{n^{k+2}}\right)

 Inserting into (1) yields

\frac{1}{s-1}=\zeta(s)+\sum_{r=1}^{k+1} \left(\begin{array}{c} -s\\r\end{array}\right)\frac{1}{r+1}\zeta(r+s) + (s+k)R_{k+1}(s)

where R_{k+1}(s) is analytic for Re s>-(k+1).  Now let s\rightarrow -k^+ and use (4) to evaluate the last (r=k+1) term of the sum, obtaining

\frac{1}{-k-1}=\zeta(-k)+\sum_{r=1}^{k} \left(\begin{array}{c} k\\r\end{array}\right)\frac{1}{r+1}\zeta(r-k) - \frac{1}{(k+1)(k+2)}    (5)

Rearranging (5) gives

\zeta(-k)=-\sum_{r=1}^{k} \left(\begin{array}{c} k\\r\end{array}\right)\frac{1}{r+1}\zeta(r-k) -\frac{1}{k+2}     (6)

where I have used

\left( \begin{array}{c} -s\\r\end{array}\right) = (-1)^r \left(\begin{array}{c} s+r -1\\r\end{array}\right)

The right-hand side of (6) only involves \zeta evaluated at arguments with real part greater than -k, which have already been defined, so (6) extends \zeta to s=-k.  Rewrite (6) as

\zeta(-k)=-\sum_{r=1}^{k} \frac{k!}{r!(k-r)!} \frac{\zeta(r-k)(k-r+1)}{(r+1)(k-r+1)}-\frac{1}{k+2}

=-\sum_{r=1}^{k} \left(\begin{array}{c} k+2\\ k-r+1\end{array}\right) \frac{\zeta(r-k)(k-r+1)}{(k+1)(k+2)}-\frac{1}{k+2}

=-\sum_{r=1}^{k-1} \left(\begin{array}{c} k+2\\ k-r+1\end{array}\right) \frac{\zeta(r-k)(k-r+1)}{(k+1)(k+2)}-\frac{1}{k+2} - \frac{\zeta(0)}{k+1}

Collecting terms, substituting for \zeta(0) and multiplying by (k+1)(k+2)  gives

(k+1)(k+2)\zeta(-k)=-\sum_{r=1}^{k-1} \left(\begin{array}{c} k+2\\ k-r+1\end{array}\right) \zeta(r-k)(k-r+1) - \frac{k}{2}

Reindexing gives

(k+1)(k+2)\zeta(-k)=-\sum_{r'=2}^{k} \left(\begin{array}{c} k+2\\ r'\end{array}\right) \zeta(-r'+1)r'-\frac{k}{2}

Now, note that the Bernoulli numbers satisfy the condition \sum_{r=0}^{N-1} \left(\begin{array}{c} N\\ r\end{array}\right) B_r = 0.  Hence, substitute \zeta(-r'+1)=-\frac{B_{r'}}{r'}

and obtain

(k+1)(k+2)\zeta(-k)=\sum_{r'=0}^{k+1} \left(\begin{array}{c} k+2\\ r'\end{array}\right) B_{r'}-B_0-(k+2)B_1-(k+2)B_{k+1}-\frac{k}{2}

which, using B_0=1, B_1=-1/2, and the Bernoulli identity with N=k+2 to eliminate the sum, gives the self-consistent condition

\zeta(-k)=-\frac{B_{k+1}}{k+1}

which is the analytic continuation of the zeta function for integers k\ge 1.
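To double-check the result, here is a short verification (my addition) in Python's exact rational arithmetic: it generates Bernoulli numbers from the identity used above, computes \zeta(-k) by recursing on equation (6) from the base case \zeta(0)=-1/2, and confirms the two agree:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0..B_n from the recursion sum_{r=0}^{N-1} C(N, r) B_r = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, r) * B[r] for r in range(m)) / (m + 1))
    return B

def zeta_neg(k):
    """zeta(-k) by recursing on equation (6), with zeta(0) = -1/2 as base case."""
    if k == 0:
        return Fraction(-1, 2)
    s = sum(Fraction(comb(k, r), r + 1) * zeta_neg(k - r) for r in range(1, k + 1))
    return -s - Fraction(1, k + 2)

# zeta(-k) = -B_{k+1}/(k+1) for k = 1..10
B = bernoulli(12)
assert all(zeta_neg(k) == -B[k + 1] / Fraction(k + 1) for k in range(1, 11))
print(zeta_neg(1))  # -1/12
```

The recursion reproduces the familiar values \zeta(-1)=-1/12, \zeta(-2)=0, \zeta(-3)=1/120, and so on, exactly as the final formula predicts.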