Archive for the ‘Physics’ Category

Talk at Jackfest

July 16, 2014

I’m currently in Banff, Alberta for a Festschrift for Jack Cowan (webpage here). Jack is one of the founders of theoretical neuroscience and has infused many important ideas into the field. The Wilson-Cowan equations that he and Hugh Wilson developed in the early seventies form a foundation for both modeling neural systems and machine learning. My talk will summarize my work on deriving “generalized Wilson-Cowan equations” that include both neural activity and correlations. The slides can be found here. References and a summary of the work can be found here. All videos of the talks can be found here.

 

Addendum: 17:44. Some typos in the talk were fixed.

Addendum: 18:25. I just realized I said something silly in my talk.  The Legendre transform is an involution because it is its own inverse: the transform of the transform returns the original function. I said something completely inane instead.
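
To make that concrete with a textbook example (not from the talk): define the transform by f^*(p)=\sup_x [px - f(x)]. For f(x)=x^2/2,

f^*(p) = \sup_x \left[px - \frac{x^2}{2}\right] = \frac{p^2}{2}, \qquad f^{**}(x) = \sup_p \left[xp - \frac{p^2}{2}\right] = \frac{x^2}{2} = f(x),

so applying the transform twice returns the original function.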

Analytic continuation continued

March 9, 2014

As I promised in my previous post, here is a derivation of the analytic continuation of the Riemann zeta function to negative integer values. There are several ways of doing this but a particularly simple way is given by Graham Everest, Christian Rottger, and Tom Ward at this link. It starts with the observation that you can write

\int_1^\infty x^{-s} dx = \frac{1}{s-1}

if the real part of s is greater than 1. You can then break the integral into pieces with

\frac{1}{s-1}=\int_1^\infty x^{-s} dx =\sum_{n=1}^\infty\int_n^{n+1} x^{-s} dx

=\sum_{n=1}^\infty \int_0^1(n+x)^{-s} dx=\sum_{n=1}^\infty\int_0^1 \frac{1}{n^s}\left(1+\frac{x}{n}\right)^{-s} dx      (1)

For x\in [0,1], you can expand the integrand in a binomial expansion

\left(1+\frac{x}{n}\right)^{-s} = 1 -\frac{sx}{n}+sO\left(\frac{1}{n^2}\right)   (2)

Now substitute (2) into (1) to obtain

\frac{1}{s-1}=\zeta(s) -\frac{s}{2}\zeta(s+1) - sR(s)  (3)

or

\zeta(s) =\frac{1}{s-1}+\frac{s}{2}\zeta(s+1) +sR(s)   (3′)

where the remainder R is an analytic function for Re s > -1 because the resulting series is absolutely convergent. Since the zeta function is analytic for Re s >1, the right hand side is a new definition of \zeta that is analytic for Re s > 0 aside from a simple pole at s=1. Now multiply (3) by s-1 and take the limit as s\rightarrow 1 to obtain

\lim_{s\rightarrow 1} (s-1)\zeta(s)=1

which implies that

\lim_{s\rightarrow 0} s\zeta(s+1)=1     (4)

Taking the limit of (3′) as s goes to zero from the right, and using (4) for the middle term, gives

\zeta(0^+)=-1+\frac{1}{2}=-\frac{1}{2}

Hence, the analytic continuation of the zeta function to zero is -1/2.
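
This is easy to check numerically.  Here is a minimal Python sketch (my own check, using the mpmath library): evaluate the right hand side of (3′) for small positive s, dropping the sR(s) term since it vanishes as s goes to zero, and watch both it and mpmath’s built-in continuation of \zeta approach -1/2.

from mpmath import mp, zeta

mp.dps = 25  # working precision

def rhs(s):
    # right hand side of (3') without the s*R(s) term, which is O(s)
    return 1/(s - 1) + (s/2)*zeta(s + 1)

for s in [1e-2, 1e-4, 1e-6]:
    print(s, rhs(s), zeta(s))  # both columns tend to -0.5 as s -> 0+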

The analytic domain of \zeta can be pushed further into the left half plane by extending the binomial expansion in (2) to

\left(1+\frac{x}{n}\right)^{-s} = \sum_{r=0}^{k+1} \left(\begin{array}{c} -s\\r\end{array}\right)\left(\frac{x}{n}\right)^r + (s+k)O\left(\frac{1}{n^{k+2}}\right)

 Inserting into (1) yields

\frac{1}{s-1}=\zeta(s)+\sum_{r=1}^{k+1} \left(\begin{array}{c} -s\\r\end{array}\right)\frac{1}{r+1}\zeta(r+s) + (s+k)R_{k+1}(s)

where R_{k+1}(s) is analytic for Re s>-(k+1).  Now let s\rightarrow -k^+ and evaluate the last term of the sum (the r=k+1 term) using the limit (4) to obtain

\frac{1}{-k-1}=\zeta(-k)+\sum_{r=1}^{k} \left(\begin{array}{c} k\\r\end{array}\right)\frac{1}{r+1}\zeta(r-k) - \frac{1}{(k+1)(k+2)}    (5)

Rearranging (5) gives

\zeta(-k)=-\sum_{r=1}^{k} \left(\begin{array}{c} k\\r\end{array}\right)\frac{1}{r+1}\zeta(r-k) -\frac{1}{k+2}     (6)

where I have used

\left( \begin{array}{c} -s\\r\end{array}\right) = (-1)^r \left(\begin{array}{c} s+r -1\\r\end{array}\right)

The right hand side of (6) involves \zeta only at the arguments r-k > -k, which have already been defined, so (6) determines \zeta(-k) recursively.  Rewrite (6) as

\zeta(-k)=-\sum_{r=1}^{k} \frac{k!}{r!(k-r)!} \frac{\zeta(r-k)(k-r+1)}{(r+1)(k-r+1)}-\frac{1}{k+2}

=-\sum_{r=1}^{k} \left(\begin{array}{c} k+2\\ k-r+1\end{array}\right) \frac{\zeta(r-k)(k-r+1)}{(k+1)(k+2)}-\frac{1}{k+2}

=-\sum_{r=1}^{k-1} \left(\begin{array}{c} k+2\\ k-r+1\end{array}\right) \frac{\zeta(r-k)(k-r+1)}{(k+1)(k+2)}-\frac{1}{k+2} - \frac{\zeta(0)}{k+1}

Collecting terms, substituting for \zeta(0) and multiplying by (k+1)(k+2)  gives

(k+1)(k+2)\zeta(-k)=-\sum_{r=1}^{k-1} \left(\begin{array}{c} k+2\\ k-r+1\end{array}\right) \zeta(r-k)(k-r+1) - \frac{k}{2}

Reindexing gives

(k+1)(k+2)\zeta(-k)=-\sum_{r'=2}^{k} \left(\begin{array}{c} k+2\\ r'\end{array}\right) \zeta(-r'+1)r'-\frac{k}{2}

Now, note that the Bernoulli numbers satisfy the condition \sum_{r=0}^{N-1} \left(\begin{array}{c} N\\ r\end{array}\right) B_r = 0 for N\ge 2.  Hence,  let \zeta(-r'+1)=-\frac{B_{r'}}{r'}

and obtain

(k+1)(k+2)\zeta(-k)=\sum_{r'=0}^{k+1} \left(\begin{array}{c} k+2\\ r'\end{array}\right) B_{r'}-B_0-(k+2)B_1-(k+2)B_{k+1}-\frac{k}{2}

which, using \sum_{r'=0}^{k+1} \left(\begin{array}{c} k+2\\ r'\end{array}\right) B_{r'}=0 together with B_0=1 and B_1=-1/2, gives the self-consistent condition

\zeta(-k)=-\frac{B_{k+1}}{k+1},

which is the analytic continuation of the zeta function for integers k\ge 1.
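
The recursion (6) together with the seed \zeta(0)=-1/2 is easy to verify against this closed form.  Here is a short Python sketch (my own check, using exact rational arithmetic; the Bernoulli numbers are generated from the identity quoted above):

from fractions import Fraction
from math import comb

def bernoulli(m):
    # B_n from sum_{r=0}^{N-1} C(N,r) B_r = 0 with N = n+1 (convention B_1 = -1/2)
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, r) * B[r] for r in range(n)) / (n + 1))
    return B[m]

zeta_at = {0: Fraction(-1, 2)}          # zeta_at[k] holds zeta(-k)
for k in range(1, 9):
    zeta_at[k] = (-sum(Fraction(comb(k, r), r + 1) * zeta_at[k - r] for r in range(1, k + 1))
                  - Fraction(1, k + 2))                       # recursion (6)
    assert zeta_at[k] == -bernoulli(k + 1) / (k + 1)          # closed form above
    print(k, zeta_at[k])   # -1/12, 0, 1/120, 0, -1/252, ...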

Analytic continuation

February 21, 2014

I have received some skepticism about my claim that the sum of the natural numbers can be assigned a value other than -1/12, so I will try to be more precise. I also thought it would be useful to derive the analytic continuation of the zeta function, which I will do in a future post.  I will first give a simpler example to motivate the notion of analytic continuation. Consider the geometric series 1+s+s^2+s^3+\dots. If |s| < 1 then we know that this series is equal to

\frac{1}{1-s}                (1)

Now, while the geometric series is only convergent and thus analytic inside the unit circle, (1) is defined everywhere in the complex plane except at s=1. So even though the sum doesn’t really exist outside of the domain of convergence, we can assign a number to it based on (1). For example, if we set s=2 we can make the assignment 1 + 2 + 4 + 8 + \dots = -1. So again, the sum of the powers of two doesn’t really equal -1; it’s just that (1) is defined at s=2 and the geometric series and (1) are the same function inside the domain of convergence. Now, it is true that the analytic continuation of a function is unique. However, although -1 is the only value the analytic continuation of the geometric series can take at s=2, that doesn’t mean that the sum of the powers of 2 must be uniquely assigned to negative one, because the sum of the powers of 2 is not an analytic function. So if you could find some other series, analytic in some domain of convergence as a function of some parameter z, that happens to look like the sum of the powers of two for some value of z, and you can analytically continue the series to that value, then you would have another assignment.
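
A two-line numerical illustration (my own sketch, plain Python): inside the unit circle the partial sums converge to (1), while at s=2 the partial sums blow up even though (1) is perfectly finite there.

def partial_sum(s, N):
    # partial sum 1 + s + s^2 + ... + s^(N-1) of the geometric series
    return sum(s**n for n in range(N))

print(partial_sum(0.5, 60), 1/(1 - 0.5))   # both are essentially 2: inside the domain of convergence
print(partial_sum(2, 60), 1/(1 - 2))       # the partial sum is astronomical, but the continuation gives -1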

Now consider my example from the previous post. Consider the series

\sum_{n=1}^\infty \frac{n-1}{n^{s+1}}  (2)

This series is absolutely convergent for Re s>1.  Also note that if I set s=-1, I get

\sum_{n=1}^\infty (n-1) = 0 +\sum_{n'=1}^\infty n' = 1 + 2 + 3 + \dots

which is the sum of the natural numbers. Now, I can write (2) as

\sum_{n=1}^\infty\left( \frac{1}{n^s}-\frac{1}{n^{s+1}}\right)

and when the real part of s is greater than 1,  I can further write this as

\sum_{n=1}^\infty\frac{1}{n^s}-\sum_{n=1}^\infty\frac{1}{n^{s+1}}=\zeta(s)-\zeta(s+1)  (3)

All of these operations are perfectly fine as long as I’m in the domain of absolute convergence.  Now, as I will show in the next post, the analytic continuation of the zeta function to the negative integers is given by

\zeta (-k) = -\frac{B_{k+1}}{k+1}

where B_k are the Bernoulli numbers, which are defined by the Taylor expansion

\frac{x}{e^x-1} = \sum_{n=0}^\infty B_n \frac{x^n}{n!}   (4)

The first few Bernoulli numbers are B_0=1, B_1=-1/2, B_2 = 1/6. Using these values in the formula for \zeta(-k) gives \zeta(-1)=-1/12. A similar argument gives \zeta(0)=-1/2.  Using these two values in (3) then gives the desired result that the sum of the natural numbers is (also) 5/12.
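
These assignments are easy to check numerically with a library that implements the analytic continuation.  Here is a minimal Python sketch using mpmath (the name xi matches the \xi defined in the previous post):

from mpmath import mp, zeta, nsum, inf

mp.dps = 20

def xi(s):
    # analytic continuation of the series (2), i.e. zeta(s) - zeta(s+1)
    return zeta(s) - zeta(s + 1)

# for Re s > 1 the series (2) really does sum to zeta(s) - zeta(s+1)
print(nsum(lambda n: (n - 1)/n**3, [1, inf]), xi(2))   # both about 0.442877
# continuation to s = -1, where the series looks like 1 + 2 + 3 + ...
print(xi(-1))   # 5/12 = 0.41666...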

Now this is not to say that all assignments have the same physical value. I don’t know the details of how -1/12 is used in bosonic string theory but it is likely that the zeta function is crucial to the calculation.

Nonuniqueness of -1/12

February 11, 2014

I’ve been asked in the comments to my previous post to give an example of how the sum of the natural numbers could be assigned another value, so I thought the answer may be of general interest. Consider again S=1+2+3+4\dots to be the sum of the natural numbers.  The video in the previous post gives a simple proof by combining divergent sums. In essence, the manipulation is doing renormalization by subtracting away infinities, and what is left over after this renormalization is -1/12. There is another video that gives the proof through analytic continuation of the Riemann zeta function

\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}

The series defining the zeta function converges only when the real part of s is greater than 1. However, you can use analytic continuation to extend the zeta function to values of s where the sum is divergent. What this means is that the zeta function is no longer the “same sum” per se, but a version of the sum taken to a domain where it was not originally defined but smoothly (analytically) connected to the sum. Hence, the sum of the natural numbers is formally given by \zeta(-1), and the infinite sum over ones, \sum_{n=1}^\infty 1, by \zeta(0). By analytic continuation, we obtain the values \zeta(-1)=-1/12 and \zeta(0)=-1/2.

Now notice that if I subtract the sum over ones from the sum over the natural numbers I still get the sum over the natural numbers, e.g.

1+2+3+4\dots - (1+1+1+1\dots)=0+1+2+3+4\dots.

Now, let me define a new function \xi(s)=\zeta(s)-\zeta(s+1) so \xi(-1) is the sum over the natural numbers and by analytic continuation \xi(-1)=-1/12+1/2=5/12 and thus the sum over the natural numbers is now 5/12. Again, if you try to do arithmetic with infinity, you can get almost anything. A fun exercise is to create some other examples.

The sum of the natural numbers is -1/12?

February 10, 2014

This wonderfully entertaining video giving a proof for why the sum of the natural numbers is -1/12 has been viewed over 1.5 million times. It just shows that there is a hunger out there for interesting and well explained math and science content. Now, we all know that the sum of all the natural numbers is infinite but the beauty (insidiousness) of infinite numbers is that they can be assigned to virtually anything. The proof for this particular assignment considers the subtraction of the divergent oscillating sum S_1=1-2+3-4+5 \dots from the divergent sum of the natural numbers S = 1 + 2 + 3+4+5\dots to obtain 4S.  Then by similar trickery it assigns S_1=1/4. Solving for S gives the result S = -1/12.  Hence, what you are essentially doing is arithmetic with infinities, and that, as any school child should know, can give you almost anything you want. The most astounding thing to me about the video was learning that this assignment was used in string theory, which makes me wonder whether the calculations would differ if I chose a different assignment.
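
Written out termwise, the subtraction step is

S-S_1 = (1-1)+(2+2)+(3-3)+(4+4)+\dots = 0 + 4 + 0 + 8 + \dots = 4(1+2+3+\dots)=4S

which, treating S and S_1 as if they were finite, gives 3S=-S_1 and hence S=-S_1/3=-1/12.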

Addendum: Terence Tao has a nice blog post on evaluating such sums.  In a “smoothed” version of the sum, the -1/12 appears as the constant term of an asymptotic expansion whose leading term diverges.  This constant is equivalent to the analytic continuation of the Riemann zeta function. Anyway, the -1/12 seems to be a natural way to assign a value to the divergent sum of the natural numbers.

Talk in Taiwan

November 1, 2013

I’m currently at the National Center for Theoretical Sciences, Math Division, on the campus of the National Tsing Hua University, Hsinchu for the 2013 Conference on Mathematical Physiology.  The NCTS is perhaps the best run institution I’ve ever visited. They have made my stay extremely comfortable and convenient.

Here are the slides for my talk on Correlations, Fluctuations, and Finite Size Effects in Neural Networks.  Here is a list of references that go with the talk

E. Hildebrand, M.A. Buice, and C.C. Chow, ‘Kinetic theory of coupled oscillators’, Physical Review Letters 98, 054101 (2007) [PRL Online] [PDF]

M.A. Buice and C.C. Chow, ‘Correlations, fluctuations and stability of a finite-size network of coupled oscillators’, Physical Review E 76, 031118 (2007) [PDF]

M.A. Buice, J.D. Cowan, and C.C. Chow, ‘Systematic fluctuation expansion for neural network activity equations’, Neural Computation 22, 377-426 (2010) [PDF]

C.C. Chow and M.A. Buice, ‘Path integral methods for stochastic differential equations’, arXiv:1009.5966 (2010)

M.A. Buice and C.C. Chow, ‘Effective stochastic behavior in dynamical systems with incomplete information’, Physical Review E 84, 051120 (2011)

M.A. Buice and C.C. Chow, ‘Dynamic finite size effects in spiking neural networks’, PLoS Computational Biology 9, e1002872 (2013)

M.A. Buice and C.C. Chow, ‘Generalized activity equations for spiking neural networks’, Frontiers in Computational Neuroscience 7:162, doi:10.3389/fncom.2013.00162 (2013), arXiv:1310.6934

Here is the link to relevant posts on the topic.

New paper on neural networks

October 28, 2013

Michael Buice and I have a new paper in Frontiers in Computational Neuroscience as well as on the arXiv (the arXiv version has fewer typos at this point). This paper partially completes the series of papers Michael and I have written about developing generalized activity equations that include the effects of correlations for spiking neural networks. It combines two separate formalisms we have pursued over the past several years. The first was a way to compute finite size effects in a network of coupled deterministic oscillators (e.g. see here, here, here and here).  The second was to derive a set of generalized Wilson-Cowan equations that includes correlation dynamics (e.g. see here, here, and here).  Although both formalisms utilize path integrals, they are actually conceptually quite different. The first formalism adapted the kinetic theory of plasmas to coupled dynamical systems. The second used ideas from field theory (i.e. a two-particle irreducible effective action) to compute self-consistent moment hierarchies for a stochastic system. This paper merges the two ideas to generate generalized activity equations for a set of deterministic spiking neurons.

Richard Azuma, 1930 – 2013

September 20, 2013

I was saddened to learn that Richard “Dick” Azuma, who was a professor in the University of Toronto Physics department from 1961 to 1994 and emeritus after that, passed yesterday. He was a nuclear physicist par excellence and chair of the department when I was there as an undergraduate in the early 80’s. I was in the Engineering Science (physics option) program, which was an enriched engineering program at UofT. I took a class in nuclear physics with Professor Azuma during my third year. He brought great energy and intuition to the topic. He was one of the few professors I would talk to outside of class and one day I asked if he had any open summer jobs. He went out of his way to secure a position for me at the nuclear physics laboratory TRIUMF in Vancouver in 1984. That was the best summer of my life. The lab was full of students from all over Canada and I remain good friends with many of them today. I worked on a meson scattering experiment and although I wasn’t of much use to the experiment I did get to see first hand what happens in a lab. I wrote a 4th year thesis on some of the results from that experiment. I last saw Dick in 2010 when I went to Toronto to give a physics colloquium. He was still very energetic and as engaged in physics as ever. We will all miss him greatly.

New paper on neural networks

March 22, 2013

Michael Buice and I have just published a review paper of our work on how to go beyond mean field theory for systems of coupled neurons. The paper can be obtained here. Michael and I actually pursued two lines of thought on how to go beyond mean field theory and we show how the two are related in this review. The first line started in trying to understand how to create a dynamic statistical theory of a high dimensional fully deterministic system. We first applied the method to the Kuramoto system of coupled oscillators but the formalism could apply to any system. Our recent paper in PLoS Computational Biology applied it to a network of synaptically coupled spiking neurons. I’ve written about this work multiple times (e.g. here,  here, and here). In this series of papers, we looked at how you can compute fluctuations around the infinite system size limit, which defines mean field theory for the system, when you have a finite number of neurons. We used the inverse number of neurons as a perturbative expansion parameter but the formalism could be generalized to expand in any small parameter, such as the inverse of a slow time scale.

The second line of thought was with regard to the question of how to generalize the Wilson-Cowan equation, which is a phenomenological population activity equation for a set of neurons, and which I summarized here. That paper built upon the work that Michael had started in his PhD thesis with Jack Cowan. The Wilson-Cowan equation is a mean field theory of some system but it does not specify what that system is. Michael considered the variable in the Wilson-Cowan equation to be the rate (stochastic intensity) of a Poisson process and prescribed a microscopic stochastic system, dubbed the spike model, that was consistent with the Wilson-Cowan equation. He then considered deviations away from pure Poisson statistics. The expansion parameter in this case was more obscure. Away from a bifurcation (i.e. critical point) the statistics of firing would be pure Poisson but they would deviate near the critical point, so the small parameter was the inverse distance to criticality. Michael, Jack and I then derived a self-consistent set of equations for the mean rate and rate correlations that generalized the Wilson-Cowan equation.

The unifying theme of both approaches is that these systems can be described by either a hierarchy of moment equations or equivalently as a functional or path integral. This all boils down to the fact that any stochastic system is equivalently described by a distribution function or the moments of the distribution. Generally, it is impossible to explicitly calculate or compute these quantities but one can apply perturbation theory to extract meaningful quantities. For a path integral, this involves using Laplace’s method or the method of steepest descents to approximate an integral and in the moment hierarchy method it involves finding ways to truncate or close the system. These methods are also directly related to WKB expansion, but I’ll leave that connection to another post.

Mass

February 10, 2013

Since the putative discovery of the Higgs boson this past summer, I have heard and read multiple attempts at explaining what exactly this discovery means. They usually go along the lines of “The Higgs mechanism gives mass to particles by acting like molasses in which particles move around …” More sophisticated accounts will then attempt to explain that the Higgs boson is an excitation in the Higgs field. However, most of the explanations I have encountered assume that most people already know what mass actually is and why particles need to be endowed with it. Given that my seventh grade science teacher didn’t really understand what mass was, I have a feeling that most nonphysicists don’t really have a full appreciation of mass.

To start out, there are actually two kinds of mass. There is inertial mass, which is the resistance to acceleration and is the mass that goes into Newton’s second law, F = m a, and there is gravitational mass, which is like the “charge” of gravity. The more gravitational mass you have, the stronger the gravitational force. Although they didn’t need to be, these two masses happen to be the same.  The equivalence of inertial and gravitational mass is one of the deepest facts of the universe and is the reason that all objects fall at the same rate. Galileo’s apocryphal Leaning Tower of Pisa experiment was a demonstration that the two masses are the same. You can see this by noting that the gravitational force is given by

(more…)

New paper on finite size effects in spiking neural networks

January 25, 2013

Michael Buice and I have finally published our paper entitled “Dynamic finite size effects in spiking neural networks” in PLoS Computational Biology (link here). Finishing this paper seemed like a Sisyphean ordeal and it is only the first of a series of papers that we hope to eventually publish. This paper outlines a systematic perturbative formalism to compute fluctuations and correlations in a coupled network of a finite but large number of spiking neurons. The formalism borrows heavily from the kinetic theory of plasmas and statistical field theory and is similar to what we used in our previous work on the Kuramoto model (see here and  here) and the “Spike model” (see here).  Our heuristic paper on path integral methods is  here.  Some recent talks and summaries can be found here and here.

(more…)

Talk today at Johns Hopkins

December 12, 2012

I’m giving a computational neuroscience lunch seminar today at Johns Hopkins.  I will be talking about my work with Michael Buice, now at the Allen Institute, on how to go beyond mean field theory in neural networks. Technically, I will present our recent work on systematically computing correlations in a network of coupled neurons with a controlled perturbation expansion in the inverse network size. The method uses ideas from kinetic theory with a path integral construction borrowed and adapted by Michael from nonequilibrium statistical mechanics.  The talk is similar to the one I gave at MBI in October.  Our paper on this topic will appear soon in PLoS Computational Biology. The slides can be found here.

Complexity is the narrowing of possibilities

December 6, 2012

Complexity is often described as a situation where the whole is greater than the sum of its parts. While this description is true on the surface, it actually misses the whole point about complexity. Complexity is really about the whole being much less than the sum of its parts. Let me explain. Consider a television screen with 100 pixels that can be either black or white. The number of possible images the screen can show is 2^{100}. That’s a really big number. Most of those images would look like random white noise. However, a small set of them would look like things you recognize, like dogs and trees and salmon tartare coronets. This narrowing of possibilities, or a reduction in entropy to be more technical, increases information content and complexity. However, too much reduction of entropy, such as restricting the screen to be entirely black or white, would also be considered to have low complexity. Hence, what we call complexity is when the possibilities are restricted but not completely restricted.
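
To put rough numbers on “narrowing of possibilities”, here is a toy Python sketch (my own illustration, under the simplifying assumption that each pixel is independently white with probability p): the Shannon entropy of the screen drops from 100 bits for pure noise toward 0 bits for a fully restricted screen, and the interesting regime is in between.

import numpy as np

def screen_entropy(p_white, n_pixels=100):
    # Shannon entropy in bits of a screen whose pixels are independently white with probability p_white
    p = np.array([p_white, 1 - p_white])
    p = p[p > 0]
    return n_pixels * float(-(p * np.log2(p)).sum())

print(screen_entropy(0.5))    # 100 bits: every image equally likely, pure noise
print(screen_entropy(0.99))   # about 8 bits: possibilities narrowed, recognizable structure possible
print(screen_entropy(1.0))    # 0 bits: the all-white screen, no complexity either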

Another way to think about it is to consider a very high dimensional system, like a billion particles moving around. The system would be complex if the attractor of this six billion dimensional system (3 dimensions for the position and 3 for the velocity of each particle) is a lower dimensional surface or manifold.  The flow of the particles would then be constrained to this attractor. The important thing to understand about the system would then not be the individual motions of the particles but the shape and structure of the attractor. In fact, if I gave you a list of the positions and velocities of each particle as a function of time, you would be hard pressed to discover that there even was a low dimensional attractor. Suppose the particles lived in a box and they moved according to Newton’s laws and only interacted through brief elastic collisions. This is an ideal gas, and what would happen is that the positions of the particles would become uniformly distributed throughout the box while the velocities would obey a normal distribution, called the Maxwell-Boltzmann distribution in physics. The variance of this distribution is proportional to the temperature. The pressure, volume, particle number and temperature are related by the ideal gas law, PV=NkT, with the Boltzmann constant k set by Nature. An ideal gas at equilibrium would not be considered complex because the attractor is a simple fixed point. However, it would be really difficult to discover the ideal gas law or even the notion of temperature if one only focused on the individual particles. The ideal gas law and all of thermodynamics were discovered empirically and only later justified microscopically through statistical mechanics and kinetic theory. However, knowledge of thermodynamics is sufficient for most engineering applications, like designing a refrigerator. If you make the interactions longer range you can turn the ideal gas into a liquid, and if you start to stir the liquid then you can end up with turbulence, which is a paradigm of complexity in applied mathematics. However, the main difference between an ideal gas and turbulent flow is the dimension of the attractor. In both cases, the attractor dimension is still much smaller than the full range of possibilities.
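
As a toy illustration of a macroscopic law hiding in the microscopic description, here is a minimal numpy sketch (my own, with made-up argon-like numbers, not a molecular dynamics simulation): sample velocities from the Maxwell-Boltzmann distribution and recover the ideal gas pressure from the kinetic theory estimate P = (N/V) m <v_x^2>.

import numpy as np

rng = np.random.default_rng(0)
N, V, T = 100000, 1.0, 300.0      # number of particles, volume (m^3), temperature (K); toy values
k = 1.380649e-23                  # Boltzmann constant (J/K)
m = 6.6e-26                       # particle mass (kg), roughly an argon atom

# Maxwell-Boltzmann: each velocity component is normally distributed with variance kT/m
v = rng.normal(0.0, np.sqrt(k*T/m), size=(N, 3))

P_kinetic = (N/V) * m * np.mean(v[:, 0]**2)   # pressure from momentum transfer per unit area
P_ideal = N*k*T/V                             # ideal gas law
print(P_kinetic, P_ideal)                     # agree to within sampling error of a fraction of a percent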

The crucial point is that focusing on the individual motions can make you miss the big picture. You will literally miss the forest for the trees. What is interesting and important about a complex system is not what the individual constituents are doing but how they are related to each other. The restriction to a lower dimensional attractor is manifested by the subtle correlations of the entire system. The dynamics on the attractor can also often be represented by an “effective theory”. Here the use of the word “effective” is not to mean that it works but rather that the underlying microscopic theory is superseded by a macroscopic one. Thermodynamics is an effective theory of the interaction of many particles. The recent trend in biology and economics has been to focus on the detailed microscopic interactions (there is push back in economics in what has been dubbed the macro-wars). As I will relate in future posts, it is sometimes much more effective (in the works-better sense) to consider the effective (in the macroscopic sense) theory than a detailed microscopic theory. In other words, there is no “theory” per se of a given system but rather sets of effective theories that are to be selected based on the questions being asked.

Von Neumann

December 4, 2012

Steve Hsu has a link to a fascinating documentary on John Von Neumann. It’s definitely worth watching.  Von Neumann is probably the last great polymath. Mathematician Paul Halmos laments that Von Neumann perhaps wasted his mathematical gifts by spreading himself too thin. He worries that Von Neumann will only be considered a minor figure in pure mathematics several hundred years hence. Edward Teller believes that Von Neumann simply enjoyed thinking above all else.

Economic growth and reversible computing

November 12, 2012

In my previous post on the debt, a commenter made the important point that there are limits to economic growth. UCSD physicist Tom Murphy has some thoughtful posts on the topic (see here and here). If energy use scales with economic activity then there will be a limit to economic growth because at some point we will use so much energy that the earth will boil, to use Murphy’s metaphor. Even if we become more energy efficient, if the rate of increase in efficiency is slower than the rate of economic growth, then we will still end up boiling. While I agree that this is true given the current state of economic activity and for the near future, I do wish to point out that it is possible to have indefinite economic growth and not use any more energy. As pointed out by Rick Bookstaber (e.g. see here), we are limited in how much we can consume because we are finite creatures. Thus, as we become richer, much of our excess wealth goes not towards increased consumption but towards the quality of that consumption. For example, the energy expenditure of an expensive meal prepared by a celebrity chef is not more than that of a meal from the local diner. A college education today is much more expensive than it was forty years ago without a concomitant increase in energy use. In some sense, much of modern real economic growth is effective inflation. Mobile phones have not gotten cheaper over the past decade because manufacturers keep adding more features to justify the price. We basically pay more for augmented versions of the same thing. So while energy use will increase for the foreseeable future, especially as the developing world catches up, it may not increase as fast as current trends.

However, the main reason why economic growth could possibly continue without energy growth is that our lives are becoming more virtual. One could conceivably imagine a future world in which we spend almost all of our day in an online virtual environment. In such a case, beyond a certain baseline of fulfilling basic physical needs of nutrition and shelter, all economic activity could be digital. Currently computers are quite inefficient. All the large internet firms like Google, Amazon, and Facebook require huge energy intensive server farms. However, there is nothing in principle to suggest that computers need to use energy at all. In fact, all computation can be done reversibly. This means that it is possible to build a computer that creates no entropy and uses no energy. If we lived completely or partially in a virtual world housed on a reversible computer then economic activity could increase indefinitely without using more energy. However, there could still be limits to this growth because computing power could be limited by other things such as storage capacity and relativistic effects. At some point the computer may need to be so large that information cannot be moved fast enough to keep up or the density of bits could be so high that it creates a black hole.
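
To make “computation can be done reversibly” concrete, here is a minimal Python sketch (my own toy illustration) of the Toffoli gate, a standard universal gate for reversible logic: it is a bijection on three-bit states and is its own inverse, so no bits ever need to be erased, and it is bit erasure that carries the unavoidable Landauer energy cost.

from itertools import product

def toffoli(a, b, c):
    # controlled-controlled-NOT: flip c only when both controls a and b are 1
    return a, b, c ^ (a & b)

states = list(product([0, 1], repeat=3))
outputs = [toffoli(*s) for s in states]

assert sorted(outputs) == states                        # a bijection: no two inputs collide, nothing is lost
assert all(toffoli(*toffoli(*s)) == s for s in states)  # applying the gate twice undoes it
print("Toffoli is reversible on all", len(states), "three-bit states")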

Nobel dilemma

July 7, 2012

Now that the Higgs boson has been discovered, the question is who gets the Nobel Prize.  This will be tricky because the discovery was made by two detector teams comprising hundreds of scientists, using the CERN LHC accelerator involving hundreds more, and although Higgs gets the eponymous credit for the prediction, there were actually three papers published almost simultaneously on the topic, with five of the authors still alive.  In fact, we only call it the Higgs boson because of a citation error by Nobel Laureate Steven Weinberg (see here).  One could even argue further that the Higgs boson is really just a variant of the Goldstone boson, discovered by Yoichiro Nambu (Nobel Laureate) and Jeffrey Goldstone (not a Laureate). This is a perfect example of why, as I argued before (see here), discoveries are rarely made by three or fewer people.  Whatever they decide, there will be plenty of disappointed people.

Indistinguishability and transporters

June 16, 2012

I have a memory of  being in a bookstore and picking up a book with the title “The philosophy of Star Trek”.  I am not sure of how long ago this was or even what city it was in. However, I cannot seem to find any evidence of this book on the web.  There is a book entitled “Star Trek and Philosophy: The wrath of Kant“, but that is not the one I recall.  I bring this up because in this book that may or may not exist, I remember reading a chapter on the philosophy of transporters.  For those who have never watched the television show Star Trek, a transporter is a machine that can dematerialize something and rematerialize it somewhere else, presumably at the speed of light.  Supposedly, the writers of the original show invented the device so that they could move people to planet surfaces from the starship without having to use a shuttle craft, for which they did not have the budget to build the required sets.

What the author was wondering was whether the particles of a transported person are the same particles as the ones in the pre-transported person, or whether people are reassembled from stock particles lying around at the new location.  The implication is that this would illuminate the question of whether what constitutes “you” depends on your constituent particles or just the information on how to organize the particles.  I remember thinking that this is a perfect example of how physics can render questions of philosophy obsolete. What we know from quantum mechanics is that particles are indistinguishable. This means that it makes no sense to ask whether a particle in one location is the same as a particle at a different location or time.  A particle is only specified by its quantum properties like its mass, charge, and spin.   All electrons are identical.  All protons are identical and so forth.  Now they could be in different quantum states, so a more valid question is whether a transporter transports all the quantum information of a person or just the classical information, which is much smaller.  However, this question is really only relevant for the brain since we know we can transplant all the other organs from one person to another.   The neuroscience enterprise, Roger Penrose notwithstanding, implicitly operates on the principle that classical information is sufficient to characterize a brain.

Criticality

May 4, 2012

I attended a conference on Criticality in Neural Systems at NIH this week.  I thought I would write a pedagogical post on the history of critical phenomena and phase transitions since it is a long and somewhat convoluted line of thought to link criticality as it was originally defined in physics to neuroscience.  Some of this is a recapitulation of a previous post.

Criticality is about phase transitions, which are changes in the state of matter, such as between gas and liquid. The classic paradigm of phase transitions and critical phenomena is the Ising model of magnetization. In this model, a bunch of spins that can be either up or down (north or south) sit on lattice points. The lattice is said to be magnetized if all the spins are aligned and unmagnetized or disordered if they are randomly oriented. This is a simplification of a magnet where each atom has a magnetic moment which is aligned with a spin degree of freedom of the atom. Bulk magnetism arises when the spins are all aligned.  The lowest energy state of the Ising model is for all the spins to be aligned and hence magnetized. If the only thing the spins had to deal with were the interaction energy then we would be done.  What makes the Ising model interesting, and for that matter all of statistical mechanics, is that the spins are also coupled to a heat bath. This means that the spins are subjected to random noise and the size of this noise is given by the temperature. The noise wants to randomize the spins. The presence of randomness is why there is the word “statistical” in statistical mechanics. What this means is that we can never say for certain what the configuration of a system is but only assign probabilities and compute moments of the probability distribution. Statistical mechanics really should have been called probabilistic mechanics.
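
For readers who want to play with this, here is a minimal Metropolis Monte Carlo sketch of the two dimensional Ising model (my own toy illustration, not from the conference): the heat bath enters only through the temperature T in the acceptance probability, and lowering T through the critical value drives the lattice from disordered to magnetized.

import numpy as np

rng = np.random.default_rng(1)
L, J = 32, 1.0                             # lattice size and coupling strength

def sweep(spins, T):
    # one Metropolis sweep: L*L attempted single-spin flips in contact with a heat bath at temperature T
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nbrs = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
        dE = 2 * J * spins[i, j] * nbrs    # energy cost of flipping spin (i, j), periodic boundaries
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

spins = rng.choice([-1, 1], size=(L, L))   # start disordered
for T in [4.0, 2.0, 1.0]:                  # the critical temperature is about 2.27 J
    for _ in range(200):
        sweep(spins, T)
    print(T, abs(spins.mean()))            # magnetization: typically small above T_c, close to 1 well below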

(more…)

Proof by simulation

February 7, 2012

The process of science and mathematics involves developing ideas and then proving them true.   However, what is meant by a proof depends on what one is doing.  In science, a proof is empirical.  One starts with a hypothesis and then tests it experimentally or observationally.  In pure math, a proof means that a given statement is consistent with a set of rules and axioms.  There is a huge difference between these two approaches.  Mathematics is completely internal.  It simply strives for self-consistency.  Science is external.  It tries to impose some structure on an outside world.  This is why mathematicians sometimes can’t relate to scientists and especially physicists and vice versa.

Theoretical physicists don’t always need to follow the rules.  They can make things up as they go along.  To make a music analogy – physics is like jazz.  There is a set of guidelines but one is free to improvise.  If they are stuck in the middle of a calculation because they can’t solve a complicated equation, they can assume something is small or big or slow or fast and replace the equation with a simpler one that can be solved.  They don’t need to know if any particular step is justified because all that matters is that, in the end, the prediction matches the data.

Math is more like composing western classical music.  There is a strict set of rules that must be followed.  All the notes must fall within the diatonic scale framework.  The rhythm and meter are tightly regulated.  There are a finite number of possible choices at each point in a musical piece just like a mathematical proof.  However,  there are a countably infinite number of possible musical pieces just as there are an infinite number of possible proofs. That doesn’t mean that rules can’t be broken, just that when they are broken a paradigm shift is required to maintain self-consistency in a new system.  Whole new fields of mathematics and genres of music arise when the rules are violated.

The invention of the computer introduced a third means of proof.  Prior to the computer, when making an approximation, one could either take the mathematics approach and try to justify the approximation by putting bounds on the error terms analytically, or take the physicist approach and compare the end result with actual data.  Now one can numerically solve the more complicated expression and compare it directly to the approximation. I would say that I have spent the bulk of my career doing just that. Although I don’t think there is anything intrinsically wrong with proof by simulation, I do find it to be unsatisfying at times. Sometimes it is nice to know that something is true by proving it in the mathematical sense and other times it is gratifying to compare predictions directly with experiments. The most important thing is to always be aware of what mode of proof one is employing.  It is not always clear-cut.
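
A tiny example of the kind of check I mean (a sketch using scipy, not tied to any particular project of mine): numerically solve the full pendulum equation and compare it against the small-angle approximation that replaces sin(theta) with theta.

import numpy as np
from scipy.integrate import solve_ivp

g_over_l = 1.0
theta0 = 0.3                      # initial angle in radians, small enough that the approximation should be decent

# full pendulum: theta'' = -(g/l) sin(theta)
sol = solve_ivp(lambda t, y: [y[1], -g_over_l * np.sin(y[0])],
                (0, 20), [theta0, 0.0], dense_output=True, rtol=1e-8)

t = np.linspace(0, 20, 200)
theta_full = sol.sol(t)[0]
theta_small = theta0 * np.cos(np.sqrt(g_over_l) * t)   # linearized (small-angle) solution

print(np.max(np.abs(theta_full - theta_small)))        # of order 10^-2 by t = 20 for this amplitude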

Two talks

December 8, 2011

Last week I gave a talk on obesity at Georgia State University in Atlanta, GA. Tomorrow, I will be giving a talk on the kinetic theory of coupled oscillators at George Mason University in Fairfax, VA. Both of these talks are variations of ones I have given before so instead of uploading my slides, I’ll just point to links to previous talks, papers, and posts on the topics.  For obesity, see here and for kinetic theory, see here, here and here.

