Archive for the ‘Philosophy’ Category

Linear and nonlinear thinking

October 1, 2014

A linear system is one where the whole is precisely the sum of its parts. You can know how different parts will act together simply by knowing how they act in isolation. A nonlinear system lacks this nice property. For example, a linear function f(x) satisfies f(a x + b y) = a f(x) + b f(y): the function of a sum is the sum of the functions. One important point to note is that what is usually considered the paragon of linearity, namely a line on a graph, i.e. f(x) = m x + b, is not linear, since f(x + y) = m (x + y) + b \ne f(x) + f(y) whenever b \ne 0. The y-intercept b destroys the linearity of the line. A line is instead affine, which is to say a linear function shifted by a constant. A linear differential equation has the form

\frac{dx}{dt} = M x

where x can be a vector of any dimension and M is a matrix. Solutions of a linear differential equation can be multiplied by any constant and added together, and the result is still a solution.
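To make the superposition property concrete, here is a minimal numerical sketch (a toy check assuming numpy and scipy are installed; the matrix M and the initial conditions are arbitrary examples): the solution map of a linear ODE respects weighted sums of initial conditions, while an affine map fails additivity by exactly its constant term.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))          # a random linear system dx/dt = M x
x0, y0 = rng.normal(size=3), rng.normal(size=3)
a, b, t = 2.0, -0.5, 1.3

flow = expm(M * t)                   # solution map: x(t) = expm(M t) x(0)

# Superposition: the flow of a weighted sum equals the weighted sum of the flows.
lhs = flow @ (a * x0 + b * y0)
rhs = a * (flow @ x0) + b * (flow @ y0)
print(np.allclose(lhs, rhs))         # True

# An affine map f(x) = m x + c is not linear: f(x + y) != f(x) + f(y).
m, c = 3.0, 1.0
f = lambda x: m * x + c
print(f(2.0 + 5.0) - (f(2.0) + f(5.0)))   # off by exactly -c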

Linearity is thus essential for engineering. If you are designing a bridge then you simply add as many struts as you need to support the predicted load. Electronic circuit design is also linear in the sense that you combine as many logic circuits as you need to achieve your end. Imagine if bridge mechanics were completely nonlinear so that you had no way to predict how a bunch of struts would behave when assembled together. You would then have to test each combination to see how they work. Now, real bridges are not entirely linear but the deviations from pure linearity are mild enough that you can make predictions or have rules of thumb for what will work and what will not.

Chemistry is an example of a system that is highly nonlinear. You can’t know how a compound will act just based on the properties of its components. For example, you can’t simply mix glass and steel together to get a strong and hard transparent material. You need to be clever in coming up with something like the Gorilla Glass used in iPhones. This is why engineering new drugs is so hard. Although organic chemistry is quite sophisticated in its ability to synthesize various compounds, there is no systematic way to generate molecules of a given shape or potency. We really don’t know how molecules will behave until we create them. Hence, what is usually done in drug discovery is to screen a large number of molecules against specific targets and hope. I was at a computer-aided drug design Gordon conference a few years ago and you could cut the despair and angst with a knife.

That is not to say that engineering is completely hopeless for nonlinear systems. Most nonlinear systems act linearly if you perturb them gently enough. That is why linear regression is so useful and prevalent. Hence, even though the global climate system is highly nonlinear, it probably acts close to linearly for small changes. Thus I feel confident that we can predict the increase in temperature for a 5% or 10% change in the concentration of greenhouse gases but much less confident in what will happen if we double or treble them. How linearly a system will act depends on how close it is to a critical or bifurcation point. If the climate is very far from a bifurcation then it could act linearly over a large range but if we’re near a bifurcation then who knows what will happen if we cross it.
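Here is a toy numerical sketch of that last point (a textbook normal form, not anything climate-specific): for the saddle-node system dx/dt = r - x^2, the stable equilibrium x* = \sqrt{r} responds almost linearly to a small change in the parameter r when r is far from the bifurcation at r = 0, but the same small change destroys the equilibrium entirely when r is close to it.

import numpy as np

dr = 0.05                                   # a fixed small change in the "forcing" r
for r in [4.0, 1.0, 0.25, 0.04]:
    sensitivity = 1.0 / (2.0 * np.sqrt(r))  # dx*/dr: slope of the linear response
    r_new = r - dr
    if r_new > 0:
        shift = np.sqrt(r) - np.sqrt(r_new)
        print(f"r={r:4.2f}: equilibrium shifts by {shift:.4f} "
              f"(linear estimate {sensitivity * dr:.4f})")
    else:
        print(f"r={r:4.2f}: the same small change crosses the bifurcation -- "
              "the equilibrium vanishes and x runs away")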

I think biology is an example of a nonlinear system with a wide linear range. Recent research has found that many complex traits and diseases like height and type 2 diabetes depend on a large number of linearly acting genes (see here). Their genetic effects are additive. Any nonlinear interactions they have with other genes (i.e. epistasis) are tiny. That is not to say that there are no nonlinear interactions between genes. It only suggests that common variations are mostly linear. This makes sense from an engineering and evolutionary perspective. It is hard to do either in a highly nonlinear regime. You need some predictability if you make a small change. If changing an allele had completely different effects depending on what other genes were present then natural selection would be hard pressed to act on it.

However, you also can’t have a perfectly linear system because then you can’t make complex things. An exclusive OR (XOR) logic circuit cannot be constructed without a threshold nonlinearity. Hence, biology and engineering must involve “the linear combination of nonlinear gadgets”. A bridge is the linear combination of highly nonlinear steel struts and cables. A computer is the linear combination of nonlinear logic gates. This occurs at all scales as well. In biology, you have nonlinear molecules forming a linear genetic code. Two nonlinear mitochondria may combine mostly linearly in a cell and two liver cells may combine mostly linearly in a liver. This effective linearity is why organisms can have a wide range of scales. A mouse liver is thousands of times smaller than a human one but their functions are mostly the same. You also don’t need very many nonlinear gadgets to have extreme complexity. The genes between organisms can be mostly conserved while the phenotypes are widely divergent.
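A small sketch of the XOR claim (threshold units standing in for the nonlinear gadgets; the weights are just illustrative choices): a coarse search finds no single linear threshold unit that computes XOR, since XOR is not linearly separable, but a combination of three threshold units does.

import itertools
import numpy as np

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_target = [0, 1, 1, 0]

def threshold_unit(w1, w2, bias):
    return lambda a, b: int(w1 * a + w2 * b + bias > 0)

# A coarse grid search over weights and bias: no single unit reproduces XOR.
grid = np.linspace(-2, 2, 21)
single_unit_works = any(
    [threshold_unit(w1, w2, b)(u, v) for u, v in inputs] == xor_target
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print("single threshold unit can do XOR:", single_unit_works)   # False

# Three nonlinear gadgets wired together: XOR = AND(OR(a, b), NAND(a, b)).
OR   = threshold_unit( 1,  1, -0.5)
NAND = threshold_unit(-1, -1,  1.5)
AND  = threshold_unit( 1,  1, -1.5)
xor = lambda a, b: AND(OR(a, b), NAND(a, b))
print([xor(u, v) for u, v in inputs])                            # [0, 1, 1, 0]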

The morality of watching (American) football

September 7, 2014

The question in this week’s New York Times Ethicist column is whether it is wrong to watch football because of the inherent dangers to the players. The ethicist, Chuck Klosterman, says that it is ethical to watch football because the players made the decision to play freely with full knowledge of the risks. Although I think Klosterman has a valid point and I do not judge anyone who enjoys football, I have personally decided to forgo watching it. I simply could no longer stomach watching player after player going down with serious injuries each week. Klosterman goes on to say in his column that even if football were the only livelihood the players had, we should still watch it so that they could have that livelihood. This is where I disagree. Aside from the fact that we shouldn’t have a society where the only chance to have a decent livelihood is through sports, football need not be that sport. If football did not exist, some other sport, including a modified safer football, would take its place. Soccer is the most popular sport in the rest of the world. Football exists in its current form because the fans support it. If that support moved to another sport, the players would move too.

What is the difference between math, science and philosophy?

May 16, 2014

I’ve been listening to the Philosophy Bites podcast recently. One episode from a few years ago consisted of answers from philosophers to a question posed on the spot and without time for deep reflection: What is Philosophy? Some managed to give precise answers, but many struggled. I think one source of conflict they faced as they answered was that they didn’t know how to separate the question of what philosophers actually do from what they should be doing. However, I think that a clear distinction between science, math and philosophy as methodologies can be specified precisely. I also think that this is important because practitioners in each subject should be aware of what methodology they are actually using and what is appropriate for whatever problem they are working on.

Here are my definitions: Math explores the consequences of rules or assumptions, science is the empirical study of measurable things, and philosophy examines things that cannot be resolved by mathematics or empiricism. With these definitions, practitioners of any discipline may use math, science, or philosophy to help answer whatever question they may be addressing. Scientists need mathematics to work out the consequences of their assumptions and philosophy to help delineate phenomena. Mathematicians need science and philosophy to provide assumptions or rules to analyze. Philosophers need mathematics to sort out arguments and science to test hypotheses experimentally.

Those skeptical of philosophy may suggest that anything that cannot be addressed by math or science has no practical value. However, with these definitions, even the most hardened mathematician or scientist may be practicing philosophy without even knowing it. Atheists like Richard Dawkins should realize that part of their position is based on philosophy and not science. The only truly logical position to take with respect to God is agnosticism. The probability that there is no God who intervenes directly in our lives may be high, but it is not a provable fact. To be an atheist is to put some cutoff on the posterior probability for the existence of God and that cutoff is based on philosophy, not science.

While most scientists and mathematicians are cognizant that moral issues may be pertinent to their work (e.g. animal experimentation), they may be less cognizant of what I believe is an equally important philosophical issue, which is the ontological question. Ontology is a philosophical term for the study of what exists. To many pragmatically minded people, this may sound like an ethereal topic (or a worse adjective) that has no place in the hard sciences. However, as I pointed out in an earlier post, we can put labels on at most a countably infinite number of things out of an uncountable number of possibilities and for most purposes, our ontological list of things is finite. We thus have to choose and although some of these choices are guided by how we as human agents interact with the world, others will be arbitrary. Determining ontology will involve aspects of philosophy, science and math.

Mathematicians face the ontological problem daily when they decide on what areas to work in and what theorems to prove. The possibilities in mathematics are infinite so it is almost certain that if we were to rerun history some, if not many, fields would not be reinvented. While scientists may have fewer degrees of freedom to choose from, they are also making choices and these choices tend to be confined by history. The ontological problem shows up anytime we try to define a phenomenon. The classification of cognitive disorders is a pure exercise in ontology. The authors of the DSM-IV have attempted to be as empirical and objective as possible but there is still plenty of philosophy in their designations of psychiatric conditions. While most string theorists accept that their discipline is mostly mathematical, they should also realize that it is very philosophical. A theory of everything includes the ontology by definition.

Subjects traditionally within the realm of philosophy also have mathematical and scientific aspects. Our morals and values have certainly been shaped by evolution and biological constraints. We should completely rethink our legal philosophy based on what we now know about neuroscience (e.g. see here). The same goes for any discussion of consciousness, the mind-body problem, and free will. To me the real problem with free will isn’t whether or not it exists but rather who or what exactly is exercising that free will and this can be looked at empirically.

So the next time you sit down to solve a problem, think about whether it is one of mathematics, science or philosophy.

The blinking-dot paradox of consciousness

May 6, 2014

Suppose you could measure the activity of every neuron in the brain of an awake and behaving person, including all sensory and motor neurons. You could then represent the firing pattern of these neurons on a screen with a hundred billion pixels (or as many as needed). Each pixel would be identified with a neuron and the activity of the brain would be represented by blinking dots of light. The question then is whether or not the array of blinking dots is conscious (provided the original person was conscious). If you believe that everything about consciousness is represented by neuronal spikes, then you would be forced to answer yes. On the other hand, you must then acknowledge that a television screen simply outputting entries from a table is also conscious.

There are several layers to this possible paradox. The first is whether or not all the information required to fully decode the brain and emulate consciousness is in the spiking patterns of the neurons in the brain. It could be that you need the information contained in all the physical processes in the brain such as the movement of  ions and water molecules, conformational changes of ion channels, receptor trafficking, blood flow, glial cells, and so forth. The question is then what resolution is required. If there is some short distance cut-off so you could discretize the events then you could always construct a bigger screen with trillions of trillions of pixels and be faced with the same question. But suppose that there is no cut-off so you need an uncountable amount of information. Then consciousness would not be a computable phenomenon and there is no hope in ever understanding it. Also, at a small enough scale (Planck length) you would be forced to include quantum gravity effects as well, in which case Roger Penrose may have been on to something after all.

The second issue is whether or not there is a difference between a neural computation and reading from a table. Presumably, the spiking events in the brain are due to the extremely complex dynamics of synaptically coupled neurons in the presence of environmental inputs. Is there something intrinsically different between a numerical simulation of a brain model and reading the entries of a list? Would one exhibit consciousness while the other would not? To make matters even more confusing, suppose you have a computer running a simulation of a brain. The firing of the neurons is now encoded by the states of various electronic components like transistors. Does this mean that the circuits in the computer become conscious when the simulation is running? What if the computer were simultaneously running other programs, like a web browser, or even another brain simulation? In a computer, the execution of a program is not tied to specific electronic components. Transistors just change states as instructions arrive so when a computer is running multiple programs, the transistors simulating the brain are not conserved. How then do they stay coherent to form a conscious perception? In a normal computer operation, the results are fed to an output, which is then interpreted by us. In a simulation of the brain, there is no output; there is just the simulation. Questions like these make me question my once unwavering faith in the monistic (i.e. not dualistic) theory of the brain.

What counts as science?

December 10, 2013

Ever since the financial crisis of 2008 there has been some discussion about whether or not economics is a science. Some, like Russ Roberts of Econtalk, channelling Friedrich Hayek, do not believe that economics is a science. They think it’s more like history where we come up with descriptive narratives that cannot be proven. I think that one thing that could clarify this debate is to separate the goal of a field from its practice. A field could be a science although its practice is not scientific.

To me what defines a science is whether or not it strives to ask questions that have unambiguous answers. In that sense, most of economics is a science. We may never know what caused the financial crisis of 2008 but that is still a scientific question. Now, it is quite plausible that the crisis of 2008 had no particular cause just like there is no particular cause for a winter storm. It could have been just the result of a collection of random events but knowing that would be extremely useful. In this sense, parts of history can also be considered to be a science. I do agree that the practice of economics and history are not always scientific and can never be as scientific as a field like physics because controlled experiments usually cannot be performed. We will likely never find the answer for what caused World War I but there certainly was a set of conditions and events that led to it.

There are parts of economics that are clearly not science such as what constitutes a fair system. Likewise in history, questions regarding who was the best president or military mind are certainly  not science. Like art and ethics these questions depend on value systems. I would also stress that a big part of science is figuring out what questions can be asked. If it is true that recessions are random like winter storms then the question of when the next crisis will hit does not have an answer. There may be a short time window for some predictability but no chance of a long range forecast. However, we could possibly find some necessary conditions for recessions just like cold weather is necessary for a snow storm.

Failure at all scales

March 12, 2013

The premise of most political systems since the Enlightenment is that the individual is a rational actor. The classical liberal (now called libertarian) tradition believes that social and economic ills are due to excessive government regulation and intervention. If individuals are left to participate unfettered in a free market then these problems will disappear. Conversely, the traditional Marxist/Leninist left posits that the capitalistic system is inherently unfair and can only be cured by replacing it with a centrally planned economy. However, the lesson of the twentieth century is that there is irrationality, incompetence, and corruption at all levels, from individuals to societies. We thus need regulations, laws and a government that take into account the fact that we are fallible at all scales, including the regulations, laws and the government themselves.

Markets are not perfect and often fail but they are clearly superior to central planning for the distribution of most resources (particularly consumer goods). However, they need to be monitored and regulated. When markets fail, government should intervene. Even the staunchest libertarian would support laws that prevent the elimination of your competitors by violence. Organized crime and drug cartels are an example of how businesses would run in the absence of laws. However, regulations and laws should have built-in sunset clauses that force them to be reviewed after a finite length of time. In some cases, a freer market makes sense. I believe that the government is bad at picking winners, so if we want to promote alternative energy, we shouldn’t be helping nascent green industries but rather taxing fossil fuel use and letting the market decide what is best. Making cars more fuel-efficient may not lead to less energy use but just encourage people to drive more. If we want to save energy, we should make energy more expensive. We should also make regulations as universal and simple as possible to minimize regulatory capture. I think means testing for social services like Medicare is a bad idea because it will just encourage people to find clever ways to circumvent it. The same probably goes for need-based welfare. We should just give everyone a minimum income and let everyone keep any income above it. This would then provide a safety net but not a disincentive to work. Some people will choose to live on this minimum income but as I argued here, I think they should be allowed to. If we want to address wealth inequality then we should probably tax wealth directly rather than income. We want to encourage people to make as much money as possible but then spend it to keep the wealth circulating. By the same reasoning, I don’t like a consumption tax. Our economy is based on consumer spending so we don’t want to discourage that (unless it is for reasons other than economic ones).

People do not suddenly become selfless and rational when the political system changes but systems can mitigate the effects of their irrational and selfish tendencies. As the work of Kahneman, Tversky, Ariely, and others has shown, rational and scientific thinking does not come naturally to people. Having the market decide what is the most effective medical treatment is not a good idea. A perfect example is in a recent Econtalk podcast with libertarian-leaning economist John Cochrane on healthcare. Cochrane suggested that instead of seeing a doctor first, he should just be allowed to buy antibiotics for his children whenever they had an earache. The most laughable part was his idea that we have rules against the self-administration of drugs to protect uneducated people. Actually, the rules are to protect highly educated people like him who think that expertise in one area transfers to another. The last thing we want is even more antibiotic use and more antibiotic-resistant bacterial strains. I definitely do not want to live in a society where I have to wait for the market to penalize companies that provide unsafe food or build unsafe buildings. It doesn’t help me if my house collapses in an earthquake because the builder used inferior materials. Sure, they may go out of business but I’m already dead.

There is no single perfect system or set of rules that one should always follow. We should design laws, regulations, and governments that are adaptable and adjust according to need. The US Constitution has been amended 27 times. The last time was in 1992, which just changed the rules on salaries for elected officials. The 26th amendment in 1971 made 18 the universal threshold age for voting. We are thus due for another amendment and I think the 2nd amendment, which guarantees the right to bear arms, is a place to start. We could make it more explicit what types of arms are protected and what types can be regulated by local laws. If we want to reduce gun violence then gun regulation makes sense. People will do things they later regret. If one is in the heat of an argument and there is a gun available then it could be used inadvertently. It takes a lot of training and skill to use a gun effectively. Accidents will happen. In the case of guns, failure often leads to death. I would prefer to live in a society where guns are scarce rather than one where everyone carries a weapon like the old wild west.

Aboriginals and Canada

February 21, 2013

This lecture by John Ralston Saul captures the essence of Canada better than anything else I’ve ever heard or read.  Every Canadian should listen and non-Canadians could learn something too!

Creating vs treating a brain

January 23, 2013

The NAND (Not AND) gate is all you need to build a universal computer. In other words, any computation that can be done by your desktop computer can be accomplished by some combination of NAND gates. If you believe the brain is computable (i.e. can be simulated by a computer) then in principle, this is all you need to construct a brain. There are multiple ways to build a NAND gate out of neuro-wetware. A simple example takes just two neurons. A single neuron can act as an AND gate by having a spiking threshold high enough such that two simultaneous synaptic events are required for it to fire. This neuron then inhibits a second neuron that is otherwise always active, so the second neuron is silenced only when the first receives two simultaneous inputs; its output is therefore NOT (A AND B), i.e. NAND. A network of these NAND circuits can do any computation a brain can do. In this sense, we already have all the elementary components necessary to construct a brain. What we do not know is how to put these circuits together. We do not know how to do this either by hand or with a learning rule so that a network of neurons could wire itself. However, it could be that the currently known neural plasticity mechanisms like spike-timing dependent plasticity are sufficient to create a functioning brain. Such a brain may be very different from our brains but it would be a brain nonetheless.
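Here is a sketch of that two-neuron construction (a threshold/rate caricature with made-up weights, not a biophysical model), together with the standard demonstration that NAND is universal: every other gate can be wired from it.

def neuron(drive, threshold):
    return 1 if drive > threshold else 0

def nand(a, b):
    n1 = neuron(a + b, threshold=1.5)           # fires only for two coincident inputs (AND)
    n2 = neuron(1.0 - 2.0 * n1, threshold=0.5)  # tonic drive of 1, strong inhibition from n1
    return n2

# NAND is universal: wire the other gates out of it.
NOT = lambda a: nand(a, a)
AND = lambda a, b: NOT(nand(a, b))
OR  = lambda a, b: nand(NOT(a), NOT(b))
XOR = lambda a, b: AND(OR(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "XOR:", XOR(a, b))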

The fact that there are an infinite number of ways of creating a NAND gate out of neuro-wetware implies that there are an infinite number of ways of creating a brain. You could take two neural networks with the same set of neurons and learning rules, expose them to the same set of stimuli and end up with completely different brains. They could have the same capabilities but be wired differently. The brain could be highly sensitive to initial conditions and noise so any minor perturbation would lead to an exponential divergence in outcomes. There might be some regularities (like scaling laws) in the connections that could be deduced but the exact connections would be different. If this were true then the connections would be everything and nothing. They would be so intricately correlated that only if taken together would they make sense. Knowing some of the connections would be useless. The real brain is probably not this extreme since we can sustain severe injuries to the brain and still function. However, the total number of hard-wired conserved connections cannot exceed the number of bits in the genome. The other connections (which are almost all of them) are either learned or are random. We do not know which is which.

To clarify my position on the Hopfield Hypothesis, I think we may already know enough to create a brain but we do not know enough to understand our brain. This distinction is crucial. What my lab has been interested in lately is understanding and discovering new treatments for cognitive disorders like autism (e.g. see here). This implies that we need to know how perturbations at the cellular and molecular levels affect the behavioural level. This is obviously a daunting task. Our hypothesis is that the bridge between these two extremes is the canonical cortical circuit consisting of recurrent excitation and lateral inhibition. We and others have shown that such a simple circuit can explain the neural firing dynamics in diverse tasks such as working memory and binocular rivalry (e.g. see here). The hope is that we can connect the genetic and molecular perturbations to the circuit dynamics and then connect the circuit dynamics to behavior. In this sense, we can circumvent the really hard problem of how the canonical circuits are connected to each other. This may not lead to a complete understanding of the brain or the ability to treat all disorders but it may give insights into how genes and medication act on cognitive function.
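To give a flavour of what such a canonical circuit can do, here is a toy rate model (a standard textbook-style caricature with made-up parameters, not our lab’s actual model) in which two populations coupled by mutual inhibition with slow adaptation alternate in dominance, the usual cartoon of binocular rivalry.

import numpy as np

def f(x):                                  # steep firing-rate nonlinearity
    return 1.0 / (1.0 + np.exp(-(x - 0.2) / 0.05))

dt, T = 0.001, 60.0
beta, g, tau_a = 1.0, 2.0, 5.0             # cross-inhibition, adaptation gain, adaptation time
I1, I2 = 0.5, 0.5                          # equal inputs to the two populations
u1, u2, a1, a2 = 0.1, 0.0, 0.0, 0.0        # small asymmetry to break the tie

dominance = []
for _ in range(int(T / dt)):
    du1 = -u1 + f(I1 - beta * u2 - g * a1)
    du2 = -u2 + f(I2 - beta * u1 - g * a2)
    da1 = (-a1 + u1) / tau_a
    da2 = (-a2 + u2) / tau_a
    u1, u2, a1, a2 = u1 + dt * du1, u2 + dt * du2, a1 + dt * da1, a2 + dt * da2
    dominance.append(1 if u1 > u2 else 2)

# Alternation in dominance is the signature of rivalry in this toy model.
switches = sum(1 for i in range(1, len(dominance)) if dominance[i] != dominance[i - 1])
print("dominance switches over the run:", switches)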

Von Neumann’s response

December 11, 2012

Here is von Neumann’s response to the question of what happens when mathematics strays too far from its empirical sources:

“[M]athematical ideas originate in empirics, although the genealogy is sometimes long and obscure. But, once they are so conceived, the subject begins to live a peculiar life of its own and is better compared to a creative one, governed by almost entirely aesthetic considerations, than to anything else, and, in particular, to an empirical science. There is, however, a further point which, I believe, needs stressing. As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired by ideas coming from ‘reality’, it is beset with very grave dangers. It becomes more and more purely aestheticising, more and more purely l’art pour l’art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganised mass of details and complexities. In other words, at a great distance from its empirical source, or after much ‘abstract’ inbreeding, a mathematical subject is in danger of degeneration.”

Thanks to James Lee for pointing this out.

Complexity is the narrowing of possibilities

December 6, 2012

Complexity is often described as a situation where the whole is greater than the sum of its parts. While this description is true on the surface, it actually misses the whole point about complexity. Complexity is really about the whole being much less than the sum of its parts. Let me explain. Consider a television screen with 100 pixels that can be either black or white. The number of possible images the screen can show is 2^{100}. That’s a really big number. Most of those images would look like random white noise. However, a small set of them would look like things you recognize, like dogs and trees and salmon tartare coronets. This narrowing of possibilities, or a reduction in entropy to be more technical, increases information content and complexity. However, too much reduction of entropy, such as restricting the screen to be entirely black or white, would also be considered to have low complexity. Hence, what we call complexity is when the possibilities are restricted but not completely restricted.

Another way to think about it is to consider a very high dimensional system, like a billion particles moving around. A complex system would be one where the attractor of this six billion dimensional system (3 position and 3 velocity coordinates for each particle) is a lower dimensional surface or manifold. The flow of the particles would then be constrained to this attractor. The important thing to understand about the system would then not be the individual motions of the particles but the shape and structure of the attractor. In fact, if I gave you a list of the positions and velocities of each particle as a function of time, you would be hard pressed to discover that there even was a low dimensional attractor. Suppose the particles lived in a box and they moved according to Newton’s laws and only interacted through brief elastic collisions. This is an ideal gas and what would happen is that the positions of the particles would be uniformly distributed throughout the box while the velocities would obey a normal distribution, called a Maxwell-Boltzmann distribution in physics. The variance of this distribution is proportional to the temperature. The pressure, volume, particle number and temperature will be related by the ideal gas law, PV=NkT, where the Boltzmann constant k is set by Nature. An ideal gas at equilibrium would not be considered complex because the attractor is a simple fixed point. However, it would be really difficult to discover the ideal gas law or even the notion of temperature if one only focused on the individual particles. The ideal gas law and all of thermodynamics were discovered empirically and only later justified microscopically through statistical mechanics and kinetic theory. However, knowledge of thermodynamics is sufficient for most engineering applications like designing a refrigerator. If you make the interactions longer range you can turn the ideal gas into a liquid and if you start to stir the liquid then you can end up with turbulence, which is a paradigm of complexity in applied mathematics. However, the main difference between an ideal gas and turbulent flow is the dimension of the attractor. In both cases, the attractor dimension is still much smaller than the full range of possibilities.
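As a minimal sketch of this point about effective theories (not a real molecular dynamics simulation: the particles here do not collide with each other, so the Maxwell-Boltzmann velocities are put in by hand rather than emerging from collisions), one can track the individual particles bouncing off the walls of a box and recover the macroscopic ideal gas law from the momentum they transfer to one wall.

import numpy as np

rng = np.random.default_rng(1)
N, L, T_sim = 5000, 1.0, 50.0               # particles, box side, total simulated time
m, kT = 1.0, 1.0                            # mass and temperature (Boltzmann constant folded in)

pos = rng.uniform(0, L, size=(N, 3))
vel = rng.normal(0.0, np.sqrt(kT / m), size=(N, 3))   # Maxwell-Boltzmann velocities

dt, impulse = 0.01, 0.0
for _ in range(int(T_sim / dt)):
    pos += vel * dt
    hit_hi = pos[:, 0] > L                  # hits on the x = L wall: tally momentum transfer
    impulse += 2 * m * np.abs(vel[hit_hi, 0]).sum()
    for d in range(3):                      # reflect off all six walls
        out = (pos[:, d] < 0) | (pos[:, d] > L)
        vel[out, d] *= -1
        pos[:, d] = np.clip(pos[:, d], 0, L)

pressure = impulse / (T_sim * L * L)        # time-averaged force per unit area on one wall
print("measured P V:", round(pressure * L**3), " ideal gas N k T:", N * kT)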

The crucial point is that focusing on the individual motions can make you miss the big picture. You will literally miss the forest for the trees. What is interesting and important about a complex system is not what the individual constituents are doing but how they are related to each other. The restriction to a lower dimensional attractor is manifested by the subtle correlations of the entire system. The dynamics on the attractor can also often be represented by an “effective theory”. Here the use of the word “effective” is not to mean that it works but rather that the underlying microscopic theory is superseded by a macroscopic one. Thermodynamics is an effective theory of the interaction of many particles. The recent trend in biology and economics has been to focus on the detailed microscopic interactions (there is push back in economics in what has been dubbed the macro-wars). As I will relate in future posts, it is sometimes much more effective (in the “works better” sense) to consider the effective (in the macroscopic sense) theory than a detailed microscopic theory. In other words, there is no “theory” per se of a given system but rather sets of effective theories that are to be selected based on the questions being asked.

Complete solutions to life’s little problems

September 25, 2012

One of the nice consequences of the finiteness of human existence is that there can exist complete solutions to some of our problems. For example, I used to leave the gasoline (petrol for non-Americans) cap of my car on top of the gas pump every once in a while. This has now been completely solved by the ludicrously simple solution of tethering the cap to the car. I could still drive off with the gas cap dangling but I wouldn’t lose it. The same goes for locking myself out of my car. The advent of remote control locks has also eliminated this problem. Because human reaction time is finite, there is also an absolute threshold for internet bandwidth above which the web browser will seem instantaneous for loading pages and simple computations. Given our finite lifespan, there is also a threshold for the amount of disk space required to store every document, video, and photo we will ever want. The converse is that there are also more books in existence than we can possibly read in a lifetime, although there will always be just a finite number of books by specific authors that we may enjoy. I think one strategy for life is to make finite as many things as possible because then there is a chance for a complete solution.

The false dichotomy of carbs and obesity

July 1, 2012

The law of the excluded middle is one of the foundations of logic. It says that a proposition must be either true or false, so if a proposition is false then its negation must be true. There is no room for a middle ground in classical logic. However, one must be extremely careful when applying the law to biology, where hypotheses are generally situational and rest on many assumptions. In order to apply the law of the excluded middle, one must have only two alternatives and this is seldom true in biology and in particular human metabolism. Gary Taubes argued quite successfully in his book Good Calories, Bad Calories that fat probably doesn’t cause heart disease and in some cases may even be beneficial. A major theme of that book was that scientists can become irrationally attached to hypotheses and willfully ignore any evidence to the contrary. He recently penned a New York Times opinion piece arguing that the medical establishment is equally misguided in asserting that salt is unhealthy. One of the hypotheses that Taubes dislikes the most is that “a calorie is a calorie”, which proposes that what you eat is not as important as how much you eat when it comes to weight gain and obesity. Taubes thinks that carbs, and especially sugar, are what make you fat (and cause heart disease). This is summarized in his Times opinion piece today, which covers the recent JAMA result that I posted about recently (see here).

It may very well be true that a calorie is not a calorie but that still may not mean carbs are the cause of the US obesity epidemic. I’ve posted on this a few times before (e.g. see here and here) but I thought it was important enough to reiterate and simplify the points here. In short, the carbs-are-bad argument is that 1) carbs induce insulin and insulin sequesters fat, and 2) carbs are metabolically more efficient so you burn fewer calories when you eat them compared to fat and protein. Even if this is true (and it may not all be), that still doesn’t mean that calories are unimportant. I don’t care how metabolically efficient carbs may be, you would starve to death if you only ate one sugar cube each day. Conversely, no matter how many excess calories you may burn eating fat, you will become obese if you eat two pounds of butter each day. Hence, even if a calorie is not a calorie, calories still matter. It is then a matter of degree. If you manage to burn everything you eat then your body won’t change. This is true whether you eat a high carb or a low carb diet. Now it could be true that you could have a different amount of body fat and weight for the same caloric intake depending on diet composition. So a plausible hypothesis for the cause of the obesity epidemic is that we switched from a high fat diet to a low fat diet and everyone became fatter as a result. This is something that I’m planning to test using the same data that we used to show how the increase in food production is sufficient to explain the obesity epidemic. Ultimately though, the brain is what decides how much we eat and one of the biggest things we don’t understand is how diet composition affects food intake. It could be that low carb diets do make you thinner but the reason is that we tend to eat less when we’re on them.
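To make the “calories still matter” point concrete, here is a toy energy-balance sketch (emphatically not the validated model referred to above; the expenditure slope, baseline, and energy density of tissue are rough illustrative assumptions): expenditure rises with body weight, weight drifts until expenditure matches intake, and so a sustained calorie surplus raises the steady-state weight regardless of where the calories come from.

def simulate_weight(intake_kcal_per_day, w0=70.0, years=10,
                    baseline_kcal=500.0, kcal_per_kg=24.0, kcal_per_kg_tissue=7700.0):
    # Simple energy balance: the surplus or deficit each day is stored or burned as tissue.
    w = w0
    for _ in range(int(years * 365)):
        expenditure = baseline_kcal + kcal_per_kg * w
        w += (intake_kcal_per_day - expenditure) / kcal_per_kg_tissue
    return w

print(round(simulate_weight(2180), 1))   # ~70 kg: intake matches expenditure at the starting weight
print(round(simulate_weight(2420), 1))   # ~80 kg: a sustained ~240 kcal/day surplus settles higher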

2012-7-2: changed fat to carb in last  sentence.

Understandability

June 23, 2012

In my  post on panpsychism, a commenter, Matt Sigl, made a valiant defense of the ideas of Koch and Tononi about consciousness. I claimed in my post that panpsychism, where some or all the constituents of a system possess some elementary form of consciousness, is no different from dualism, which says that mind and body are separate entities. Our discussion, which can be found in the comment thread, made me think more about what it means for a theory to be monistic and understandable.  I have now revised my claim to be that panpsychism is either dualist or superfluous. Tononi’s idea of integrated information may be completely correct but panpsychism would not add anything more to it. In my view, a  monistic theory is one where all the properties of a system can be explained by the fundamental governing rules. Most importantly there can only be a finite set of rules. A system with an infinite set of rules is not understandable since every situation has its own consequence. There would be no predictability; there would  be no science. There would only be history where we could write down each rule whenever we observed it.

Consider a system of identical particles that can move around in a three dimensional space and interact with each other in a pairwise fashion. Let the motion of these particles obey Newton’s laws, where their acceleration is determined by a force that is given by an interaction rule or potential. The proportionality constant between acceleration and force is the mass, which is assigned to each particle. The particles are then given an initial position and velocity. All of these rules can be specified in absolutely precise terms mathematically. Space can be discrete, so that the particles can only occupy a finite or countably infinite number of points, or continuous, so that the particles can occupy an uncountably infinite number of points.

Depending on how I define the interactions, select the masses, and specify the initial conditions, various things could happen. For example, I could have an attractive interaction, start all the particles with no velocity at the same point, and they would stay clumped together. This clumped state is a fixed point of the system. If I move one of the particles slightly away from the point and it falls back to the clump then the fixed point is stable. However, even a stable fixed point doesn’t mean all initial conditions will end up clumped. For example, if I have an inverse square law attraction like gravity, then particles can orbit one another or scatter off of each other. For many initial conditions, the particles could just bounce around indefinitely and never settle into a fixed point. For more than two particles, the fate of all initial conditions is generally impossible to predict. However, I claim that the configuration of the system at any given time is explainable or understandable because I could in principle simulate the system from a given specific initial condition and determine its trajectory for any amount of time. For a continuous system, where positions require an infinite amount of information to specify, an understandable system would be one where one could prove that for any arbitrary initial condition there is an initial condition, specifiable with a finite amount of information, that remains close to it.
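A minimal sketch of that thought experiment (spring-like pairwise attraction and unit masses, chosen purely for convenience): start the particles together at rest, nudge one of them, and check that the perturbation stays bounded rather than the particle wandering off.

import numpy as np

N, dt, steps = 10, 0.001, 20000
k = 1.0                                    # strength of the pairwise attraction
pos = np.zeros((N, 3))
vel = np.zeros((N, 3))
pos[0] = [0.01, 0.0, 0.0]                  # nudge one particle away from the clump

max_excursion = 0.0
for _ in range(steps):
    # Force on particle i: sum over j of -k (x_i - x_j), i.e. attraction toward every other particle.
    forces = -k * (N * pos - pos.sum(axis=0))
    vel += forces * dt                     # unit masses, semi-implicit Euler step
    pos += vel * dt
    centered = pos - pos.mean(axis=0)
    max_excursion = max(max_excursion, np.abs(centered).max())

print("largest excursion from the clump:", round(max_excursion, 4))   # stays near the initial nudge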

If I make the dynamics sufficiently complex then there could be some form of basic chemistry and even biology. This need not be fully quantum mechanical;  Bohr-like atoms may be enough. If the system can form sufficiently complex molecules then evolution could take over and generate multi-cellular life forms. At some point, animals with brains could arise.  These animals could possess memory and enough computational capability to strategize and plan for the future.  There could be an entire ecosystem of plants and animals at multiple scales interacting in highly complex ways. All of this could be understandable in the sense that all of the observed dynamics could be simulated on a big enough computer if you knew the rules and the initial conditions. You may even be lucky enough that almost all initial conditions will lead to complex life.

At this point, all the properties of the system can be completely specified by an outside observer. Understandable means that all of these properties can be shown to arise from a finite set of rules and initial conditions. Now, suppose that some of the animals are also conscious in the sense that they have a subjective experience. The panpsychic hypothesis is that consciousness is a property of some or all the particles. However, proponents must then explain why even the biggest rock does not seem conscious or why human consciousness disappears when we are in deep sleep. Tononi and Koch try to finesse this problem by saying that only if one has enough integrated information does one notice the effect of the accumulated consciousness. However, bringing in this secondary criterion obviates the panpsychic hypothesis because there is now a systematic way to identify consciousness that is completely consistent with an emergent theory of consciousness. This doesn’t dispel the mystery of “the hard problem” of consciousness, namely what exactly happens when the threshold is crossed to give subjective experience. However, the resolution is either that consciousness can be described by the finite set of rules of the constituent particles or there is a dualistic explanation where the brain “taps” into some other system that generates consciousness. Panpsychism does not help in resolving this dilemma. Finally, it might be that the question of whether or not a system has sufficient integrated information to exhibit noticeable consciousness is undecidable, in which case there would be no algorithm to test for consciousness. The best that one could do is to point to specific cases. If this were true then panpsychism does not solve any problem at all. We would never have a theory of consciousness. We would only have examples.

Indistinguishability and transporters

June 16, 2012

I have a memory of  being in a bookstore and picking up a book with the title “The philosophy of Star Trek”.  I am not sure of how long ago this was or even what city it was in. However, I cannot seem to find any evidence of this book on the web.  There is a book entitled “Star Trek and Philosophy: The wrath of Kant“, but that is not the one I recall.  I bring this up because in this book that may or may not exist, I remember reading a chapter on the philosophy of transporters.  For those who have never watched the television show Star Trek, a transporter is a machine that can dematerialize something and rematerialize it somewhere else, presumably at the speed of light.  Supposedly, the writers of the original show invented the device so that they could move people to planet surfaces from the starship without having to use a shuttle craft, for which they did not have the budget to build the required sets.

What the author was wondering was whether the particles of a transported person were the same particles as the ones in the pre-transported person or whether people were reassembled with stock particles lying around in the new location. The implication was that this would then illuminate the question of whether what constitutes “you” depends on your constituent particles or just the information on how to organize the particles. I remember thinking that this is a perfect example of how physics can render questions of philosophy obsolete. What we know from quantum mechanics is that particles are indistinguishable. This means that it makes no sense to ask whether a particle in one location is the same as a particle at a different location or time. A particle is only specified by its quantum properties like its mass, charge, and spin. All electrons are identical. All protons are identical and so forth. Now they could be in different quantum states, so a more valid question is whether a transporter transports all the quantum information of a person or just the classical information, which is much smaller. However, this question is really only relevant for the brain since we know we can transplant all the other organs from one person to another. The neuroscience enterprise, Roger Penrose notwithstanding, implicitly operates on the principle that classical information is sufficient to characterize a brain.

Panpsychism

May 31, 2012

I have noticed that panpsychism, which is the idea that some or all elements of matter possess some form of consciousness, subjective experience, mental awareness, or whatever you would like to call it, seems to be gaining favour these days. Noted neuroscientist Christof Koch has recently suggested that consciousness may be a property of matter like mass or charge. I was just listening to a Philosophy Bites podcast where philosopher Galen Strawson (listen here) was forcefully arguing that panpsychism or micropsychism was in fact the most plausible prior if one is a physicalist or monist (i.e. someone who believes that everything is made of the same stuff). He argued that it was much more plausible for electrons to possess some tiny amount of consciousness than for it to emerge from the interactions of a large number of neurons.

What I want to point out  is that panpsychism is a closeted form of dualism (i.e. mind is different from matter). I believe philosopher David Chalmers, who coined the term “The hard problem of consciousness“, would agree.  Unlike consciousness, mass and charge can be measured and obey well-defined rules. If I were to make a computer simulation of the universe, I could incorporate mass and charge into the physical laws, be they Newton’s Laws and Maxwell’s equations, the Standard Model of particle physics, String theory, or whatever will replace that.  However, I have no idea how to incorporate consciousness into any simulation. Deeming consciousness to be a property of matter is no different from Cartesian dualism.  Both off-load the problem to a separate realm. You can be a monist or a panpsychist but you cannot be both.

Causality and obesity

May 23, 2012

The standard adage for complex systems as seen in biology and economics is that “correlation does not imply causation.” The question then is how you ever prove that something causes something else. In the example of obesity, I stated in my New York Times interview that the obesity epidemic was caused by an increase in food availability. What does that mean? If you strictly follow formal logic then this means that a) an increase in food supply will lead to an increase in obesity (i.e. modus ponens) and b) if there were no obesity epidemic then there would not have been an increase in food availability (i.e. modus tollens). It doesn’t mean that if there were not an increase in food availability then there would be no obesity epidemic. This is where many people seem to be confused. The obesity epidemic could have been caused by many things. Some argue that it was a decline in physical activity. Some say that it is due to some unknown environmental agent. Some believe it is caused by an overconsumption of sugar and high fructose corn syrup. They could all be true and that still doesn’t mean that increased food supply was not a causal factor. Our validated model shows that if you feed the US population the extra food then there will be an increase in body weight that more than accounts for the observed rise. We have thus satisfied a) and so I can claim that the obesity epidemic was caused by an increase in food supply.

Stating that obesity is a complex phenomenon that involves lots of different factors and that there cannot be a simple explanation is not an argument against my assertion. This is what I call hiding behind complexity. Yes, it is true that obesity is complex but that is not an argument for saying that food is not a causal factor. If you want to disprove my assertion then what you need to do is to find a country that does not have an obesity epidemic but did exhibit an increase in food supply that was sufficient to cause one. My plan is to do this by applying our model to other nations as soon as I am able to get hold of data on body weights over time. This has proved more difficult than I expected. The US should be commended for having good, easily accessible data. Another important point to consider is that even if increased food supply caused the obesity epidemic, this does not mean that reducing food supply will reverse it. There could be other effects that maintain it even in the absence of excess food. As we all know, it’s complicated.

Known unknown unknowns

April 20, 2012

I listened today to the podcast of science fiction writer Bruce Sterling’s Long Now Foundation talk from 2004 on “The Singularity: Your Future as a Black Hole”. The talk is available here. Sterling describes some of mathematician and science fiction writer Vernor Vinge’s conception of the singularity as a scary moment in time where superhuman intelligence ends the human era and we have no way to predict what will happen. I won’t address the issue of whether or not such a moment in time will or will not happen in the near future or ever. I’ve posted about it in the past (e.g. see here, here and here). What I do want to discuss is whether or not there can exist events or phenomena that are so incomprehensible that they will reduce us to a quivering pile of mush. I think an excellent starting point is former US Secretary of Defense Donald Rumsfeld’s infamous speech from 2002 regarding the link between Iraq and weapons of mass destruction prior to the Iraq war, where he said:

[T]here are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – there are things we do not know we don’t know.

While Rumsfeld was mocked by the popular media for this seemingly inane statement, I actually think (geopolitical consequences aside) that it was the deepest thing I ever heard him say. Rumsfeld is a Bayesian! There is a very important distinction between known unknowns and unknown unknowns. In the first case, we can assign a probability to the event. In the second we cannot. Stock prices are known unknowns, while black swans are unknown unknowns. (Rumsfeld’s statement predates Nassim Taleb’s book.) My question is not whether we can predict black swans (by definition we cannot) but whether something can ever occur that we wouldn’t even be able to describe, much less understand.

In Bayesian language, a known unknown would be any event for which a prior probability exists. Unknown unknowns are events for which there is no prior. It’s not just that the prior is zero, but also that it is not included in our collection (i.e. sigma algebra) of possibilities. Now, the space of all possible things is uncountably infinite so it seems likely that there will be things that you cannot imagine. However, I claim that simply acknowledging that there exist things that I cannot possibly ever imagine is sufficient to remove the surprise. We’ve witnessed enough in the post-modern world to assign a nonzero prior to the possibility that anything can and will happen. That is not to say that we won’t be very upset or disturbed by some event. We may read about some horrific act of cruelty tomorrow that will greatly perturb us but it won’t be inconceivably shocking. The entire world could vanish tomorrow and be replaced by an oversized immersion blender and while I wouldn’t be very happy about it and would be extremely puzzled by how it happened, I would not say that it was impossible. Perhaps I won’t be able to predict what will happen after the singularity arrives but I won’t be surprised by it.

Proof by simulation

February 7, 2012

The process of science and mathematics involves developing ideas and then proving them true.   However, what is meant by a proof depends on what one is doing.  In science, a proof is empirical.  One starts with a hypothesis and then tests it experimentally or observationally.  In pure math, a proof means that a given statement is consistent with a set of rules and axioms.  There is a huge difference between these two approaches.  Mathematics is completely internal.  It simply strives for self-consistency.  Science is external.  It tries to impose some structure on an outside world.  This is why mathematicians sometimes can’t relate to scientists and especially physicists and vice versa.

Theoretical physicists don’t need to always follow rules. What they can do is make things up as they go along. To make a music analogy – physics is like jazz. There is a set of guidelines but one is free to improvise. If in the middle of a calculation they are stuck because they can’t solve a complicated equation, then they can assume something is small or big or slow or fast and replace the equation with a simpler one that can be solved. One doesn’t need to know if any particular step is justified because all that matters is that in the end, the prediction must match the data.

Math is more like composing western classical music. There is a strict set of rules that must be followed. All the notes must fall within the diatonic scale framework. The rhythm and meter are tightly regulated. There are a finite number of possible choices at each point in a musical piece just like in a mathematical proof. However, there are a countably infinite number of possible musical pieces just as there are an infinite number of possible proofs. That doesn’t mean that rules can’t be broken, just that when they are broken a paradigm shift is required to maintain self-consistency in a new system. Whole new fields of mathematics and genres of music arise when the rules are violated.

The invention of the computer introduced a third means of proof. Prior to the computer, when making an approximation, one could either take the mathematics approach and try to justify the approximation by putting bounds on the error terms analytically or take the physicist approach and compare the end result with actual data. Now one can numerically solve the more complicated expression and compare it directly to the approximation. I would say that I have spent the bulk of my career doing just that. Although I don’t think there is anything intrinsically wrong with proof by simulation, I do find it to be unsatisfying at times. Sometimes it is nice to know that something is true by proving it in the mathematical sense and other times it is gratifying to compare predictions directly with experiments. The most important thing is to always be aware of what mode of proof one is employing. It is not always clear-cut.
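As a small example of proof by simulation (a standard textbook case, chosen only for illustration): numerically integrate the full pendulum equation \ddot{\theta} = -(g/L) \sin\theta and compare its period to the small-angle approximation T = 2\pi\sqrt{L/g} to see where the approximation holds and where it breaks down.

import numpy as np

g, L = 9.81, 1.0
T_small_angle = 2 * np.pi * np.sqrt(L / g)

def pendulum_period(theta0, dt=1e-4):
    # Released from rest at theta0, the time to reach theta = 0 is a quarter period.
    theta, omega, t = theta0, 0.0, 0.0
    while theta > 0:
        omega += -(g / L) * np.sin(theta) * dt
        theta += omega * dt
        t += dt
    return 4 * t

for theta0 in (0.1, 0.5, 1.0, 2.0):        # initial angle in radians
    print(f"theta0={theta0}: numerical T={pendulum_period(theta0):.4f}, "
          f"small-angle T={T_small_angle:.4f}")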

Metaphysics as mathematics

January 9, 2012

One of the branches of western philosophy is metaphysics, which asks about the nature of being and the world.  It is the extension of what was once known as natural philosophy.  Modern science is empirical  natural philosophy.  Instead of trying to answer questions about how the world is the way it is by thinking about it, it makes hypotheses and tests them experimentally or observationally.  The late twentieth century was a time when physics, specifically string theory, drifted back towards metaphysics.  String theorists attempt to answer questions about our reality by constructing theories that are mostly grounded on mathematically aesthetic principles.   I have no real problem with string theory per se, except in its claim to be more “fundamental” than other branches of physics.  As I have argued before (e.g. here), there are fundamental concepts at all energy and length scales.

What I will argue here is that we have been misguided in trying to reunite metaphysics with science.  As I have argued before (e.g. here  and here), it is not even simple to define what is meant by “fundamental laws” or a “theory of everything”.  If our universe can be approximated arbitrarily accurately by a computable one (yes I know some of you disagree with this assertion), then what constitutes the underlying theory?  Is it the program that generates the universe?  Is it the most simple description (in which case it is not computable)?  Or is it something else?

While metaphysics as science is a dead-end for me, metaphysics as mathematics is ripe for very interesting insights. Instead of asking directly about “our” reality, we should be asking about hypothetical realities.  We should be doing philosophy of science and metaphysics on artificial worlds.  This would then be a controlled situation.  Instead of speculating about the underlying laws of our universe, we can simply specify a given set of properties in some hypothetical or simulated universe and probe the consequences.  We can do this at arbitrary levels as well –  universe,  multiverse, meta-multiverse and so forth.

I think, ironically, that doing such a thing would give more insight into our universe than what we are doing now. For example, if we started to investigate what types of simulated worlds would generate life, it may inform us more about how probable it is that life exists in our universe (as well as force us to come up with some quantitative definitions for life) than sending out space probes (e.g. see here). It could also give us an idea of how variable life can be. We seem to be stuck on looking for biochemical life. Well, maybe there are electromagnetic plasma life forms out there. If all it took to generate complex life-like objects was a nonlinear rule that didn’t blow up, then the answer to why our universe seems so well-tuned for us would be that any old rule would have worked, although it would give entirely different looking life forms. Also, if we thought more about how we could generate or detect any type of consciousness in a simulation, that may help us better understand the consciousness we have.
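The simplest version of this program is already familiar from cellular automata. Here is a sketch along those lines (Conway’s Game of Life from a random initial condition, with “life-like” crudely measured as activity that neither dies out nor blows up; the density, grid size, and run length are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(42)

def step(grid):
    # Count live neighbours with periodic boundaries, then apply the Life rule.
    neighbours = sum(np.roll(np.roll(grid, dx, 0), dy, 1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = (rng.random((100, 100)) < 0.3).astype(int)
for _ in range(500):
    grid = step(grid)

density = grid.mean()
print("fraction of live cells after 500 steps:", round(density, 3))
# A random soup under this rule typically settles to a sparse but nonzero density
# of persistent structures -- neither frozen nor exploding.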

The Scientific Worldview

December 30, 2011

An article that has been making the rounds on the twitter/blogosphere is The Science of Why We Don’t Believe Science by Chris Mooney in Mother Jones.  The article asks why it is that people cling to old beliefs even in the face of overwhelming data against them.  It argues that we basically use values to evaluate scientific facts.  Thus if the facts go against a value system that was built over a lifetime, we will find ways to rationalize away the facts.  This is particularly true for climate change and vaccines causing autism.  The scientific evidence is pretty strong that our climate is changing and vaccines don’t cause autism but adherents to these beliefs simply will not change their minds.

I mostly agree with the article but I would add that the idea that the scientific belief system is somehow more compelling than an alternative belief system may not be on as solid ground as scientists think. The concept of rationality and the scientific method was a great invention that has improved the human condition dramatically. However, I think one of the things that people trained in science forget is how much we trust the scientific process and other scientists. Often when I watch a science show like NOVA on paleontology, I am simply amazed that archeologists can determine that a piece of bone that looks like some random rock to me is a fragment of a finger bone of a primate that lived two million years ago. However, I trust them because they are scientists and I presume that they have received the same rigorous training and constant scrutiny I have received. I know that their conclusions are based on empirical evidence and a line of thought that I could follow if I took the time. But if I grew up in a tradition where a community elder prescribed truths from a pulpit, why would I take the word of a scientist over someone I know and trust? To someone not trained or exposed to science, it would just be the word of one person over another.

Thus, I think it would be prudent for scientists to realize that they possess a belief system that in many ways is no more self-evident than any other system.  Sure, our system has proven to be more useful over the years but ancient cultures managed to build massive architectural structures like the pyramids and invented agriculture without the help of modern science and engineering.   What science prizes is parsimony of explanation but at the risk of being called a post-modern relativist, this is mostly an aesthetic judgement.  The worldview that everything is the way it is because a creator insisted on it is as self-consistent as the scientific view.  The rational scientific worldview takes a lot of hard work and time to master.  Some (many?) people are just not willing to put in the effort it takes to learn it.   We may need to accept that a scientific worldview may not be palatable to everyone.  Understanding this truth may help us devise better strategies for conveying scientific ideas.

