RNA

I read an article recently about an anti-vaccination advocate exclaiming at a press conference with the governor of Florida that vaccines against SARS-CoV-2 “change your RNA!” This made me think that most people probably do not know much about RNA and that a little knowledge is a dangerous thing. Now ironically, contrary to what the newspapers say, this statement is kind of true, although in a trivial way. The Moderna and Pfizer vaccines insert a little piece of RNA into your cells (or rather, your cells ingest it), and that RNA gets translated into a SARS-CoV-2 spike protein that is expressed on the surface of the cells and thereby presented to the immune system. So, yes, these particular vaccines (although not all vaccines) have changed your RNA by adding new RNA to your cells. However, I don’t think this is what the alarmist was worried about. To claim that something changing is a bad thing implies that the something is fixed and stable to start with, which is profoundly untrue of RNA.

The central dogma of molecular biology is that genetic information flows from DNA to RNA to proteins. All of your genetic material starts as DNA organized into 23 pairs of chromosomes. Under various conditions, your cells transcribe this DNA into RNA, which is then translated into proteins. The biological machinery that does all of this is extremely complex and not fully understood; part of my research is aimed at understanding it better. What we do know is that transcription is an extremely noisy and imprecise process at all levels. The molecular steps that transcribe DNA to RNA are stochastic. High-resolution images of genes in the process of transcription show that transcription occurs in random bursts. RNA is very short-lived, lasting from minutes to at most a few days. There is machinery in the cell dedicated to degrading RNA. RNA is spliced: it is cut up into pieces and reassembled all the time, and this splicing happens more or less randomly. Less than 2% of your DNA codes for proteins, but virtually all of the DNA, including the noncoding parts, is continuously being transcribed into small RNA fragments. Your cell is constantly littered with random stray pieces of RNA, and only a small fraction of it gets translated into proteins. Your RNA changes. All. The. Time.
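
To make the bursting and turnover concrete, here is a minimal sketch (my own illustration, with made-up rate constants) of the standard two-state “telegraph” model of transcription, simulated with the Gillespie algorithm: the gene flips randomly between off and on, makes RNA only while on, and the RNA is continuously degraded.

```python
import random

# Two-state "telegraph" model of bursty transcription (illustrative sketch).
# A gene switches randomly between OFF and ON; RNA is made only when ON
# and is degraded continuously. All rate constants are made-up values.
K_ON = 0.05    # OFF -> ON switching rate (per minute)
K_OFF = 0.5    # ON -> OFF switching rate (so bursts are short)
K_TX = 10.0    # transcription rate while ON (RNA per minute)
K_DEG = 0.1    # degradation rate (per RNA molecule per minute)

def gillespie(t_end=600.0, seed=1):
    random.seed(seed)
    t, gene_on, rna = 0.0, False, 0
    trace = []
    while t < t_end:
        rates = [K_OFF if gene_on else K_ON,   # gene switching
                 K_TX if gene_on else 0.0,     # transcription
                 K_DEG * rna]                  # degradation
        total = sum(rates)
        t += random.expovariate(total)         # time to next reaction
        r = random.uniform(0.0, total)         # pick which reaction fires
        if r < rates[0]:
            gene_on = not gene_on
        elif r < rates[0] + rates[1]:
            rna += 1
        else:
            rna -= 1
        trace.append((t, rna))
    return trace

trace = gillespie()
print("final RNA count:", trace[-1][1])
```

Plotting the trace shows long quiet stretches punctuated by bursts of RNA that then decay away, which is qualitatively what the live-cell imaging shows.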

Now, a more plausible alarmist statement (although still untrue) would be to say that vaccines change your DNA, which could be a bad thing. Cancer, after all, involves DNA mutations. There are viruses (retroviruses) that insert a copy of their RNA code into the host’s DNA. HIV does this, for example. In fact, a substantial fraction of the human genome is composed of viral genetic material. Changing proteins can also be very bad. Prion diseases are basically due to misfolded proteins. So DNA changing is not good, protein changing is not good, but RNA changing? Nothing to see here.

New paper in eLife

I never thought this would ever be finished but it’s out. We hedge in the paper but my bet is that MYC is a facilitator of an accelerator essential for gene transcription.

Dissecting transcriptional amplification by MYC

eLife 2020;9:e52483 doi: 10.7554/eLife.52483

Zuqin Nie, Chunhua Guo, Subhendu K Das, Carson C Chow, Eric Batchelor, S Stoney Simons Jr, David Levens

Abstract

Supraphysiological MYC levels are oncogenic. Originally considered a typical transcription factor recruited to E-boxes (CACGTG), another theory posits MYC as a global amplifier increasing output at all active promoters. Both models rest on large-scale genome-wide ‘-omics’. Because the assumptions, statistical parameters, and model choices dictate the ‘-omic’ results, whether MYC is a general or specific transcription factor remains controversial. Therefore, an orthogonal series of experiments interrogated MYC’s effect on the expression of synthetic reporters. Dose-dependently, MYC increased output at minimal promoters with or without an E-box. Driving minimal promoters with exogenous (glucocorticoid receptor) or synthetic transcription factors made expression more MYC-responsive, effectively increasing MYC-amplifier gain. Mutations of conserved MYC-box regions I and II impaired amplification, whereas MYC-box III mutations delivered higher reporter output, indicating that MBIII limits over-amplification. Kinetic theory and experiments indicate that MYC activates at least two steps in the transcription cycle to explain the non-linear amplification of transcription that is essential for global, supraphysiological transcription in cancer.

New paper on Wilson-Cowan Model

I forgot to post that my fellow Yahya and I have recently published a review paper on the history and possible future of the Wilson-Cowan Model in the Journal of Neurophysiology tribute issue to Jack Cowan. Thanks to the patient editors for organizing this.

Before and beyond the Wilson–Cowan equations

Abstract

The Wilson–Cowan equations represent a landmark in the history of computational neuroscience. Along with the insights Wilson and Cowan offered for neuroscience, they crystallized an approach to modeling neural dynamics and brain function. Although their iconic equations are used in various guises today, the ideas that led to their formulation and the relationship to other approaches are not well known. Here, we give a little context to some of the biological and theoretical concepts that led to the Wilson–Cowan equations and discuss how to extend beyond them.


PDF here.

New paper in Molecular Psychiatry

Genomic analysis of diet composition finds novel loci and associations with health and lifestyle

S. Fleur W. Meddens, et al.

Abstract

We conducted genome-wide association studies (GWAS) of relative intake from the macronutrients fat, protein, carbohydrates, and sugar in over 235,000 individuals of European ancestries. We identified 21 unique, approximately independent lead SNPs. Fourteen lead SNPs are uniquely associated with one macronutrient at genome-wide significance (P < 5 × 10⁻⁸), while five of the 21 lead SNPs reach suggestive significance (P < 1 × 10⁻⁵) for at least one other macronutrient. While the phenotypes are genetically correlated, each phenotype carries a partially unique genetic architecture. Relative protein intake exhibits the strongest relationships with poor health, including positive genetic associations with obesity, type 2 diabetes, and heart disease (rg ≈ 0.15–0.5). In contrast, relative carbohydrate and sugar intake have negative genetic correlations with waist circumference, waist-hip ratio, and neighborhood deprivation (|rg| ≈ 0.1–0.3) and positive genetic correlations with physical activity (rg ≈ 0.1 and 0.2). Relative fat intake has no consistent pattern of genetic correlations with poor health but has a negative genetic correlation with educational attainment (rg ≈ −0.1). Although our analyses do not allow us to draw causal conclusions, we find no evidence of negative health consequences associated with relative carbohydrate, sugar, or fat intake. However, our results are consistent with the hypothesis that relative protein intake plays a role in the etiology of metabolic dysfunction.

Covid-19 modeling

I have officially thrown my hat into the ring and joined the throngs of would-be Covid-19 modelers to try to estimate (I deliberately do not use predict) the progression of the pandemic. I will pull rank and declare that I kind of do this type of thing for a living. What I (and the colleagues I have conscripted) am trying to do is estimate the fraction of the population that has the SARS-CoV-2 virus but has not been identified as such. This was the goal of the Lourenco et al. paper I wrote about previously that pulled me into this pit of despair. I argued that fitting to deaths alone, which is what they do, is insufficient for constraining the model, so it has no predictive ability. What I’m doing now is seeing whether it is possible to do the job if you fit not only deaths but also the number of cases reported and cases recovered. You then have a latent variable model where the observed variables are cases, cases that die, and cases that recover, and the latent variables are the infected who have not been identified and the susceptible population. Our plan is to test a wide range of models of varying detail and complexity and use Bayesian model comparison to see which does the better job. We will apply the analysis to global data. We’re getting numbers now but I’m not sure I trust them yet, so I’ll keep computing for a few more days. The full goal is to quantify the uncertainty in a principled way.
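
To give a flavor of what this looks like, below is a toy sketch, emphatically not our actual model: a compartmental ODE in which only cases, deaths, and recoveries are observed, fit by simple least squares to synthetic data with made-up parameters. In the real analysis we compare many such models using Bayesian model comparison and full posteriors rather than a point fit, but the basic structure is the same: fit the observed compartments and infer the latent ones.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

# Toy latent-variable epidemic model (NOT our actual model; all numbers
# are made up). Compartments: susceptible S, unidentified infected U
# (latent), identified active cases C, deaths D, recoveries R.
# Only C, D, and R are observed.
N = 1.0e6  # total population, treated as constant

def rhs(y, t, beta, gamma, delta, mu):
    S, U, C = y[0], y[1], y[2]
    dS = -beta * S * U / N                        # only unidentified transmit
    dU = beta * S * U / N - (delta + gamma) * U   # identified, or recover unseen
    dC = delta * U - (mu + gamma) * C             # cases die or recover
    dD = mu * C                                   # observed deaths
    dR = gamma * C                                # observed recoveries
    return [dS, dU, dC, dD, dR]

def simulate(params, t, y0):
    return odeint(rhs, y0, t, args=tuple(params))

# Synthetic "data" generated from known parameters.
t = np.linspace(0.0, 60.0, 61)
true_params = [0.3, 0.1, 0.05, 0.01]          # beta, gamma, delta, mu
y0 = [N - 100.0, 100.0, 0.0, 0.0, 0.0]
obs = simulate(true_params, t, y0)[:, 2:]     # observed C, D, R only

def residuals(params):
    return (simulate(params, t, y0)[:, 2:] - obs).ravel()

fit = least_squares(residuals, x0=[0.2, 0.2, 0.1, 0.05], bounds=(0.0, 1.0))
print("true:", true_params)
print("fit :", np.round(fit.x, 3))
# The latent trajectories U(t) and S(t) are then read off the fitted model.
```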

2020-04-06: edited to purge typos

The relevant Covid-19 fatality rate

Much has been written in the past few days about whether the case fatality rate (CFR) for Covid-19 is actually much lower than the original estimate of about 3 to 4%. Globally, the CFR is highly variable, ranging from half a percent in Germany to nearly 10% in Italy. The difference could be due to underlying differences in the populations or to the extent of testing. South Korea, which has done very wide-scale testing, has a CFR of around 1.5%. However, whether the CFR is high or low is not the important parameter. The number we must determine is the population fatality rate. Even if most people who become infected with SARS-CoV-2 have mild or even no symptoms, so that the CFR is low, if most people are susceptible and the entire world population gets the virus, then even a fatality rate of a tenth of a percent applied to 7 billion people is 7 million deaths, which is still a very large number.

What we don’t know yet is how much of the population is susceptible. Data from the cruise ship Diamond Princess showed that about 20% of the passengers and crew became infected, but some social distancing measures were in place after the first case was detected, so this does not necessarily imply that 80% of the world population is innately immune. A recent paper from Oxford argues that about half of the UK population may already have been infected and is no longer susceptible. However, I redid their analysis and find that widespread infection, although possible, is not very likely (details to follow; famous last words), but this can and should be verified by testing for antibodies in the population. The bottom line is that we need to test, test, and test, both for the virus and for antibodies, before we will know how bad this will be.

New paper in Cell

Cell. 2018 Dec 10. pii: S0092-8674(18)31518-6. doi: 10.1016/j.cell.2018.11.026. [Epub ahead of print]

Intrinsic Dynamics of a Human Gene Reveal the Basis of Expression Heterogeneity.

Abstract

Transcriptional regulation in metazoans occurs through long-range genomic contacts between enhancers and promoters, and most genes are transcribed in episodic “bursts” of RNA synthesis. To understand the relationship between these two phenomena and the dynamic regulation of genes in response to upstream signals, we describe the use of live-cell RNA imaging coupled with Hi-C measurements and dissect the endogenous regulation of the estrogen-responsive TFF1 gene. Although TFF1 is highly induced, we observe short active periods and variable inactive periods ranging from minutes to days. The heterogeneity in inactive times gives rise to the widely observed “noise” in human gene expression and explains the distribution of protein levels in human tissue. We derive a mathematical model of regulation that relates transcription, chromosome structure, and the cell’s ability to sense changes in estrogen and predicts that hypervariability is largely dynamic and does not reflect a stable biological state.

KEYWORDS:

RNA; chromosome; estrogen; fluorescence; heterogeneity; imaging; live-cell; single-molecule; steroid; transcription

PMID: 30554876


DOI: 10.1016/j.cell.2018.11.026

Mosquito experiment concluded

It’s hard to see from the photo, but when I checked my bucket after a week away, there were definitely a few mosquito larvae swimming around. There was also an impressive biofilm on the bottom of the bucket. It took less than a month for mosquitoes to breed in a newly formed pool of stagnant water. My son also noticed that a nearby flower pot with water only a few centimeters deep also had larvae. So the claim that mosquitoes will breed in tiny amounts of stagnant water is true.

Mosquito update

It’s been about two weeks since I first set out my bucket, although I had to move it to a less obtrusive location. Still no signs of mosquito larvae, although judging from my bite frequency even with mosquito repellent, mosquito activity is still high in my garden. I see the occasional insect trapped in the surface (they are not really floating, since at their size water is effectively highly viscous), and there is a nice collection of plant debris at the bottom. The water level seems a little bit higher. It has rained at least once every two days since my first post, although it has also been very hot, so the input seems mostly balanced by the evaporative loss. I’m starting to believe that mosquitoes have preferred breeding grounds that they perpetually use and only exploit new locales when necessary.

Audio of SIAM talk

Here is an audio recording synchronized to slides of my talk a week and a half ago in Pittsburgh. I noticed some places where I said the wrong thing, such as conflating a neuron with a synapse. I also did not explain the learning part very well. I should point out that we are not applying a control to the network. We train a set of weights so that, given some initial condition, the neuron firing rates follow a specified target pattern. I also made a joke that implied that the Recursive Least Squares algorithm dates to 1972. That is not correct; it goes back much further than that. I also take a pot shot at physicists. It was meant as a joke, of course, and describes many of my own papers.
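
For the curious, here is a rough sketch of the kind of training I mean, in the spirit of FORCE learning with recursive least squares. This is a toy illustration with made-up parameters, not the actual code or network from the talk: the readout weights of a random recurrent rate network are updated online so that the output follows a target pattern that is fed back into the network.

```python
import numpy as np

# FORCE-style sketch: train the readout of a random recurrent rate
# network with recursive least squares (RLS) so its output follows a
# target pattern. All parameters are illustrative.
rng = np.random.default_rng(0)
N, g, dt, alpha = 300, 1.5, 0.1, 1.0
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random recurrent weights
w_fb = rng.uniform(-1, 1, N)                       # fixed feedback weights
w = np.zeros(N)                                    # readout weights (trained)
P = np.eye(N) / alpha                              # RLS inverse correlation

x = 0.5 * rng.standard_normal(N)                   # network state
T = np.arange(0, 100, dt)
target = np.sin(2 * np.pi * T / 10)                # target output pattern

for i, t in enumerate(T):
    r = np.tanh(x)                                 # firing rates
    z = w @ r                                      # network output
    x += dt * (-x + J @ r + w_fb * z)              # rate dynamics with feedback
    if i % 2 == 0:                                 # RLS update step
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)                       # update inverse correlation
        w -= (z - target[i]) * k                   # reduce output error

# After training, the output z should roughly track the sine wave.
print("final error:", abs(w @ np.tanh(x) - target[-1]))
```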

Arsenic and Selenium

You should listen to this podcast from Quirks and Quarks about how University of Calgary scientist Judit Smits is trying to use selenium-rich lentils from Saskatchewan, Canada to treat arsenic poisoning in Bangladesh. Well water in parts of rural Bangladesh has high levels of natural arsenic, and this is a major health problem. Professor Smits, who is actually in the department of veterinary medicine, has done work using arsenic to treat selenium poisoning in animals. It turns out that arsenic and selenium, both of which can be toxic in high doses, effectively neutralize each other: each seems to increase excretion of the other into the bile. So she hypothesized that selenium might counter arsenic poisoning, but the interaction is nontrivial, so it is not a certainty that it would work. Dr. Smits organized a study to transport ten tons of lentils from Canada to Bangladesh this past summer to test the hypothesis, and you can hear about the trials and tribulations of getting the study done. The results are not yet in, but I think this is a perfect example of how cleverness combined with determination can make a real difference. This study is funded entirely from Canadian sources, but it sounds like something the Gates and Clinton foundations could be interested in.

2016-09-26: Corrected a typo; changed Saskatchewan to Bangladesh.

Confusion about consciousness

I have read two essays in the past month on the brain and consciousness and I think both point to examples of why consciousness per se and the “problem of consciousness” are both so confusing and hard to understand. The first article is by philosopher Galen Strawson in The Stone series of the New York Times. Strawson takes issue with the supposed conventional wisdom that consciousness is extremely mysterious and cannot be easily reconciled with materialism. He argues that the problem isn’t about consciousness, which is certainly real, but rather matter, for which we have no “true” understanding. We know what consciousness is since that is all we experience but physics can only explain how matter behaves. We have no grasp whatsoever of the essence of matter. Hence, it is not clear that consciousness is at odds with matter since we don’t understand matter.

I think Strawson’s argument is mostly sound, but he misses the crucial open question of consciousness. It is true that we don’t have an understanding of the true essence of matter and we probably never will, but that is not why consciousness is mysterious. The problem is that we do not know whether the rules that govern matter, be they classical mechanics, quantum mechanics, statistical mechanics, or general relativity, could give rise to a subjective conscious experience. Our understanding of the world is good enough for us to build bridges, cars, and computers, and to launch a spacecraft 4 billion kilometers to Pluto, take photos, and send them back. We can predict the weather with great accuracy up to a week out. We can treat infectious diseases and repair the heart. We can breed super chickens and grow copious amounts of corn. However, we have no idea how these rules can explain consciousness and, more importantly, we do not know whether these rules are sufficient to understand consciousness or whether we need a different set of rules or a different reality or whatever. One of the biggest lessons of the twentieth century is that knowing the rules does not mean you can predict the outcome of the rules. Even setting aside the computability and decidability results of Turing and Gödel, it is still not clear how to go from the microscopic dynamics of molecules to the Navier-Stokes equation for macroscopic fluid flow, or how to get from Navier-Stokes to the turbulent flow of a river. Likewise, it is hard to understand how the liver works, much less the brain, starting from molecules or even cells. Thus, it is possible that consciousness is an emergent phenomenon of the rules that we already know, like wetness or a hurricane. We simply do not know and are not even close to knowing. This is the hard problem of consciousness.

The second article is by psychologist Robert Epstein in the online magazine Aeon. In this article, Epstein rails against the use of computers and information processing as a metaphor for how the brain works. He argues that this type of restricted thinking is why we can’t seem to make any progress understanding the brain or consciousness. Unfortunately, Epstein seems to completely misunderstand what computers are and what information processing means.

Firstly, a computation does not necessarily imply a symbolic processing machine like a von Neumann computer with a central processor, memory, inputs, and outputs. A computation in the Turing sense is simply about finding or constructing a desired function from one countable set to another. Now, the brain certainly performs computations; any time we identify an object in an image or have a conversation, the brain is performing a computation. You can couch it in whatever language you like, but it is a computation. Additionally, the whole point of a universal computer is that it can perform any computation. Computations are not tied to implementations. I can always simulate whatever (computable) system you want on a computer. Neural networks and deep learning are not symbolic computations per se, but they can be implemented on a von Neumann computer. We may not know what the brain is doing, but it certainly involves computation of some sort. Anything that can sense the environment and react is making a computation. Bacteria can compute. Molecules compute. However, that is not to say that everything a brain does can be encapsulated by Turing universal computation. For example, Penrose believes that the brain is not computable, although, as I argued in a previous post, his argument is not very convincing. It is possible that consciousness is beyond the realm of computation and thus would entail very different physics. However, we have yet to find an example of a real physical phenomenon that is not computable.

Secondly, the brain processes information by definition. Information, in both the Shannon and Fisher senses, is a measure of uncertainty reduction. For example, in order to meet someone for coffee you need at least two pieces of information: where and when. Before you received that information, your uncertainty was huge, since there were so many possible places and times the meeting could take place. After receiving the information, your uncertainty was eliminated. Just knowing it will be on Thursday is already a big decrease in uncertainty and an increase in information. Much of the brain’s job, at least for cognition, is uncertainty reduction. When you are searching for your friend in the crowded cafe, you are eliminating possibilities and reducing uncertainty. The big mistake that Epstein makes is conflating an example with the phenomenon. Your brain does not need to function like your smartphone to perform computations or information processing. Computation and information theory are two of the most important mathematical tools we have for analyzing cognition.
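
To put numbers on the coffee example, here is a tiny sketch (the counts of possible days, times, and cafes are of course made up) computing the information gained as the drop in entropy of a uniform distribution.

```python
import math

# Toy illustration of information as uncertainty reduction.
# Before: the meeting could be any of 7 days x 12 hours x 20 cafes,
# all equally likely. The entropy of a uniform distribution over
# n options is log2(n) bits.
n_before = 7 * 12 * 20
h_before = math.log2(n_before)                      # ~10.7 bits of uncertainty

# "It's on Thursday" leaves 12 x 20 equally likely options.
info_from_day = h_before - math.log2(12 * 20)       # log2(7) ~ 2.8 bits gained

# "Thursday, 3 pm, at the cafe on Main" leaves one option: zero entropy.
info_total = h_before - math.log2(1)

print(f"learning the day   : {info_from_day:.2f} bits")
print(f"full where-and-when: {info_total:.2f} bits")
```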

New paper in PLoS Comp Bio

Shashaank Vattikuti , Phyllis Thangaraj, Hua W. Xie, Stephen J. Gotts, Alex Martin, Carson C. Chow. Canonical Cortical Circuit Model Explains Rivalry, Intermittent Rivalry, and Rivalry Memory. PLoS Computational Biology (2016).

Abstract

It has been shown that the same canonical cortical circuit model with mutual inhibition and a fatigue process can explain perceptual rivalry and other neurophysiological responses to a range of static stimuli. However, it has been proposed that this model cannot explain responses to dynamic inputs such as found in intermittent rivalry and rivalry memory, where maintenance of a percept when the stimulus is absent is required. This challenges the universality of the basic canonical cortical circuit. Here, we show that by including an overlooked realistic small nonspecific background neural activity, the same basic model can reproduce intermittent rivalry and rivalry memory without compromising static rivalry and other cortical phenomena. The background activity induces a mutual-inhibition mechanism for short-term memory, which is robust to noise and where fine-tuning of recurrent excitation or inclusion of sub-threshold currents or synaptic facilitation is unnecessary. We prove existence conditions for the mechanism and show that it can explain experimental results from the quartet apparent motion illusion, which is a prototypical intermittent rivalry stimulus.

Author Summary

When the brain is presented with an ambiguous stimulus like the Necker cube or what is known as the quartet illusion, the perception will alternate or rival between the possible interpretations. There are neurons in the brain whose activity is correlated with the perception and not the stimulus. Hence, perceptual rivalry provides a unique probe of cortical function and could possibly serve as a diagnostic tool for cognitive disorders such as autism. A mathematical model based on the known biology of the brain has been developed to account for perceptual rivalry when the stimulus is static. The basic model also accounts for other neural responses to stimuli that do not elicit rivalry. However, these models cannot explain illusions where the stimulus is intermittently switched on and off and the same perception returns after an off period because there is no built-in mechanism to hold the memory. Here, we show that the inclusion of experimentally observed low-level background neural activity is sufficient to explain rivalry for static inputs, and rivalry for intermittent inputs. We validate the model with new experiments.


This paper is the latest in a continuing series of papers outlining how a canonical cortical circuit of excitatory and inhibitory cells can explain psychophysical and electrophysiological data on perceptual and cortical dynamics under a wide range of stimuli and conditions. I’ve summarized some of the work before (e.g. see here). In this particular paper, we show how the same circuit previously shown to explain winner-take-all behavior, normalization, and oscillations at various time scales can also possess memory in the absence of input. Previous work has shown that if you have a circuit with effective mutual inhibition between two pools representing different percepts, and include some type of fatigue process such as synaptic depression or spike frequency adaptation, then the circuit exhibits various dynamics depending on the parameters and input conditions. If the inhibition strength is relatively low and the two pools receive equal inputs, then the model has a symmetric fixed point where both pools are equally active. As the inhibition strength (or input strength) increases, there can be a bifurcation to oscillations between the two pools with a frequency that depends on the strengths of inhibition, recurrent excitation, and input, and on the time constant of the fatigue process. A further increase in inhibition leads to a bifurcation to a winner-take-all (WTA) state where one of the pools dominates the other. However, the same circuit would be expected not to possess “rivalry memory”, where the same percept returns after the stimulus is completely removed for a duration that is long compared to the average oscillation period (dominance time). The reason is that during rivalry, the dominant pool is weakened while the suppressed pool is strengthened by the fatigue process. Thus, when the stimulus is removed and returned, the suppressed pool would be expected to win the competition and become dominant. This reasoning had led people, including myself, to believe that rivalry memory could not be explained by this same model.

However, one thing Shashaank observed, and that I hadn’t really noticed before, was that the winner-take-all state can persist for arbitrarily low input strength. We prove a little theorem in the paper showing that if the gain function (or FI curve) is concave (i.e. does not bend up), then the winner-take-all state will persist for arbitrarily low input if the inhibition is strong enough. Most importantly, the input does not need to be tuned and could be provided by the natural background activity known to exist in the brain. Even zero-mean noise is sufficient to maintain the WTA state. This low-activity WTA state can then serve as a memory, since whatever was active during a state with strong input can remain active when the input is turned off and the neurons just receive low-level background activity. It is thus a memory maintained purely by mutual inhibition. We dubbed this “topological memory” because it is like a kink in the carpet that never disappears and persists over a wide range of parameter values and input strengths. Although we only consider rivalry memory in this paper, the mechanism could also apply in other contexts such as working memory. In this paper, we also focus on a specific rivalry illusion called the quartet illusion, which makes the model slightly more complicated, but we show how it naturally reduces to a two-pool model. We are currently finishing a paper quantifying precisely how excitatory and inhibitory strengths affect rivalry and other cortical phenomena, so watch this space. We have also submitted an abstract to the annual Neuroscience meeting demonstrating how you can get WTA and rivalry in a balanced-state network.
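
For readers who want to play with the mechanism, here is a minimal sketch of the two-pool model in this spirit. The parameters are illustrative choices of mine, not the values in the paper, but the gain function is concave as the theorem requires: with strong equal input the pools alternate (rivalry), while with only weak background input the winner stays up (the memory).

```python
import numpy as np

# Two-pool mutual-inhibition model with a slow fatigue process
# (illustrative parameters, not the paper's values).
def f(x):
    x = np.maximum(x, 0.0)
    return x / (x + 0.2)                        # concave, saturating gain

def run(inp, u0, T=1000.0, dt=0.05, beta=1.4, g=1.0, tau_a=50.0):
    u, a = np.array(u0), np.zeros(2)
    dominant = []
    for _ in range(int(T / dt)):
        drive = inp - beta * u[::-1] - g * a    # cross (mutual) inhibition
        u = u + dt * (-u + f(drive))
        a = a + (dt / tau_a) * (u - a)          # slow fatigue of active pool
        dominant.append(int(u[1] > u[0]))
    return u, np.array(dominant)

# Strong equal input: dominance alternates between the pools (rivalry).
u, dom = run(inp=1.0, u0=[0.3, 0.1])
print("dominance switches, strong input:", int(np.abs(np.diff(dom)).sum()))

# Weak background input only: the active pool stays up and the other stays
# suppressed, a winner-take-all state that serves as the memory.
u, dom = run(inp=0.02, u0=[0.05, 0.0])
print("dominance switches, weak input  :", int(np.abs(np.diff(dom)).sum()),
      "| final rates:", np.round(u, 3))
```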


Update: link to paper is fixed.

Two new papers

Pradhan MA, Blackford JA Jr, Devaiah BN, Thompson PS, Chow CC, Singer DS, Simons SS Jr. Kinetically Defined Mechanisms and Positions of Action of Two New Modulators of Glucocorticoid Receptor-regulated Gene Induction. J Biol Chem. 2016 Jan 1;291(1):342-54. doi: 10.1074/jbc.M115.683722. Epub 2015 Oct 26.

Abstract: Most of the steps in, and many of the factors contributing to, glucocorticoid receptor (GR)-regulated gene induction are currently unknown. A competition assay, based on a validated chemical kinetic model of steroid hormone action, is now used to identify two new factors (BRD4 and negative elongation factor (NELF)-E) and to define their sites and mechanisms of action. BRD4 is a kinase involved in numerous initial steps of gene induction. Consistent with its complicated biochemistry, BRD4 is shown to alter both the maximal activity (Amax) and the steroid concentration required for half-maximal induction (EC50) of GR-mediated gene expression by acting at a minimum of three different kinetically defined steps. The action at two of these steps is dependent on BRD4 concentration, whereas the third step requires the association of BRD4 with P-TEFb. BRD4 is also found to bind to NELF-E, a component of the NELF complex. Unexpectedly, NELF-E modifies GR induction in a manner that is independent of the NELF complex. Several of the kinetically defined steps of BRD4 in this study are proposed to be related to its known biochemical actions. However, novel actions of BRD4 and of NELF-E in GR-controlled gene induction have been uncovered. The model-based competition assay is also unique in being able to order, for the first time, the sites of action of the various reaction components: GR < Cdk9 < BRD4 ≤ induced gene < NELF-E. This ability to order factor actions will assist efforts to reduce the side effects of steroid treatments.

Li Y, Chow CC, Courville AB, Sumner AE, Periwal V. Modeling glucose and free fatty acid kinetics in glucose and meal tolerance test. Theor Biol Med Model. 2016 Mar 2;13:8. doi: 10.1186/s12976-016-0036-3.

Abstract:
BACKGROUND:
Quantitative evaluation of insulin regulation on plasma glucose and free fatty acid (FFA) in response to external glucose challenge is clinically important to assess the development of insulin resistance (World J Diabetes 1:36-47, 2010). Mathematical minimal models (MMs) based on insulin modified frequently-sampled intravenous glucose tolerance tests (IM-FSIGT) are widely applied to ascertain an insulin sensitivity index (IEEE Rev Biomed Eng 2:54-96, 2009). Furthermore, it is important to investigate insulin regulation on glucose and FFA in postprandial state as a normal physiological condition. A simple way to calculate the appearance rate (Ra) of glucose and FFA would be especially helpful to evaluate glucose and FFA kinetics for clinical applications.
METHODS:
A new MM is developed to simulate the insulin modulation of plasma glucose and FFA, combining IM-FSIGT with a mixed meal tolerance test (MT). A novel simple functional form for the appearance rate (Ra) of glucose or FFA in the MT is developed. Model results are compared with two other models for data obtained from 28 non-diabetic women (13 African American, 15 white).
RESULTS:
The new functional form for Ra of glucose is an acceptable empirical approximation to the experimental Ra for a subset of individuals. When both glucose and FFA are included in FSIGT and MT, the new model is preferred using the Bayes Information Criterion (BIC).
CONCLUSIONS:
Model simulations show that the new MM allows consistent application to both IM-FSIGT and MT data, balancing model complexity and data fitting. While the appearance of glucose in the circulation has an important effect on FFA kinetics in MT, the rate of appearance of FFA can be neglected for the time-period modeled.

Chomsky on The Philosopher’s Zone

Listen to MIT Linguistics Professor Noam Chomsky on ABC’s radio show The Philosopher’s Zone (link here).  Even at 87, he is still as razor sharp as ever. I’ve always been an admirer of Chomsky although I think I now mostly disagree with his ideas about language. I do remember being completely mesmerized by the few talks I attended when I was a graduate student.

Chomsky is the father of modern linguistics. He turned it into a subfield of computer science and mathematics. People still use Chomsky Normal Form and the Chomsky Hierarchy in computer science. Chomsky believes that the language ability is universal among all humans and is genetically encoded. He comes to this conclusion because in his mathematical analysis of language he found what he called “deep structures”, which are embedded rules that we are consciously unaware of when we use language. He was adamantly opposed to the idea that language could be acquired via a probabilistic machine learning algorithm. His most famous example is that we know that the sentence “Colorless green ideas sleep furiously” is grammatical but nonsensical, while the sentence “Furiously sleep ideas green colorless” is ungrammatical. Since neither of these sentences had ever been spoken or written, he surmised that no statistical algorithm could ever learn the difference between the two. I think it is pretty clear now that Chomsky was incorrect and machine learning can learn to parse language and classify these sentences. There has also been fieldwork that seems to indicate that there do exist languages in the Amazon that are qualitatively different from the universal set. It seems that the brain, rather than having an innate ability for grammar and language, may have an innate ability to detect and learn deep structure with a very small amount of data.
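
As an aside on why Chomsky Normal Form is still a staple of computer science: once a context-free grammar is in CNF, the CYK dynamic-programming algorithm decides membership in cubic time. Here is a toy sketch with a completely made-up five-word grammar that accepts Chomsky’s famous sentence and rejects its reversal; it is of course a symbolic parser, not the machine learning approach mentioned above.

```python
# Toy CYK parser over a hypothetical grammar in Chomsky Normal Form:
# every rule is either A -> B C or A -> terminal.
LEXICON = {
    "colorless": {"ADJ"}, "green": {"ADJ"}, "ideas": {"NP"},
    "sleep": {"V"}, "furiously": {"ADV"},
}
RULES = [  # A -> B C
    ("NP", "ADJ", "NP"),   # adjectives stack onto a noun phrase
    ("VP", "V", "ADV"),
    ("S", "NP", "VP"),
]

def cyk(words):
    n = len(words)
    # table[i][j] = set of nonterminals that derive words[i..j] inclusive
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(LEXICON.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                  # try every split point
                for a, b, c in RULES:
                    if b in table[i][k] and c in table[k + 1][j]:
                        table[i][j].add(a)
    return "S" in table[0][n - 1]

print(cyk("colorless green ideas sleep furiously".split()))   # True
print(cyk("furiously sleep ideas green colorless".split()))   # False
```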

The host Joe Gelonesi, who has filled in admirably for the sadly departed Alan Saunders, asks Chomsky about the hard problem of consciousness near the end of the program. Chomsky, in his typical fashion of invoking 17th and 18th century philosophy, dismisses it by claiming that science itself, and physics in particular, long ago dispensed with the equivalent notion. He says that the moment Newton wrote down the equation for gravitational force, which requires action at a distance, physics stopped being about making the universe intelligible and became about creating predictive theories. He thus believes that we will eventually be able to create a theory of consciousness, although it may not be intelligible to humans. He also seems to subscribe to panpsychism, where consciousness is a property of matter like mass, an idea championed by Christof Koch and Giulio Tononi. However, as I pointed out before, panpsychism is dualism. If it does exist, then it exists apart from the way we currently describe the universe. Lately, I’ve come to believe and accept the fact that consciousness is an epiphenomenon with no causal consequence in the universe. I must credit David Chalmers (e.g. see previous post) for making it clear that this is the only alternative to dualism. We are no more nor less than automata caroming through the universe, with the ability to spectate a few tens of milliseconds after the fact.

Addendum: As pointed out in the comments, there are monistic theories, such as the idealism espoused by Bishop Berkeley, where only ideas are real. My point that epiphenomenalism is the only alternative to dualism holds only if one adheres to materialism.


In praise of MSG

When I go to a Chinese restaurant, I am always disappointed when the menu says “No MSG.” I used to be on the “MSG is bad” bandwagon too, until I learned some neuroscience. Glutamate is the primary excitatory neurotransmitter in the brain, and now I’m always hoping to get extra glutamate into my system and brain. I don’t really know if MSG is going to supercharge my brain, but hey, the placebo effect is real. Research has never found any harmful effects of MSG; see this article for details. This is also an interesting case where other people’s beliefs directly affect me: because the public is hugely biased against MSG, I am deprived of it at my local Chinese takeout place. I don’t know why you might get a headache after you eat at a Chinese restaurant. It might be from drinking too much tea or from the salt, but it’s probably not the MSG.