Confusion about consciousness

I have read two essays in the past month on the brain and consciousness and I think both point to examples of why consciousness per se and the “problem of consciousness” are both so confusing and hard to understand. The first article is by philosopher Galen Strawson in The Stone series of the New York Times. Strawson takes issue with the supposed conventional wisdom that consciousness is extremely mysterious and cannot be easily reconciled with materialism. He argues that the problem isn’t about consciousness, which is certainly real, but rather matter, for which we have no “true” understanding. We know what consciousness is since that is all we experience but physics can only explain how matter behaves. We have no grasp whatsoever of the essence of matter. Hence, it is not clear that consciousness is at odds with matter since we don’t understand matter.

I think Strawson’s argument is mostly sound, but he misses the crucial open question of consciousness. It is true that we don’t have an understanding of the true essence of matter, and we probably never will, but that is not why consciousness is mysterious. The problem is that we do not know whether the rules that govern matter, be they classical mechanics, quantum mechanics, statistical mechanics, or general relativity, could give rise to a subjective conscious experience. Our understanding of the world is good enough for us to build bridges, cars, and computers, and to launch a spacecraft 4 billion kilometers to Pluto, take photos, and send them back. We can predict the weather with great accuracy for up to a week. We can treat infectious diseases and repair the heart. We can breed super chickens and grow copious amounts of corn. However, we have no idea how these rules can explain consciousness, and more importantly we do not know whether these rules are sufficient to understand consciousness, or whether we need a different set of rules, or a different conception of reality altogether. One of the biggest lessons of the twentieth century is that knowing the rules does not mean you can predict the outcome of the rules. Even setting aside the computability and decidability results of Turing and Gödel, it is still not clear how to go from the microscopic dynamics of molecules to the Navier-Stokes equations for macroscopic fluid flow, or how to get from Navier-Stokes to the turbulent flow of a river. Likewise, it is hard to understand how the liver works, much less the brain, starting from molecules or even cells. Thus, it is possible that consciousness is an emergent phenomenon of the rules that we already know, like wetness or a hurricane. We simply do not know and are not even close to knowing. This is the hard problem of consciousness.
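The point that knowing the rules does not mean you can predict their outcome can be made concrete with a toy example (my illustration, not from the consciousness literature): Wolfram’s elementary cellular automaton rule 30. The update rule fits in one line, yet the pattern it generates is effectively unpredictable without simply running it.

```python
# Rule 30: a one-line update rule whose long-run behavior resists prediction.
RULE = 30

def step(cells):
    """One synchronous update of a row of 0/1 cells with fixed-0 boundaries."""
    padded = [0] + cells + [0]
    return [
        # look up the new state in the rule's bit pattern, indexed by the
        # 3-cell neighborhood (left*4 + center*2 + right)
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

row = [0] * 20 + [1] + [0] * 20   # start from a single live cell
for _ in range(10):
    row = step(row)               # the only way to know the outcome is to run it
```

The only known way to learn what such rules produce is to execute them, which is exactly the predicament with the known laws of matter and consciousness.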

The second article is by psychologist Robert Epstein in the online magazine Aeon. In this article, Epstein rails against the use of computers and information processing as a metaphor for how the brain works. He argues that this type of restricted thinking is why we can’t seem to make any progress understanding the brain or consciousness. Unfortunately, Epstein seems to completely misunderstand what computers are and what information processing means.

Firstly, a computation does not necessarily imply a symbolic processing machine like a von Neumann computer with a central processor, memory, inputs, and outputs. A computation in the Turing sense is simply about finding or constructing a desired function from one countable set to another. Now, the brain certainly performs computations; any time we identify an object in an image or have a conversation, the brain is performing a computation. You can couch it in whatever language you like, but it is a computation. Additionally, the whole point of a universal computer is that it can perform any computation. Computations are not tied to implementations. I can always simulate whatever (computable) system you want on a computer. Neural networks and deep learning are not symbolic computations per se, but they can be implemented on a von Neumann computer. We may not know what the brain is doing, but it certainly involves computation of some sort. Anything that can sense the environment and react is making a computation. Bacteria can compute. Molecules compute. However, that is not to say that everything a brain does can be encapsulated by Turing universal computation. For example, Penrose believes that the brain is not computable, although, as I argued in a previous post, his argument is not very convincing. It is possible that consciousness is beyond the realm of computation and thus would entail very different physics. However, we have yet to find an example of a real physical phenomenon that is not computable.
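The independence of a computation from its implementation can be shown with a toy sketch (mine; the function and the hand-picked weights are arbitrary illustrations): the XOR function computed once as a symbolic rule and once as a tiny fixed-weight neural network. The abstract computation is identical; only the substrate differs.

```python
# The same computation, two implementations.

def xor_symbolic(a: int, b: int) -> int:
    """XOR as an explicit symbolic rule."""
    return (a + b) % 2

def xor_network(a: int, b: int) -> int:
    """XOR as a two-layer threshold network with hand-picked weights."""
    step = lambda x: 1 if x > 0 else 0
    h1 = step(a + b - 0.5)        # hidden unit 1: fires if a OR b
    h2 = step(-a - b + 1.5)       # hidden unit 2: fires unless a AND b
    return step(h1 + h2 - 1.5)    # output: fires if both hidden units fire

# Both realize the same abstract function on all inputs.
for a in (0, 1):
    for b in (0, 1):
        assert xor_symbolic(a, b) == xor_network(a, b)
```

Either version can in turn be simulated on a von Neumann machine, which is the sense in which computations are not tied to implementations.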

Secondly, the brain processes information by definition. Information in both the Shannon and Fisher senses is a measure of uncertainty reduction. For example, in order to meet someone for coffee you need at least two pieces of information: where and when. Before you received that information your uncertainty was huge, since there were so many possible places and times the meeting could take place. After receiving the information your uncertainty was eliminated. Just knowing it will be on Thursday is already a big decrease in uncertainty and an increase in information. Much of the brain’s job, at least for cognition, is uncertainty reduction. When you are searching for your friend in the crowded cafe, you are eliminating possibilities and reducing uncertainty. The big mistake that Epstein makes is conflating an example with the phenomenon. Your brain does not need to function like your smartphone to perform computations or information processing. Computation and information theory are two of the most important mathematical tools we have for analyzing cognition.
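The coffee example can be made quantitative with Shannon entropy. In this sketch the numbers (8 cafes, 7 possible days) are made up for illustration, and all options are assumed equally likely:

```python
import math

def entropy_uniform(n: int) -> float:
    """Shannon entropy in bits of a uniform distribution over n options."""
    return math.log2(n)

places, days = 8, 7                       # hypothetical numbers of options
before = entropy_uniform(places * days)   # uncertainty over all (place, day) pairs
after_day = entropy_uniform(places)       # day known: only the place is uncertain
gained = before - after_day               # log2(7) ≈ 2.81 bits gained from "Thursday"
```

Learning the day removes log2(7) bits of uncertainty; learning the place removes the remaining 3 bits, at which point the uncertainty is eliminated.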


18 thoughts on “Confusion about consciousness”

  1. Carson,

    Assuming that your statement is true:

    “We simply do not know and are not even close to knowing. This is the hard problem of consciousness.”

    Then why do you spend so much time pondering the origin of consciousness?
    Just wondering.


  2. @Tom Because it is so interesting and to quote JFK: “We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard;”


  3. It’s quite interesting who gets employed. The Epstein piece seems to have approximately zero content, though some history (the hydraulic metaphor, etc.). This reminds me of political correctness: he doesn’t want us to use the term “information processor” because that’s a computer metaphor, and humans made computers, while evolution made humans. He cites Eric Kandel, whom I once saw speak (he was touting in part his new company, possibly on designer genes and drugs). One thing I do agree with is the conclusion: the Human Brain Project is way over-hyped.

    I personally tend to think most or even all natural processes are non-computable, though this gets technical. Nature appears to operate on the real numbers, the continuum. Many such processes are computable, but some are not; Penrose gives a few examples in his book, e.g. Pour-El and Richards’ non-computable solutions of differential equations given certain initial conditions. (Some people seem to be using these recently to model quantum theory, a sort of modified hidden-variable theory. It’s hard to tell if it makes sense, but G. ’t Hooft spoke at their conference.)

    It’s possible all such non-computable processes can be transformed into a computable problem (basically using the Löwenheim-Skolem theorem or its modern variants, via hyper- or super-Turing computation; see also “computation over the reals” by Blum, Shub, and Smale). Or, even easier, non-predicative or second-order logic: you just add a symbol which stands for an infinite set. Then it’s countable, and computable.


  4. P.S. Your second-to-last sentence, “your brain does not need to function like your smartphone to perform computations,” is spot on. A lot of people seem to hold this view (including, I’m fairly sure, John Searle). If I write down an equation like Newton’s law (assuming I can, which is getting questionable), that doesn’t mean a particle has to act like that equation: I’ve seen particles follow Newton’s law, but I’ve never seen Newton’s law in nature. Similarly, I’ve never seen a person compute a utility function in a store; I don’t see any Cobb-Douglas function and budget constraint, nor optimization and Lagrange multipliers. Could be I’m hard of hearing or need eyeglasses.

    It might be an easier and better world if we did live in Tegmark’s mathematical universe and Cantor’s paradise: just equations, no rain or snow or traffic accidents. (Reminds me of a scene in the film “A Beautiful Mind” about John Nash; a friend showed me this since I don’t go to movies. One does wonder a little whether his problem was at all connected to those experiments they were doing on people back in the 50s with LSD, etc. I think Timothy Leary and also the “Unabomber” had some interactions with those psychological experiments; Zimbardo also did the “Stanford prison experiment.” Nowadays those might not be allowed; there were also the Tuskegee experiments.)

    This same sort of confusion arises when people say a neural net is a parallel computer, unlike serial computers, so it’s more “brainlike” and powerful. I think “neurophilosophers” like the Churchlands (not Ward, nor Winston, but Pat and someone else) and Dennett say this sort of stuff. It’s the same computing power (or maybe the same Turing degree, per S. C. Kleene). Same, I think, in general with quantum computers: just quantitative differences, not qualitative, like walking versus flying. (Though in a sense quantitative can turn into qualitative if you see the world a different way by flying rather than walking. As T. H. Hardy said, “a hat can become a teacup.” Lewontin and Gould had the notion written up in a paper called “The Spandrels of Pangloss,” co-authored with Voltaire and Dr. Pangloss, pseudonym of Leibniz.)

    My view is that Searle seems to think that one day, in a lab, people will be able not only to isolate DNA, and maybe grow networks of brain cells in a petri dish as mini-quasi-brains, but also to isolate qualia, like red: elementary particles of consciousness, Spinoza’s and Leibniz’s modes and monads. (What Fröhlich of “Bose-Einstein condensation in the brain” (Hameroff, Penrose), and Umezawa and J. Cowan with their gauge bosons, might call collective oscillations or coherent modes: quasiparticles.)
    Maybe one could package up the various qualia with some megavitamins and sell it as a smart and empathetic drug at CVS (maybe the one in Baltimore that got burned down in the riots; a friend of mine teaches at the local elementary school there, and her students didn’t like the riots).
    Ray Kurzweil is into smart drugs and designed something useful: some sort of device so blind people could use a computer. Kurt Weill was a musical composer. André Weil was into math, maybe the Bourbaki group. Math, music, drugs.

    I think the software/hardware metaphor is perfectly good as a first approximation. People have genetic hardware, such as a brain or general intelligence. One can add modules onto it, like a sound and hearing system, Skype, and other (maybe non)sense. Then, given the hardware, one can write software, even apps: put on Mathematica by Wolfram, a GPS, a random number generator, and the “postmodern generator” (so you can write academic papers quickly by writing your name in the author line and pushing one button).
    I do agree that likely the mind is not an information processor strictly in the computer sense, unless computers are also conscious. It seems consciousness may be “non-computable.” We can talk about and write about consciousness, and all that is computable. But what we write down may not be conscious.


  5. @cc I realized that after I typed it, but I learned it’s always best to “shoot first and ask questions later.” I think I heard recently that it was Winston Churchill who had the famous inspirational quote “what men learn from studying herstory (the PC variant) is that s/he don’t learn from studying it.” (I think Winston is also a character in a book called 1984, and a product brand name, and a town in NC.) I wonder whether hills are related to land. I still think basically there may be no fundamental difference between the computable and the non-computable, since they are both wor(l)ds.


  6. Carson,
    I don’t have an answer for this, but I am just a little curious about your thoughts on consciousness. I’ll pose a few questions, but you don’t have to feel obligated to answer them.

    Is consciousness (as a style of information processing) a top down or bottom up process?


  7. I’m a little hesitant to posit too much here, but one thought seems to keep coming up as I read your thoughts above. It seems like your list of human accomplishments (copied below) strictly utilizes a bottom-up processing style.

    Our understanding of the world is good enough for us to build bridges, cars, computers and launch a spacecraft 4 billion kilometers to Pluto, take photos, and send them back. We can predict the weather with great accuracy for up to a week. We can treat infectious diseases and repair the heart. We can breed super chickens and grow copious amounts of corn….

    Computers take inputs, ‘process them’, and generate outputs in a bottom-up manner. Yet, our subjective experience appears to be different. Everything is context. Most sensory inputs are discarded. Most of the visual scene before your eyes is disregarded. Most of the auditory landscape is also ignored. Most of the touch receptors in your skin activated by your clothing or keyboard are suppressed. The incoming information is not being utilized.

    What is this context? It would seem like it’s a top-down understanding of the sensory scene that is not affected by frequent fluctuations of sensory input. Now, I don’t have any idea of how to program a computer to copy this approach. I’ve never heard of a top-down computer program or a top-down algorithm. I’ve never even heard of a top-down mathematical function, so perhaps mathematical functions aren’t the best way to approach this computation.

    If I have to make some analogy, I guess it would have to be something loosely akin to a type of matching pursuit decomposition of a signal based upon a known dictionary of ‘atoms’ of defined waveforms. In the case of human consciousness, the dictionary would not be a pre-determined collection of waveforms, but rather an expanding dictionary collection of our ‘priors’. These priors could be sensory, but they could also be more abstract representations accumulated with experience. Our ‘context’ is a string of these higher-level decompositions of the entire sensory scene, but these can also be generated in the absence of sensory input.
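    To make the analogy concrete, here is a minimal matching-pursuit sketch (purely illustrative; the dictionary of atoms and the signal are made up):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, steps=3):
    """Greedy decomposition: at each step pick the atom that best matches
    the current residual, record its coefficient, and subtract it off."""
    residual = list(signal)
    decomposition = []
    for _ in range(steps):
        coeffs = [dot(residual, a) for a in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(coeffs[i]))
        decomposition.append((best, coeffs[best]))
        residual = [r - coeffs[best] * a for r, a in zip(residual, atoms[best])]
    return decomposition, residual

# a tiny dictionary of unit-norm 'atoms'
atoms = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.5, 0.5, 0.5],
]
signal = [1.0, 1.0, 1.0, 1.0]
decomp, residual = matching_pursuit(signal, atoms)
```

    With a redundant dictionary the greedy decomposition is generally not unique: it depends on the dictionary and the order of selection.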

    The thing with a matching pursuit decomposition of a waveform is that it is usually not unique. The same waveform could have different decompositions. This lack of uniqueness does seem to mimic human nature as no two people interpret everything in exactly the same way. The diversity of the human conscious experience is easily seen in all things politics.

    Now, how you would program this, or even observe this in electrical impulses of the brain is beyond me. But I suspect that the answer to consciousness will depend on development of a top-down computational style that is non-existent at this point in time.


  8. @Tom I don’t really see the distinction between your bottom up and top down approaches. I generally use these terms to mean understanding a complex system from the microscopic mechanisms vs understanding it from the natural emergent degrees of freedom. For example, we can understand a box of molecules in terms of the microscopic dynamics of each molecule using Newton’s laws or quantum mechanics. This would be a bottom up molecular dynamics approach. Or we can consider the temperature, pressure and volume of the gas as in thermodynamics. This is the top down thermodynamics approach. Each has its advantages and disadvantages depending on what questions you ask.
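    These two routes to the same answer can be sketched numerically (a toy with made-up numbers; the per-particle energies are drawn from a crude stand-in distribution that merely has the correct mean):

```python
import random
random.seed(0)

k_B = 1.380649e-23               # Boltzmann constant, J/K
N, V, T = 10_000, 1e-3, 300.0    # particles, volume (m^3), temperature (K)

# Bottom-up: pressure from the kinetic-theory relation P V = (2/3) * KE_total.
# Exponential draws with mean (3/2) k T stand in for the true energy distribution.
ke = [random.expovariate(1.0 / (1.5 * k_B * T)) for _ in range(N)]
p_bottom_up = (2.0 / 3.0) * sum(ke) / V

# Top-down: the thermodynamic equation of state, no molecular detail at all.
p_top_down = N * k_B * T / V

# The two descriptions agree to within sampling noise.
assert abs(p_bottom_up - p_top_down) / p_top_down < 0.05
```

    The microscopic route needs per-particle data; the thermodynamic route needs only three macroscopic numbers.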

    I would suggest that current deep learning algorithms in multilayer convolutional networks are doing exactly what you are calling top down. They are taking rich inputs, processing them through multiple layers, and producing a single integrated response. The different layers are doing exactly the atomic decomposition you proposed. The decompositions are also not unique.


  9. I’d add to the last comment by cc that one can think of ‘top down’ computations as resting on evolutionary ‘computations’ done in the past, yielding a brain and sensory systems. Humans are biocomputers like multilevel neural nets (or related computing architectures, which are basically likely equivalent), except these were programmed by the experience of evolution, development, etc. They weren’t built in a lab and trained using various exposures to artificial lab-produced data.

    I saw an article recently about an MRI experiment done at Dartmouth. They took a dead salmon, put it in an MRI machine, and then exposed it to various pictures of humans with different expressions: happy, sad, etc. It turned out that, just as with live human subjects, dead salmon could also recognize human emotions. This may show there can be a lot of ‘top down’ computation going on. If you cut off a chicken’s head it will still be alive and kicking for quite a while.
    The ‘top’ sort of is a product of what was in the ‘bottom’ in the past. As they say, cream falls to the bottom due to the laws of physics, or regression to the mean.

    (As an aside, a controversial NSF-funded program at the University of Arizona or New Mexico is run by P. C. Davies, who, with G. F. R. Ellis of South Africa, has written a lot about top-down causation. I am agnostic, but it seems possible. For example, the NSF may decide to fund some research project, which will lead to discovering a Higgs boson or Zika virus, which later are seen as causes of the universe, illness, etc. There’s an old Greek myth I can’t remember which discusses “be careful what you ask for”: Pandora’s box. Current wars in Iraq may be an example of this principle.)

    L. E. H. Trainor (who was at the University of Toronto in physics, mentor to C. J. Lumsden, who wrote “Genes, Mind and Culture” with E. O. Wilson) once wrote a paper suggesting that quantum statistics (Fermi-Dirac and Bose-Einstein) might be thought of as leading to a sort of top-down causation, since the whole is not the sum of the parts due to quantum entanglement and the Pauli exclusion principle. (I like the recent papers which show one can construct bosons from fermions and the reverse, e.g. in solitons in some cases, so your statistics is ‘emergent’ and emerges from other statistics. What comes out is not what went in. Or see “The Fallacy of Group Selection” by S. Pinker on the Edge magazine web site. He is roundly refuted in the comments.)

    I came across the salmon article after looking at the research of prominent ‘new atheist’ Sam Harris (not quite the same as the old atheists; not every descendant of a genius is a genius). He is a sort of cult figure or superstar to certain groups of highly evolved, educated people who believe in science and rationalism rather than superstition, the Easter fairy, and Santa Claus (they gave those beliefs up last week after extensive study of videos by Neil deGrasse Tyson, who convinced them that the earth is not flat, nor does the sun revolve around it; though they hadn’t read Einstein’s general relativity, which shows that the earth doesn’t revolve around the sun either, it’s a convention, and using the Mercator projection the earth can be as flat as you want, it just has singularities at the poles, etc.).

    P. C. Davies also has written on viewing the universal field theory (quantum mechanics plus gravity) as possibly being the Schrödinger equation on a curved spacetime, while other theories (Volkovich or Volovich, and Nathan Rosen, the ‘R’ of EPR) hold that the geometry of the universe is Euclidean (flat) but has some matter in it that makes it look curved. (See also Élie Cartan and Einstein-Cartan gravity and related ideas.)

    Sam Harris’s PhD consisted of putting maybe 10 people in an MRI machine. They then discovered that regions of the brain apparently differ between people (maybe like IQ), so atheists had one kind of MRI pattern and religious people had a different one. From this Harris concluded that maybe MRIs can be done to distinguish criminals from non-criminals, etc. (He may also think, as a ‘new atheist’, that a religious brain is abnormal, as people like H. Hoppe of the University of Nevada think homosexuals are.) One likely could even determine which dead salmon are good Christians and which are members of ISIS. One could sell a lot of MRI machines too; maybe they’d be mandatory equipment in every place requiring screening (entrance to the Supreme Court, airports, schools, jobs, etc.), and they could be put right next to the vending machines for fruit, soda, and guns.

    For this reason, maybe new regulations should be devised to ban experiments on dead salmon, because it’s cruel. (I’ve been in an MRI machine and almost flipped out due to extreme claustrophobia. I was also denied participation in a free NIH clinical study for this reason; I’d have been in there for like two months, and in an MRI machine many times a week. I was actually glad I was banned, since I basically didn’t feel I needed a program but rather an appropriate job and some sort of situation in which I can avoid dealing with toxic people, environments, and situations. These clinical programs don’t do that. I’ve known people who have been in them multiple times; they cost over $1000/day, but the minute you leave ‘all fixed up’ you go right back into a toxic situation. I know someone who has been in prison, and he told me that when inmates are released, the guards say ‘see you next week’. I was stupid enough to hang out at Occupy DC and caught pneumonia; a family member noticed I was going incoherent and drove me to a hospital. I should have been out in three days, they said, but then I got sepsis and was there eight weeks at $3500/day, most of which they excused since I was uninsured. This is one reason why I also avoid, besides hospitals, political groups, which are also supposedly here to save you. In political groups a lot of the leaders say ‘step it up’, be ready to fight for the cause, but then they say ‘I have to go now and complete my PhD dissertation and apply for a grant, so you’ll have to do that part’.)

    ‘Stream of consciousness’: Rock Creek was almost flooding yesterday (can be dangerous), and West Virginia, e.g. Richwood, got hit badly. This is an example of top-down causation: rain. Though it can be viewed as a computation too; just code the differential equations for climate into a computer program, and end up like E. Lorenz, with chaos.
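    Lorenz’s point is easy to demonstrate. The sketch below (an illustration using the standard textbook parameters and a crude Euler integrator) evolves two nearly identical initial conditions under the Lorenz equations and watches them diverge: deterministic rules, unpredictable outcome.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# two initial conditions differing by one part in 10^8
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-8)
max_sep = 0.0
for _ in range(3000):            # ~30 time units of integration
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, max(abs(u - v) for u, v in zip(a, b)))
# max_sep started at 1e-8 and grows to the size of the attractor
```

    Sensitive dependence on initial conditions means that even with the exact rules in hand, long-range prediction is hopeless.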


  10. P.S. Regarding the last statement of the OP (original post), that information theory is an important tool, etc.: you may want to look at a very old article in the NY Review of Books (actually a letter to the editor), an exchange between Colin McGinn and H. Allen Orr (it’s from July 14, 2016, a couple of weeks in the future). They both seem to agree that information theory has had no real use in biology or the cognitive sciences. (I will say that in my experience there was a split between field and theoretical biologists: field biologists had no use for mathematical biology in general, population genetics, etc. I think actually much of this is based on aesthetic taste, as well as loyalty and envy. Some people don’t like math, others don’t like being outside watching animals, and there is a kind of hierarchy: a lot of people have heard of Einstein and Feynman, for example, but few hear of all the many people who were constructing CERN, though many may not care so long as they have a reasonable job and life.)


  11. I must say that I didn’t like my example almost as soon as I typed it. But, to pursue your ideas referencing deep learning networks, two questions come to mind.
    I understand that the current state of deep learning relies on many hidden layers. In that regard, can I assume that more hidden layers equates to more potential learning?

    Also, how would the deep learning algorithm categorize the input of this dual subject image?


  12. @tom I don’t think anyone knows, including Geoff Hinton, why more layers leads to more learning. It certainly adds more parameters. Your image is an example of an ambiguous illusion similar to what I study in my lab. I don’t know what a deep learning network would do, but I would presume that it would converge to one of the solutions (young or old), which would change depending on the contingent training history of that network. The networks that assign a confidence would perhaps assign low confidence or pick both with equal confidence. This type of illusion is interesting because, unlike say a Necker cube, where you will alternate between perceptions, in this case people usually only see one of the solutions at first. I think when I was a child I saw the young lady first, and only after viewing for a long time did I finally see the old lady. However, after that I always see both. So my brain first only converged on the young lady solution, but after some presumed plasticity, my brain network now sees both simultaneously. I’m not sure deep learning is at that stage yet.


  13. Carson,
    I would agree that the networks of today could likely be trained to identify either the young lady or the old lady, but not both. Let’s assume that the networks of tomorrow might be able to simultaneously identify both the old lady and the young lady, and maintain both representations in their ‘output’.

    Should that day come, would it alter your current views on ‘free will’, in that the network could then identify multiple representations of ambiguous stimuli, and would have the option to choose which interpretation it would retain and which it would reject?

