If we believe in materialism, then we must accept that the function of our brain is the collective action of its biological parts. This doesn’t imply reductionism (i.e. that knowing the parts alone is enough), but it does imply that nothing beyond the laws of physics, chemistry and biology is required for the operation of the brain. Given that the brain is just some really big dynamical system, why then, from a computational and evolutionary point of view, is there consciousness? I was correctly criticized in a previous post for not defining consciousness before talking about it. I will argue below that the definition of consciousness is intimately tied to its purpose, but for now it suffices to work with the definition that consciousness is the sense of self-awareness that I personally have. I’m pretty sure you have it too, but of course I can’t prove it. For what I will discuss, it won’t matter whether consciousness is an illusion or not. I will focus on why, in a purely materialistic world, say a computer simulation, a being in that world composed entirely of interacting components (e.g. bits) would have a sense of self-awareness and spectate the world around it.
At this purely mechanistic level, there is no free will. We are all Skinneresque creatures of stimulus and response. Thus, each of us could be represented by a function or table that maps the current state of the brain, together with its sensory inputs, into a new state and a set of responses. Now, this function is going to be really, really big: the dimension of the brain state is astronomically large (the state of all neuron and synaptic variables), the sensory input space is even bigger (all possible things we could experience from the outside world), and the space of possible responses (everything we can say or do) is just as unlimited. So, how can you possibly program or even store such a massive lookup table? How can you make the brain program run in real time? This is a computational tractability question.
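To make the lookup-table picture concrete, here is a minimal sketch, not a brain model: the states, inputs and responses below are invented toy values, chosen only to show the shape of such a table and why it cannot scale.

```python
# Illustrative sketch of the "brain function" as a lookup table mapping
# (current state, sensory input) -> (new state, response).
# All names and values here are hypothetical toys.
from itertools import product

STATES = ["calm", "alert"]           # toy brain states
INPUTS = ["food", "threat", "none"]  # toy sensory inputs

TABLE = {
    ("calm",  "food"):   ("calm",  "eat"),
    ("calm",  "threat"): ("alert", "flee"),
    ("calm",  "none"):   ("calm",  "rest"),
    ("alert", "food"):   ("calm",  "eat"),
    ("alert", "threat"): ("alert", "flee"),
    ("alert", "none"):   ("calm",  "rest"),
}

def brain_step(state, stimulus):
    """Pure stimulus-response: a fixed mapping with no 'free will'."""
    return TABLE[(state, stimulus)]

# The table needs one entry per (state, input) pair, so with n binary
# neurons and m binary sensors it would need 2**(n + m) entries --
# hopeless to store explicitly for anything realistic.
n_entries = len(list(product(STATES, INPUTS)))
print(n_entries)  # 6 entries even for this toy; 2**(n+m) in general
```

Even this toy needs an entry for every state-input pair, which is the combinatorial problem the rest of the post is about.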
This is where I see how consciousness could be useful. Consciousness may be an algorithmic trick to speed up the brain program. For example, in wiring up the brain function or table (where I’ve purposely mixed metaphors), the various autonomic, sensory, motor and executive functions must be connected to each other (if you want to be concrete, let’s say that there must be logical connectives between the states of these different components). You could program this in multiple ways. You could design the system so that you first account for pairwise interactions between the components (i.e. if eyes see X and ears hear Y, then do Z), then consider three-component interactions (i.e. if eyes see R, ears hear S, and nose smells T, then do U), then four, then five, and so forth. You can imagine that this brute-force approach would be a very inefficient way to code things.
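A quick way to see how inefficient the brute-force wiring is: count the rules it needs. This sketch just tallies one explicit rule per subset of components (pairwise, three-way, and so on); the component count of 20 is a hypothetical number picked for illustration.

```python
# Counting explicit interaction rules in the "brute force" wiring:
# one rule per subset of components of each size (pairs, triples, ...).
from math import comb

def rules_up_to(k, order):
    """Number of rules covering all interactions of 2..order components,
    among k components total."""
    return sum(comb(k, r) for r in range(2, order + 1))

k = 20  # hypothetical number of components (eyes, ears, nose, ...)
print(rules_up_to(k, 2))  # pairwise interactions only: 190 rules
print(rules_up_to(k, 5))  # up through 5-way interactions: 21679 rules
print(2**k - k - 1)       # all interaction orders: over a million
```

The rule count roughly doubles with every added component, which is why wiring each interaction separately cannot keep up.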
An alternative is to have all the components connect to a common bulletin board. The board gets updated each time a component posts to it. If other components need to act, they simply check what’s on the board and act on that information. The nice thing about this approach is that if new components get added, they just need to tap into the bulletin board; in the brute-force scheme, interactions with all the other components would have to be wired in separately. The tricky part is how the board gets updated. This is what I think consciousness is. It is the super duper compressed summary of the current state of the brain and its inputs. This is the running dialog in my head. That is why we can watch a movie and give a review in 140 characters – something that computers are nowhere close to being able to do right now.
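The bulletin-board design above can be sketched as a tiny blackboard-style program. The components and messages are invented for illustration; the point is only that each component talks to the board, never directly to other components.

```python
# Minimal sketch of the "bulletin board" design: components post short
# summaries to a shared board, and other components act on what they read.
# Component names and messages are hypothetical.

class BulletinBoard:
    """Shared board holding a compressed summary of the current state."""
    def __init__(self):
        self.summary = {}

    def post(self, component, message):
        # Each component keeps only its latest message: the board stays small.
        self.summary[component] = message

    def read(self):
        return dict(self.summary)

board = BulletinBoard()
board.post("eyes", "movement on the left")
board.post("ears", "rustling sound")

def motor_system(board):
    # The motor system never queries the eyes or ears directly;
    # it only reads the board's compressed summary.
    state = board.read()
    if "movement" in state.get("eyes", "") and "sound" in state.get("ears", ""):
        return "orient left"
    return "keep still"

print(motor_system(board))  # -> orient left
```

Adding a new component (say, a nose) means one `post` call to the board rather than new wiring to every existing component, which is the scaling advantage the post describes.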
I think evolution tapped into this trick for nervous systems pretty early on, so perhaps all life forms have consciousness in the sense of an ongoing compressed summary of their current state. Differences between animals would then be quantitative – bigger and faster. It is possible that there are bifurcations or phase transitions in the operation as a function of size and speed, so that quantitative increases lead to abrupt qualitative differences in performance. So an insect’s consciousness is qualitatively different from ours. I also think that artificial minds that can emulate human-like tasks in real time may require this design. It might be that, as a matter of pure computational tractability, artificial intelligence and artificial consciousness cannot be separated.
Acknowledgments: Many of these ideas were inspired by an article in IEEE Spectrum by Christof Koch and Giulio Tononi.
8 thoughts on “Why consciousness?”
I may have missed your point, but how do you link the two definitions of consciousness you gave:
– the sense of self awareness that I personally have
– the super duper compressed summary of the current state of the brain and inputs
Well, obviously this is total speculation, but I think that this sense of self-awareness is a consequence of the existence of a compressed summary. How this works exactly, I don’t know.
Let me expand some more on this issue. I was trying to suggest that the describable part of “self-awareness” is the compressed summary of our current state. For example, the running dialog in my head is a manifestation of the summary that is a low fidelity recapitulation of auditory and vocal processes. The internal “visual display” I have is in a form that is convenient for my motor system to use in setting gaze direction and planning movement. My emotional states, like joy, agitation, fear, etc., provide a summary for future actions. So, all aspects of my self-awareness seem to be framed in terms that are convenient for various neural systems to use. This probably doesn’t explain what you really want to know, but I think it gives some sense of why consciousness would be useful from a computational standpoint.
“At this purely mechanistic level, there is no free will.”
IMHO the problem for materialism isn’t free will, which I can believe is an “illusion,” but rather qualia, which I can’t.
So are you arguing against materialism?
I like what you have to say, but I’m not sure it really gets at the “Why” of consciousness. It seems like you are saying consciousness is necessary to solve a computational problem. It seems that puts the computational problem ahead of the evolutionary problem of why bother to evolve this computationally complex brain in the first place. The answer, IMHO, is that consciousness is there in order to allow us more behavioral flexibility. What I mean by this is that non-conscious (or less conscious) creatures might function more like if X and Y then do Z. Organisms with consciousness are able to say if X and Y, then do Z or A or B – this is what our computationally and biologically expensive brains “buy us” in evolutionary terms.
The question of why anything evolves is difficult if not impossible to answer. If consciousness gives a computational advantage that is useful for increasing fitness, then it will fix in a population once some chance event gets it started. I think it would be an advantage for creatures that predate and actively avoid predation.
I think you are correct that consciousness does give more flexibility but I think that even before you get to flexibility, it allows you to address a combinatorial explosion of possibilities. So an animal couldn’t even do a logical task with more than a few states without it.