If we believe in materialism then we must accept that the function of our brain is the collective action of its biological parts. This doesn’t imply reductionism (i.e., that just knowing the parts is enough), but it does imply that nothing beyond the laws of physics, chemistry and biology is required for the operation of the brain. Given that the brain is just some really big dynamical system, why then, from a computational and evolutionary point of view, is there consciousness? I was rightly criticized in a previous post for not defining consciousness before talking about it. I will argue below that the definition of consciousness is intimately tied to its purpose, but for now it suffices to work with the definition that consciousness is the sense of self-awareness that I personally have. I’m pretty sure you have it too, but of course I can’t prove it. For what I will discuss, it won’t matter whether consciousness is an illusion or not. I will focus on why, in a purely materialistic world, say a computer simulation, a being in that world composed entirely of interacting components (e.g., bits) would have a sense of self-awareness and observe the world around it.
At this purely mechanistic level, there is no free will. We are all Skinneresque creatures of stimulus and response. Thus, each of us could be represented by a function or table that maps the current state of the brain, together with its sensory inputs, into a new state and a set of responses. Now, this function is going to be really, really big: the dimension of the brain state is astronomically large (the state of all neuron and synaptic variables), the sensory input space is even bigger (all possible things we could experience from the outside world), and the space of possible responses (everything we can say or do) is just as unlimited. So, how can you possibly program or even store such a massive lookup table? How can you make the brain program run in real time? This is a computational tractability question.
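To make the tractability point concrete, here is a toy sketch of the "brain as a function" picture. Everything in it is illustrative: the binary state and input variables, the tiny sizes, and the arbitrary update rule are my assumptions, not anything from neuroscience. The point is only that an explicit lookup table blows up exponentially, while a compact rule computed on demand does not.

```python
# Toy version of the brain-as-a-function picture: a transition function
# maps (state, inputs) -> (new state, response). All names and rules
# here are made up for illustration.

n, m = 4, 3  # hypothetical counts of binary state variables and inputs

def table_size(n, m):
    """Number of entries an explicit lookup table would need:
    one row per (state, input) combination."""
    return (2 ** n) * (2 ** m)

def brain_step(state, inputs):
    """A compact rule evaluated on demand instead of stored in a table.
    Arbitrary choice: flip every state bit if the inputs have odd parity."""
    flip = sum(inputs) % 2
    new_state = tuple(s ^ flip for s in state)
    response = sum(new_state) % 2  # a one-bit "action"
    return new_state, response

print(table_size(4, 3))    # 128 rows even for this toy system
print(table_size(40, 30))  # 2**70 rows: already hopeless to store
```

Even at 40 state bits and 30 input bits the table has about 10^21 rows, while `brain_step` stays a few lines; that gap is the whole tractability problem in miniature.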
This is where I see how consciousness could be useful. Consciousness may be an algorithmic trick to speed up the brain program. For example, in wiring up the brain function or table (where I’ve purposely mixed metaphors), the various autonomic, sensory, motor and executive functions must be connected to each other (if you want to be concrete, let’s say that there must be logical connectives between the states of these different components). You could program this in multiple ways. You could design the system so that you first account for pairwise interactions between the components (i.e., if eyes see X and ears hear Y then do Z), then consider three-component interactions (i.e., if eyes see R, ears hear S, nose smells T, then do U), then four, then five, and so forth. You can imagine that this brute-force approach would be a very inefficient way to code things.
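A quick way to see just how inefficient: count the interaction rules the brute-force wiring would need, one per subset of components. The component count of 50 is a made-up number purely for illustration.

```python
# Counting brute-force wiring rules: one rule per subset of components
# (pairwise, then triples, then quadruples, ...). The component count
# is a hypothetical number chosen only to show the growth.

from math import comb

n_components = 50  # illustrative count of sensory/motor/executive components

for k in range(2, 6):
    print(k, comb(n_components, k))  # k-component interaction rules

# Summed over all subset sizes >= 2, the rule count grows like 2^n:
total = 2 ** n_components - n_components - 1
print(total)
```

Already at 50 components the total is over 10^15 rules, which is why wiring every combination in separately cannot scale.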
An alternative is to have all the components connect to a common bulletin board. The board gets updated each time a component posts to it. If other components need to act, they simply check what’s on the board and act on that information. The nice thing about this approach is that if new components get added, they just need to tap into the bulletin board. In the brute-force scheme, interactions with all the other components would have to be wired in separately. The tricky part is how the board gets updated. This is what I think consciousness is. It is the super duper compressed summary of the current state of the brain and its inputs. This is the running dialog in my head. That is why we can watch a movie and give a review in 140 characters – something that computers are nowhere close to being able to do right now.
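The bulletin-board idea can be sketched in a few lines. This is a minimal toy (in software-architecture terms it resembles a blackboard system); the component names and the decision rule are invented for illustration.

```python
# Minimal bulletin-board sketch: components never talk to each other
# directly. Each one posts to, and reads from, a single shared board.
# Component names and the decision rule are made up for illustration.

class BulletinBoard:
    def __init__(self):
        self.summary = {}  # the compressed, shared state

    def post(self, source, message):
        self.summary[source] = message  # only the latest posting survives

    def read(self):
        return dict(self.summary)

board = BulletinBoard()

def eyes(scene):
    board.post("eyes", scene)

def ears(sound):
    board.post("ears", sound)

def executive():
    """Acts only on the board's summary, never on raw components."""
    s = board.read()
    if s.get("eyes") == "predator" and s.get("ears") == "growl":
        return "run"
    return "graze"

eyes("predator")
ears("growl")
print(executive())  # acts on the posted summary

# Adding a new component needs no new pairwise wiring at all:
def nose(smell):
    board.post("nose", smell)
```

The design payoff is the wiring count: each new component adds one connection to the board, so n components need n links rather than a rule for every combination.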
I think evolution tapped into this trick for nervous systems pretty early on, so perhaps all life forms have consciousness in the sense of an ongoing compressed summary of their current state. Differences between animals would then be quantitative – bigger and faster. It is possible that there are bifurcations or phase transitions in the operation as a function of size and speed, so that quantitative increases lead to abrupt qualitative differences in performance. Thus an insect’s consciousness could be qualitatively different from ours. I also think that artificial minds that can emulate human-like tasks in real time may require this design. It might be that, for purely computational tractability reasons, artificial intelligence and artificial consciousness cannot be separated.
Acknowledgments: Many of these ideas were inspired by an article in IEEE Spectrum by Christof Koch and Giulio Tononi.