The blinking-dot paradox of consciousness

Suppose you could measure the activity of every neuron in the brain of an awake, behaving person, including all sensory and motor neurons. You could then represent the firing pattern of these neurons on a screen with a hundred billion pixels (or as many as needed). Each pixel would be identified with a neuron, and the activity of the brain would be represented by blinking dots of light. The question is then whether the array of blinking dots is conscious (provided the original person was conscious). If you believe that everything about consciousness is represented by neuronal spikes, then you are forced to answer yes. But then you must also acknowledge that a television screen simply outputting entries from a table is conscious.
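To make the thought experiment concrete, here is a minimal sketch in Python. Everything in it is a stand-in: the neuron count, the frame count, and the spike table itself (random data rather than any real recording). The point is only that "playing" the brain's activity on the screen is nothing but indexing successive rows of a table.

```python
# A toy version of the blinking-dot screen. All numbers and the spike
# table are stand-ins; a real version would need ~10^11 pixels and an
# actual whole-brain recording.
import numpy as np

n_neurons = 10_000   # one pixel per neuron (stand-in for ~10^11)
n_frames = 1_000     # recorded time steps

# Pretend this table came from measuring every neuron in an awake brain;
# here it is just random data with ~2% chance of a spike per step.
rng = np.random.default_rng(0)
spike_table = rng.random((n_frames, n_neurons)) < 0.02

def frame(t):
    """The screen at time t: a square array of on/off dots."""
    side = int(np.sqrt(n_neurons))
    return spike_table[t, : side * side].reshape(side, side)

# "Playing" the brain's activity is nothing but table lookup,
# which is exactly what makes the question uncomfortable.
for t in range(n_frames):
    image = frame(t)  # the blinking dots at time t
```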

There are several layers to this possible paradox. The first is whether all the information required to fully decode the brain and emulate consciousness is in the spiking patterns of its neurons. It could be that you also need the information contained in all the other physical processes in the brain, such as the movement of ions and water molecules, conformational changes of ion channels, receptor trafficking, blood flow, glial cells, and so forth. The question is then what resolution is required. If there is some short-distance cut-off that lets you discretize these events, then you could always construct a bigger screen with trillions of trillions of pixels and be faced with the same question. But suppose there is no cut-off, so that you need an uncountable amount of information. Then consciousness would not be a computable phenomenon, and there would be no hope of ever understanding it. Moreover, at a small enough scale (the Planck length) you would be forced to include quantum gravity effects as well, in which case Roger Penrose may have been on to something after all.

The second issue is whether there is a difference between a neural computation and reading from a table. Presumably, the spiking events in the brain arise from the extremely complex dynamics of synaptically coupled neurons in the presence of environmental inputs. Is there something intrinsically different between a numerical simulation of a brain model and reading the entries of a list? Would one exhibit consciousness while the other would not? To make matters even more confusing, suppose you have a computer running a simulation of a brain. The firing of the neurons is now encoded by the states of various electronic components like transistors. Does this mean that the circuits in the computer become conscious when the simulation is running? What if the computer were simultaneously running other programs, like a web browser, or even another brain simulation? In a computer, the execution of a program is not tied to specific electronic components. Transistors simply change states as instructions arrive, so when a computer is running multiple programs, the transistors simulating the brain are not conserved. How then do they stay coherent enough to form a conscious perception? In normal computer operation, the results are fed to an output, which is then interpreted by us. In a simulation of the brain, there is no output; there is just the simulation. Questions like these make me question my once unwavering faith in the monistic (i.e., not dualistic) theory of the brain.
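To sharpen the contrast, here is a hedged sketch, again with a toy model rather than anything brain-scale: a leaky integrate-and-fire network (with arbitrary, made-up parameters and coupling) whose spikes emerge from coupled dynamics, next to a replay function that merely reads the recorded spikes back out. By construction the two produce identical frames, which is precisely what makes it hard to say that one is conscious and the other is not.

```python
# Toy contrast between computing spikes and looking them up. The model,
# parameters, and coupling are illustrative choices, not a brain model
# anyone has proposed.
import numpy as np

n, steps = 200, 500                    # neurons, time steps
dt, tau = 0.001, 0.02                  # step size and membrane time constant (s)
v_th, v_reset, drive = 1.0, 0.0, 1.2   # threshold, reset, constant input

rng = np.random.default_rng(1)
weights = 0.1 * rng.standard_normal((n, n))   # random synaptic coupling

def simulate():
    """Neural computation: spikes emerge from coupled membrane dynamics."""
    v = np.zeros(n)
    last = np.zeros(n, dtype=bool)
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        v += dt / tau * (drive - v) + weights @ last  # leaky integration + input
        last = v >= v_th                              # threshold crossing
        v[last] = v_reset                             # reset after a spike
        spikes[t] = last
    return spikes

recorded = simulate()   # the "measurement" of the toy brain

def replay(t):
    """Reading from a table: no dynamics, just a lookup."""
    return recorded[t]

# Frame for frame, the replay is indistinguishable from the simulation.
assert all(np.array_equal(replay(t), recorded[t]) for t in range(steps))
```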

4 thoughts on “The blinking-dot paradox of consciousness”

  1. @BKGMU It’s a neuroscience version of the Chinese Room. Searle argued that the person in the room didn’t understand Chinese, but the combination of the person, the dictionary, and the slips of paper could collectively “understand” Chinese. Here the question is whether the brain and a simulation of the brain are the same and, if so, whether all simulations are equivalent.

  2. I’ve read you since the path-integral stuff (this was very appreciated: http://arxiv.org/abs/1009.5966), and I’ve never commented, but temptation!

    Consider your post as an argument (if far from proof) by contradiction:

    If consciousness is a thing (per this post), then big brains are somehow different from everything else in the physical world.

    Or

    If (consciousness is a thing), then ~(physics).

    By contradiction, consciousness probably isn’t a thing.

    Shamefully facile argument maybe, but that’s the bet I’m making. (I left comp neuro after my PhD to be an epidemiologist, but here I am still reading…).

  3. i thought schrodinger had some views on monism and dualism (mind and matter, if i recall). building on this, if computers are conscious it would seem they, like information, should be free. (e.g. one could set up legal departments in law schools, next to the animal rights ones, which aim to get travel money to the hague’s icc; not the same icc as the one in md, which is helping to turn paint branch into another sewer). the ‘slaving principle’ (e.g. Haken) could be reformed.
    some other ideas exist, such as http://www.arxiv.org/abs/0810.4339 or http://www.arxiv.org/abs/1405.0126 (the second article has a professor of business as a co-author, so maybe business is the underlying monist principle; someone from caltech, like hopfield, is also on the title page).

    i note this entry on sch is tagged as ‘philosophy’ as well; tomorrow there is some discussion on whether brain science has made philosophy obsolete
