Mary’s room is a philosophical thought experiment used to challenge physical or materialist explanations of consciousness and mind. The argument comes in various forms, but it essentially boils down to this: Mary is a scientist who cannot see colours but sets out to learn everything physical there is to know about colour. So she studies light, quantum mechanics, molecular biology, photoreceptors, neuroscience, psychology, art, and so forth. Then suddenly her colour vision is restored. The question is whether or not she has learned something new. If you answer yes, then there cannot possibly be a purely physical or material explanation of consciousness, since she had already learned everything she could about the physical properties of colour vision. The thought experiment is meant to highlight that there seems to be something special and nonphysical about qualia.
I’m certain that nearly everyone has at one point wondered whether what they call red looks the same to someone else. How many times have you had a conversation where someone says, “How do I know that my red is not your blue?” The philosopher David Chalmers uses Mary’s room as one of his arguments against a purely physical explanation of consciousness. I believe he is the person who coined the term “the hard problem of consciousness” for the issue of explaining subjective awareness. His arguments are quite compelling, but I haven’t quite jumped off the materialist bandwagon. My response to Mary’s room is that if Mary truly discovered everything there is to know about consciousness, then she would not learn anything new when her colour vision is restored. However, I might differ from other materialists in that I believe she would have to simulate colour vision in her brain to qualify as knowing everything physical, because there is a profound difference between knowing what an algorithm is and actually executing it. The reason, which drives much of my recent philosophical inquiry, is the Halting Problem.
The Halting Problem asks whether there can exist an algorithm or computer program that can tell, for every computation, whether it will halt; Turing proved that no such algorithm exists. The inclusion of “every” is important, because there are ways to tell whether some programs will halt. Obviously, if we wrote the program
Print “hello world”
it would print “hello world” then stop. And the program
while (3>2) continue
would run forever. However, with enough “conditional” and “goto” statements it becomes hard to figure out whether a program will halt, and in fact you provably cannot come up with a surefire method that decides, for all programs, whether they halt.
The more general implication of the Halting Problem is that you cannot, in general, predict what a computer program will do. Hence, there is a qualitative difference between knowing an algorithm and executing it. There is a difference between a recipe for a cake and the actual cake. You might be able to prove certain properties of cakes using physics and chemistry, but if you wanted to fully understand and appreciate a cake, you would need to know everything about the brain and cake perception, and all the contingencies of the kitchen, such as humidity, temperature, altitude, the number of mold spores in the air, water quality, and so forth. In other words, a recipe alone may be insufficient to understand a cake.
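A concrete illustration of knowing a recipe without knowing what it produces (the Collatz rule is my example, not the post’s): the algorithm below is fully specified in three lines, yet whether it halts for every starting value is a famous open problem, and even for a single input the behaviour is hard to guess without actually running it.

```python
# The Collatz rule: halve even numbers, map odd n to 3n + 1, stop at 1.
# The algorithm is completely known, yet nobody has proved it halts for
# all n -- knowing the recipe is not the same as knowing what it does.

def collatz_steps(n):
    """Count applications of the Collatz rule until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- hard to anticipate without executing
```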
Hence, even if there existed an algorithm to understand consciousness and colour, Mary would still have to run the “experience colour” program on the specified hardware, i.e. her brain. So suppose she couldn’t experience colour because she was missing cones in her retina. Then, to fully understand colour, she could implant electrodes in her visual cortex to trigger the sequence of stimuli that causes an experience of colour. Now some (most?) would argue that this is cheating, and that I have actually capitulated to the argument that you couldn’t learn everything about colour by doing physical experiments and theorizing, so consciousness cannot be entirely physical. However, my response is that physical knowledge entails running the program on the exact hardware it is to be implemented on, because of the Halting Problem.
8 thoughts on “Mary’s room”
[…] Mary’s room (sciencehouse.wordpress.com) […]
It looks like the “The Mind vs. Brain Debate” trackback page has been removed by its owner, the Christian site Earthpages.org; maybe they found that their Mind vs. Brain arguments were somewhat wanting? :-)
My own view of the Mary’s room thought experiment is quite heretical; I hold that:
– Mary did indeed learn something new when her colour vision was restored.
– Consciousness is a purely physical/material phenomenon.
And there is NO conundrum!
The point is EVERYTHING Mary knows is inside her consciousness (kind of a big database), which is a physical/material thing and the only thing she can be aware of: the whole universe, as much as she can know it, IS inside her consciousness.
Whenever she recovers colour vision, “something” is added to this big database that could not have been there before, no matter how hard she studied.
This is another instance of the model/reality distinction.
Not only will we never know the “full reality”, but each one of us (a distinct “big database”) has his/her own purported reality, which is only the “best model” built so far from previous experience.
As for whether red looks the same to someone else, I am pretty sure it doesn’t, because I myself have very slightly different color perception in my right and left eyes; through the right eye colors are a little more reddish.
Had I both eyes like the right one, any given red patch would be more reddish, whatever “my” reddishness might mean to you.
I think we agree. My argument was that Mary had to experience red in her brain in order for it to qualify as knowing everything.
Your argument about Mary strikes me as having your cake and eating it too. No one doubts that IF Mary’s brain were in a state that produces color sensation she would know about color. Dualist or materialist, to be in a brain state of color sensation, you’d experience the color. What’s weird about consciousness is that there is a whole category of facts (phenomenological facts) that require physical systems to actually be “in” certain states. How is this materialism? The implication here is that certain physical systems undergoing certain “algorithms” have intrinsic qualities that cannot be understood “from the outside.” If you want to call this materialism, fine, but it takes the wind out of materialism’s sails. A key tenet of materialism, it seems to me, is that all properties of a system should be explicable extrinsically, what Dennett calls “third-person absolutism.” The whole point of materialism is to explain away the subjective perspective with objective facts. But if there are such things as subjective facts, like facts about the experience of color, and if these facts are not deducible from objective analysis, then materialism loses its footing. Your point of view, to me, sounds much more like Russellian monism, or epiphenomenalism. If some algorithms (and why just some and not all?) have a subjective character when instantiated, and if there is no way to understand these facts without actually processing these states in your brain, then there is more to consciousness than mere physical facts. How this relates exactly to the halting problem still perplexes me a bit. While I think that computational analysis can help us understand the mind, the halting problem doesn’t make the hard problem disappear. Good post, but please do explain further how the halting problem eliminates the conundrum of subjective experience.
Not at all.
It is Cartesian dualism that is hopelessly trying to “flatten” the picture, with the silly idea of the pineal gland needed to connect the material and mental “substances” (information as a substance!!!).
When you say intrinsic qualities that cannot be understood “from the outside”, what you fail to understand is that the “intrinsic qualities” do not exist at all “on the outside”; they are only correlated with observables from the outside, but that does not mean they are of a “spiritual” nature: they are not any kind of substance but a process.
Your arguments are well taken. My argument is based on the premise that the best a third party could do is to acquire the algorithm of colour sensation, which may or may not be separable from the machine that is running it, i.e. Mary’s brain. The outcome of that algorithm is not known until you run the program on that specific machine. The tie-in to the halting problem is that it represents to me a clear example of having complete knowledge of both program and machine and yet not being able to know in advance what the answer will be. That seems to indicate that the algorithm is intrinsically different from the simulation of the algorithm. Now, you may wish to argue that this is an abandonment of materialism, and perhaps according to Dennett it is. I use the term materialism loosely to represent a contrast to dualism. However, neutral monist is perhaps a more apt description, and I’m perfectly happy being given that label.
I believe Kevembuangga’s point is that “running the program” is the material object, although he would be best placed to defend his point.
Kevembuangga – I’m not defending Cartesian dualism. Far from it. Don’t know where you got that. No one defends his view anymore. The kind of “dualism” materialists disparage is mostly a philosophical straw man these days. No modern philosopher or neuroscientist is a Cartesian. Though see this: http://consc.net/notes/dualism.html
Thanks Carson. I think monism is where I am at too, though with an idealist bent. The view is eminently reasonable, but its panpsychist implications scare a lot of people away. (Personally, it’s always made sense to me that our brains, being complex matter, or instantiating complex algorithms in your paradigm, have complex consciousness, while a thermometer, a much simpler form of matter, has really minimal consciousness.)
Again, I still don’t think the halting problem tells us much about ontology, though I can imagine some interesting implications for the free will debate. (Especially if you hold the metaphysic that computation is the foundation of matter, aka digital physics.) The reason the analogy with the halting problem doesn’t totally hold for the hard problem, I think, is that even though you can’t predict what the computer will do, when the computer does do it you can see why it did so post facto. (If I am wrong about this, please let me know.) For Mary, why red has the qualitative nature it has is still a mystery even after she finally sees red for the first time. So, even though she knows everything about the brain and how it discriminates different wavelengths of light, and even though she now knows what those different colored wavelengths actually look like, there is still the question of why red looks like RED and not some other color. It’s hard to even imagine what kind of answer could satisfy this question.
I’ll start with some premises and then build up to a response. As you infer, I hold that the hard problem of consciousness is contained within Turing computation. That is not to say that our brains are necessarily computable, only that computation is sufficient to attain consciousness in however subjective a way you wish to define it. Given that premise, I used the halting problem only as an example of an undecidable problem. Your point that, given a computation, we should always be able to deconstruct what happened post facto is compelling. However, I think even doing a post mortem analysis is undecidable. In the specific case of the halting problem, you could always figure out why a computation halted, but you wouldn’t be able to say why it doesn’t halt. In any case, my belief is that the hard problem is undecidable, but I obviously have no proof or even an idea of how to prove it.
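That asymmetry between halting and non-halting can be made concrete with a small Python sketch (my own framing, not part of the discussion above): simulating a program for a finite budget of steps can certify “it halts”, but exhausting any budget never certifies “it runs forever”.

```python
# Halting is semi-decidable: run for a budget of steps. A halt within the
# budget is a definitive answer; running out of budget proves nothing.

def observe(step, state, budget):
    """Run a step function until it halts (returns None) or the budget runs out."""
    for _ in range(budget):
        state = step(state)
        if state is None:
            return "halts"
    return "unknown"   # might halt later, might loop forever -- undecided

halting = lambda s: None   # analogue of: Print "hello world"
looping = lambda s: s      # analogue of: while (3>2) continue

print(observe(halting, 0, 1000))  # halts
print(observe(looping, 0, 1000))  # unknown, and it stays unknown at any budget
```

A finite trace is a complete post-facto explanation of why a program halted, but there is no analogous finite certificate for why a program never halts, which is the sense in which the post mortem itself runs into undecidability.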