My brother was just in town, and we had our usual argument about Old Man's War, which he loves and about which I'm less enthusiastic (it was a fun read, but...). Perhaps one issue that keeps me from enjoying it fully is that whenever I think about it I think about an early scene, in which a character's consciousness was transferred from an old body to a new body. This is presented in the book as just one more futuristic miracle, but I can't stop thinking about the deeper questions it raises.
What does it mean to transfer consciousness from one body to another? Our current scientific understanding is that there is no consciousness separate from the underlying physical machinery, so such a transfer could not happen. But you might be able to create the illusion of a consciousness transfer, which I explain below. So we can make sense of Old Man's War if we assume that the doctors are deliberately lying about what is going on, covering up the murder that lies at the heart of the procedure.
Here's what might be going on (yes, I realize this is fiction, but good science fiction almost always has a thought experiment at its heart): It should be possible, at least in principle, to create a new body that has identical machinery to an existing body. This would be a new person who is a twin not only physically but mentally, down to having the same memories (by definition, since the two have the same brains down to the microcircuitry). From the new person's perspective, she finds herself suddenly in a "new" body. (This is much like the old philosophical puzzle: what if the world was created yesterday, all of us with artificial memories?)
So now we've got a consciousness that believes itself to have transferred into a new body from an old body. What happened to the consciousness in the old body? The doctors in Old Man's War claim that it is now a vegetable, with no consciousness inside, because that consciousness has transferred. Since that can't happen, they are lying: either the process of creating the new copy of the old brain destroys the old brain, or the doctors deliberately destroy the old brain to preserve the illusion of the transfer (after all, if transfer is impossible, why go through this procedure? It's very nice for your twin to have a new body, but it's not going to do you any good at all!).
Here's the question: does this matter? If John undergoes this procedure on a Wednesday, then the world on Thursday is much the same as the world on Tuesday: on both days, there is a consciousness calling itself "John" with roughly the same memories. It only gets tricky when you think too much about Wednesday. You might be tempted to say you have John 1 on Tuesday and John 2 on Thursday, who are duplicates but nonetheless not the same because they have different bodies. But, of course, John had a different body when he was 5yo than when he was 75yo, down even to being made up of different atoms. So if we're willing to call 5yo John and 75yo John the same person, why aren't John 1 and John 2?
This confuses the heck out of me, which is why I have difficulty paying attention to the novel itself.
Recent Findings Don't Prove there's a Ghost in the Machine (Sorry Saletan)
When I took intro to psychology (way too long ago), the graduate instructor posed the following question to the class: Does your brain control your mind, or does your mind control your brain? At first I thought this was a trick question -- coming from most neuroscientists or cognitive scientists it would be -- but she meant it seriously.
On Tuesday, William Saletan at Slate posed the same question. Bouncing off recent evidence that some supposedly vegetative patients are in fact still able to think, Saletan writes, "Human minds stripped of every other power can still control one last organ--the brain."
Huh?
Every neuroscientist I've talked to would read this as a tautology: "the brain controls the brain." Given the gazillions of feedback circuits in the brain, that's a given. Reading further, though, Saletan clearly has something else in mind:
We think of the brain as its own master, controlling or fabricating the mind ... If the brain controls the mind this way, then brain scanning seems like mind reading ... It's fun to spin out these neuro-determinist theories and mind-reading fantasies. But the reality of the European scans is much more interesting. They don't show the brain controlling the mind ... The scans show the opposite: the mind operating the brain.
Evidence Mind is Master
As I've already mentioned, the paragraph quoted above is nonsensical under modern scientific theory, and I'll get back to why. But first, what evidence is Saletan looking at?
In the study he's talking about, neuroscientists examined 54 patients who show limited or no awareness and no ability to communicate. Patients' brains were scanned while they were asked to think of motor activities (swinging a tennis racket) or navigation activities (moving around one's home town). Five of the 54 were able to do this. The researchers also tried to ask the patients yes-no questions: if the answer was 'yes', the patient was to think about swinging a tennis racket; if 'no', about moving around one's home town. One patient was able to do this successfully.
Note that the brain scans couldn't see the patient deciding 'yes' or 'no' -- actually, they couldn't see the patient deciding at all. This seems to be why Saletan thinks this is evidence of an invisible will controlling the physical brain: "On the tablet of your brain, you can write whatever answer you want."
The Mistake
The biggest problem with this reasoning is a misunderstanding of the method the scientists used. fMRI detects very, very small signals in the brain. The technology tracks changes in blood oxygenation levels, which correlate with local brain activity (though not perfectly). A very large change is on the order of 1%. For more complicated thoughts, effect sizes of 0.5% or even 0.1% are typical. Meanwhile, blood oxygen levels fluctuate a good deal for reasons of their own. This low signal-to-noise ratio means that you usually need dozens of trials: have the person think the same thoughts over and over again and average across all the trials. In the fMRI lab I worked in, a typical experiment took two hours. Some labs take even longer.
To use fMRI for meaningful communication between a paralyzed person and their doctors, you need to be able to detect the response to an individual question. Even if we knew where to look in the brain for 'yes' or 'no' answers -- and last I heard we didn't, but things change quickly -- it's unlikely we could hear this whispering over the general tumult in the brain. The patients needed to shout at the top of their lungs. It happens that motor imagery produces very nice signals (I know less about navigation, but presumably it does, too, or the researchers wouldn't have used it).
Thus, the focus on mental imagery rather than more direct "mind-reading" was simply an issue of technology.
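The trial-averaging logic is easy to see in a toy simulation. The numbers below (a 0.5% effect against 1% noise, 60 trials) are illustrative choices of mine, not figures from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.5   # % BOLD signal change from the imagery task (illustrative)
noise_sd = 1.0      # % fluctuation in blood oxygenation from noise alone
n_trials = 60       # a typical number of trials to average over

# Each trial measures the true effect buried in noise (SNR well below 1).
trials = true_effect + noise_sd * rng.standard_normal(n_trials)

single_trial = trials[0]            # one trial: could easily have the wrong sign
averaged = trials.mean()            # averaging shrinks the noise by sqrt(n)
sem = noise_sd / np.sqrt(n_trials)  # ~0.13%, now small relative to the effect

print(f"single trial: {single_trial:+.2f}%")
print(f"average of {n_trials} trials: {averaged:+.2f}% (noise now ~{sem:.2f}%)")
```

A single 0.5% effect is invisible against 1% noise, but averaging sixty trials pins it down, which is why a whole scanning session goes into answering what is, in effect, one question.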
Dualism
The more subtle issue is that Saletan takes dualism as a starting point: the mind and brain are separate entities. Thus, it makes sense to ask which controls the other. He seems to understand modern science as saying the brain controls the mind.
This is not the way scientists currently approach the problem -- or, at least, not any I know. The starting assumption is that the mind and brain are two ways of describing the same thing. Asking whether the mind can control the brain makes as much sense as asking whether the Senate controls the senators or senators control the Senate. Talking about the Senate doing something is just another way of talking about some set of senators doing something.
Of course, modern science could be wrong about the mind. Maybe there is a non-material mind separate from the brain. However, the theory that the mind is the brain has been enormously productive. Without it, it is extremely difficult to explain just about anything in neuroscience. Why does brain trauma lead to amnesia, if memories aren't part of the brain? Why can strokes leave people able to see but unaware that they can see?
Descartes' Error
A major problem with talking about the mind and brain is that we clearly conceptualize them differently. One of the most exciting areas of cognitive science in the last couple decades has looked at mind perception. It appears humans are so constructed that we are good at detecting minds. We actually over-detect minds, otherwise puppet shows wouldn't work (we at least half believe the puppets are actually thinking and acting). Part of our concept of mind is that it is non-physical but controls physical bodies. While our concept of mind appears to develop during early childhood, the fact that almost all humans end up with a similar concept suggests that either the concept or the propensity to develop it is innate.
Descartes, who produced probably the most famous defense of dualism, thought the fact that he had the concept of God proved that God exists (his reasoning: how can an imperfect being have the thought of a perfect being, unless the perfect being put that thought there?). Most people would agree, however, that just because you have a concept doesn't mean the thing the concept refers to exists. I, for instance, have the concept of cylons, but I don't expect to run into any.
Thus, even as science becomes better and better at explaining how a physical entity like the brain gives rise to our perceptions and our experience of existing and thinking, the unity of mind and brain won't necessarily make any more intuitive sense. This is similar to the problem with quantum physics: we have plenty of scientific evidence that something can be both a wave and a particle simultaneously, and many scientists work with these theories with great dexterity. But I doubt anyone really has a clear conception of a wave/particle. I certainly don't, despite a semester of quantum mechanics in college. We just weren't set up to think that way.
For this reason, I expect we'll continue to read articles like Saletan's long into the future. This is unfortunate, as neuroscience is becoming an increasingly important part of our lives and society, in a way quantum physics has yet to do. Consider, for instance, insanity pleas in the criminal justice system, lie detectors, and so on.
Can we make our body do what we want?
I read Fodor's Language of Thought over the summer. Towards the end of Chapter 1, he says psychological rules must have exceptions, since "even when the spirit is willing the flesh is often weak. There are always going to be behavioral lapses which are physiologically explicable but which are uninteresting from the point of view of psychological theory."
Like many cognitive scientists, I'm a big fan of Fodor. I love his theory of concepts put forth in Language of Thought. I even like his work when I disagree with it. This is a place I disagree: behavioral lapses are fascinating from the point of view of psychological theory.
There are two ways of interpreting this comment. One assumes dualism: there is an immaterial mind that tries to make the physical brain and body do what it wants. This theory is almost certainly wrong, but if it were right, cases in which the brain fails to do what the mind wants would be interesting. Why would that happen? Are these errors random stochastic noise, or are there patterns in the failures?
If we assume that all behavior arises from the activity of the brain (probably the right theory), behavioral lapses are even more interesting. We have conscious parts of our mind/brain that make explicit decisions (I'm getting out of bed now; I won't eat that slice of chocolate cake; I'm going to smile when I open this present, no matter what it is), but it's clear to any owner of a consciousness that making a decision is one thing -- making it happen is another (there are more prosaic cases as well, in which we mean to say one word but a different one comes out).
Perhaps wires sometimes get crossed and our decision isn't transmitted to the relevant module of the brain. That seems like a pretty serious design flaw, so why hasn't evolution fixed it? Perhaps there are non-conscious parts of our brain that sometimes override our conscious decisions ... in which case, what is consciousness for (according to some, not for making decisions).
Psychological theory aside, I suspect most people would like to understand why, despite a willing spirit, their flesh is so often weak.
Fractured Consciousness
The modern scientific consensus is that the 'mind' is just a word we use to describe our experience of our own brains in action. That is, mind and brain are more or less the same thing, just described at different levels (the semantics get tricky because the brain also regulates nonconscious processes such as heart rate, activities not normally thought of as in the domain of the mind).
That said, some in the scientific community and many in the general community have difficulty buying this 'astonishing hypothesis' (check out the comments to my last post on the subject).
Different people arrive at the hypothesis by their own paths. To me, the most compelling evidence is the range of bizarre consequences of brain damage. For instance, check out this late-December New York Times profile of a recent case of blindsight, a phenomenon in which a person, due to brain damage, believes herself to be blind, but is clearly able to see. Oliver Sacks's books are full of such cases, such as hemispheric neglect, in which people lose their awareness of half the world, to the extent that they eat from only one side of their plate, shave only one side of their face, and may even only be able to turn in one direction. A recent obituary of a famous amnesic noted how work with amnesics has shown that losing one's ability to form memories is in essence losing part of oneself.
Data like these make it hard to save dualism. If there is a non-material soul, it is not responsible for memory, for having a sense of left or right, or probably even for consciousness itself. That doesn't seem to leave much for the non-material soul to do. This conclusion may be disheartening, but it seems inescapable.
Mind and Brain
In periodic posts, I've been trying to lay out the modern scientific consensus on the mind/brain problem, with mixed success. If I had come across the following passage, from Ray Jackendoff's Language, Consciousness, Culture, a bit earlier, I might have saved some trouble, since I feel it is one of the clearest, most concise statements on the topic I have seen:
The predominant view is a strict materialism, in which consciousness is taken to be an emergent property of brains that are undergoing certain sorts of activity.
Although the distinction is not usually made explicit, one could assert the materialist position in either of two ways. The first would be 'methodological materialism': let's see how far we can get toward explaining consciousness under materialist assumptions, while potentially leaving open the possibility of an inexplicable residue. The second would be 'dogmatic materialism,' which would leave no room for anything but materialist explanation. Since we have no scientific tools for any sort of nonmaterialist explanation, the two positions are in practice indistinguishable, and they lead to the same research...
Of course, materialism goes strongly against folk intuition about the mind, which concurs with Descartes in thinking of the conscious mind as associated with a nonmaterial 'soul' or the like... The soul is taken to be capable of existence independently of the body. It potentially survives the death of the body and makes its way in the world as a ghost or a spirit or ensconced in another body through reincarnation... Needless to say, most people cherish the idea of being able to survive the death of their bodies, so materialism is more than an 'astonishing hypothesis,' to use Crick's (1994) term: it is a truly distressing and alienating one. Nevertheless, by now it does seem the only reasonable way to approach consciousness scientifically.
Physics is for wimps
Matt Springer may not have been throwing down the gauntlet in his Oct. 21 post, but I'm picking it up. In a well-written and well-reasoned short essay, he lays out just what is so difficult about the study of consciousness:
PZ Myers, as is his wont, recently wrote here that after his death he will have ceased to be. In other words, his experience of consciousness will have ended forever. Can we test this?
He goes on to describe some possible ways you might test the hypothesis. It turns out it is very difficult.
[PZ Myers] could die and then make the observation as to whether or not he still existed. If he still did he'd be surprised, but at least he'd be able to observe that he was still somehow existing. If he didn't still exist, he's not around to make the observation of his nonexistence. So personal experimentation can't verify his prediction.
Springer goes through some possible ways one might use neuroscience to test the hypothesis. None of them are very good either. In the end, he concludes:
Where am I going with this? Nowhere, that's the point. Clean experimental testability is why I like physics.
Now, I like physics, too. I almost majored in it. But I like cognitive science more for precisely this reason: developing the right experiment doesn't just take knowing the literature or being able to build precision machinery, though both help. What distinguishes the geniuses in our field is their ability to design an experiment to test something nobody ever thought was testable. (After that, the engineering skill comes in.)
Hands thrown up.
Many people threw up their hands at answering basic questions like how many types of light receptors do we have in our eyes or how fast does a signal travel down a nerve cell ("instantaneously" was one popular hypothesis) until Hermann von Helmholtz designed ingenious behavioral experiments long before the technology was available to answer those questions (and likely before anyone knew such technology would be available).
However, while Helmholtz pioneered brilliant methods for understanding the way the adult mind works, he declared it impossible to ever know what a baby was thinking. His methods wouldn't work with babies, and he couldn't think of any others. A hundred years later, however, researchers like Susan Carey, Liz Spelke and others pioneered new techniques to probe the minds of babes. Spelke managed to prove babies only a few months old have basic aspects of object perception in place. But Spelke herself despaired of ever testing certain aspects of object perception in newborns, until a different set of researchers (Valenza, Leo, Gava & Simion, 2006) devised an ingenious experiment that ultimately proved we are born with the ability to perceive objects (not just a blooming, buzzing confusion).
"I study dead people, everywhere."
I'm not saying I know how to test whether dead people are conscious. I'm still stumped by much easier puzzles. But a difficult question is a challenge, not a reason to avoid the subject.
Neuroimaging study does not disprove free will
An excellent new paper in Nature Neuroscience made a big splash last week by purporting to show that brain activity related to muscle movement starts up to ten seconds before the person is consciously aware of having made a decision to move. This study is in fact a replication and extension of previous research that had suggested that related brain activity starts at least 300 ms before the conscious decision. The big news presumably is that new technology (specifically pattern recognition algorithms) allowed the researchers to push back this time window (which really is big news and an excellent application of this technology).
The reason I am using words like "purported" is that there are some important methodological assumptions buried into this experiment (to be sure the authors did not mention "free will" in the actual paper, though I imagine they were aware of the implications). In this particular version of the experiment
This case was made some time ago by the philosopher Daniel Dennett in his excellent Freedom Evolves. The argument is somewhat long, but it goes like this. First, we have to assume that participants weren't deciding to press a particular button as soon as the next letter popped up; if they were doing that, they would have already made the decision before that letter appeared, throwing off the scientists' measurements. But even if we assume that is not the case, there is a bigger confound:
Dennett formulated this argument to explain the 300ms difference between conscious decision making and the related brain activity found in previous experiments. However, it certainly can be extended to the new study. If it turns out that it takes 10 seconds from the beginning of a decision to move until the actual movement is carried out, then we most definitely do not want to be aware of it. Much better if our minds trick us into thinking movement follows thought instantaneously.
This argument does require some mental time distortion: just because we think two things are happening simultaneously does not mean that they are. But why should they be? If we have learned anything about the brain in the last couple centuries, it is that perception is useful, not accurate.
Soon, C.S., Brass, M., Heinze, H.J., Haynes, J.D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience
The reason I am using words like "purported" is that there are some important methodological assumptions buried in this experiment (to be sure, the authors did not mention "free will" in the actual paper, though I imagine they were aware of the implications). In this particular version of the experiment:

The subjects were asked to relax while fixating on the center of the screen where a stream of letters was presented. At some point, when they felt the urge to do so, they were to freely decide between one of two buttons, operated by the left and right index fingers, and press it immediately. In parallel, they should remember the letter presented when their motor decision was consciously made.

Afterwards, they reported which letter had been on the screen when they made their decision. The assumption is that participants are reporting the letter correctly. We already know that conscious perception is a distortion of reality (in fact, this study is a demonstration of that fact), so this may not be a fair assumption.
This case was made some time ago by the philosopher Daniel Dennett in his excellent Freedom Evolves. The argument is somewhat long, but it goes like this. First, we have to assume that participants weren't deciding to press a particular button as soon as the next letter popped up; if they were doing that, they would have already made the decision before that letter appeared, throwing off the scientists' measurements. But even if we assume that is not the case, there is a bigger confound:
If we monitor your brain with an array of surface electrodes ... we will find that the brain activity leading up to [a hand movement] has a definite and repeatable time course, and a shape. It lasts the better part of a second ... ending when your wrist actually moves.

Dennett points out that we aren't aware that it takes our brains a good second to plan, coordinate and execute a simple motor movement.
When we perform an intentional action, we normally monitor it visually (and by hearing and touch, of course) to make sure it is coming off as intended. Hand-eye coordination is accomplished by a tightly interwoven system of sensory and motor systems. Suppose I am intentionally typing the words "flick the wrist" and wish to monitor my output for typographical errors. Since the motor commands take some time to execute, my brain should not compare the current motor command with the current visual feedback, since by the time I see the word "flick" on the screen, my brain is already sending the command to type "wrist" to my muscles.

The effect, though Dennett doesn't put it this way, of actually being aware of the time it takes for your conscious decision to be converted into muscle movement would be a bewildering sense of out-of-sync-ness, something like being drunk or watching a baseball game from a great distance, where the crack of the bat reaches your ears at the same time the image of the runner reaches first base.
Dennett formulated this argument to explain the 300ms difference between conscious decision making and the related brain activity found in previous experiments. However, it certainly can be extended to the new study. If it turns out that it takes 10 seconds from the beginning of a decision to move until the actual movement is carried out, then we most definitely do not want to be aware of it. Much better if our minds trick us into thinking movement follows thought instantaneously.
This argument does require some mental time distortion: just because we think two things are happening simultaneously does not mean that they are. But why should they be? If we have learned anything about the brain in the last couple centuries, it is that perception is useful, not accurate.
Soon, C.S., Brass, M., Heinze, H.J., Haynes, J.D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience
Understanding our own minds
Freud was wrong about most things, but one thing he was dead-on about was that we have at best limited access to our own minds. Here is an excellent quote from Daniel Dennett's Freedom Evolves:
For Descartes, the mind was perfectly transparent to itself, with nothing happening out of view, and it has taken more than a century of psychological theorizing and experimentation to erode this ideal of perfect introspectability, which we can now see gets the situation almost backward. Consciousness of the springs of action is the exception, not the rule, and it requires some rather remarkable circumstances to have evolved at all.

See also previous posts on related topics:
Your brain knows when you should be afraid, even if you don't.
Free will.
Not your granddaddy's subconscious mind.
Wait -- are you suggesting your brain affects your behavior?
One of my office-mates burst out laughing on Monday after receiving an email. The email was a forward, but it wasn't intended to be funny. It was a brief news blurb about a recent study looking at teenage impulsiveness, entitled "Teens' brains hold key to their impulsiveness."
What's funny about that? Well, where did the journalist think the key to impulsiveness was hidden -- in teens' kidneys? Many scientists puzzle over the fact that 150 years of biology have not driven out Creationism, but 150 years of psychology and neuroscience have been even less successful at driving out dualism. Many people -- probably most -- still believe in mind/brain duality.
Philosophers began suggesting that all human behavior is caused by the physical body at least as early as Thomas Hobbes in the 1600s. A century and a half of psychology and neuroscience has found no evidence of an immaterial mind, and now the assumption that all behavior and thought is caused by the physical body underlies essentially all modern research. It's true that nobody has proved that immaterial minds do not exist, but similarly nobody has ever proved the nonexistence of anything. It just seems very unlikely.
This leads to an interesting dichotomy between cognitive scientists and the general public. While journalists get very excited about studies that prove some particular behavior is related to some particular part of the brain, many cognitive scientists find such studies to be colossal wastes of time and money. It would be like a physicist publishing a study entitled "Silicon falls when dropped." Maybe nobody ever tested to see whether silicon falls when dropped, but the outcome was never really in doubt.
This isn't to say that the study I mentioned above wasn't a useful study. I have no doubt that it is a very useful study. Determining mechanistically what changes in what parts of the brain during development affect impulsiveness is very informative. The mere fact that the brain changes during development, and that this affects our behavior, is not.
Your brain knows when you should be afraid, even if you don't
I just got back to my desk after an excellent talk by Paul Whalen of Dartmouth College. Whalen studies the amygdala, an almond-shaped region buried deep in the brain. Scientists have long known that the amygdala is involved in emotional processing. For instance, when you look at a person whose facial expression is fearful, your amygdala gets activated. People with damage to their amygdalas have difficulty telling if a given facial expression is "fear" as opposed to just "neutral."
It was an action-packed talk, and I recommend that anybody interested in the topic visit his website and read his latest work. What I'm going to write about here are some of his recent results -- some of which I don't think have been published yet -- investigating whether you have to be consciously aware of seeing a fearful face in order for your amygdala to become activated.
The short answer is "no." What Whalen and his colleagues did was use an old trick called "masking." If you present one stimulus (say, a fearful face) very quickly (say, 1/20 of a second) and then immediately present another stimulus (say, a neutral face), the viewer typically reports only having seen the second stimulus. Whalen used fMRI to scan the brains of people while they viewed emotional faces (fearful or happy) that were masked by neutral faces. The participants said they only saw neutral faces, but the brain scans showed that their amygdalas knew different.
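The trial structure of backward masking is simple enough to sketch in a few lines. This is purely illustrative: the show() function is a hypothetical stand-in for a real display routine, and only the ~50 ms target duration comes from the description above.

```python
# Toy sketch of a backward-masking trial. show() is a made-up
# placeholder for a real stimulus-presentation call.
import time

def show(stimulus, duration_s):
    """Pretend to draw `stimulus` on screen for `duration_s` seconds."""
    time.sleep(duration_s)

def masked_trial(target="fearful_face", mask="neutral_face"):
    show(target, 0.05)  # 1/20 s -- too brief to reach awareness
    show(mask, 0.50)    # the mask overwrites the conscious percept
    return mask         # what the viewer reports having seen

print(masked_trial())  # -> neutral_face
```

The viewer's report is the mask, not the target, even though the target reached the retina (and, per Whalen's scans, the amygdala).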
One question that has been on researchers' minds for a while is what information the amygdala cares about. Is it the whole face? The color of the face? The eyes? Whalen ran a second experiment that was almost exactly the same, but he erased everything from the emotional faces except the eyes. The amygdala could still tell the fearful faces from the happy faces.
You could be wondering, "Does it even matter if the amygdala can recognize happy and fearful eyes or faces that the person doesn't remember seeing? If the person didn't see the face, what effect can it have?"
Quite possibly plenty. In one experiment, the participants were told about the masking and asked to guess whether they were seeing fearful or happy eyes. Note that the participants still claimed to be unable to see the emotional eyes. Still, they were able to guess correctly -- not often, but more often than if they were guessing randomly. So the information must be available on some level.
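A claim like "more often than if they were guessing randomly" is typically backed by something like a one-sided binomial test. Here is a minimal sketch with invented counts (the post doesn't report the actual numbers):

```python
# One-sided binomial test: how surprising is k-or-more correct guesses
# out of n two-choice trials if the guesser is at chance (p = 0.5)?
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Invented numbers: 60 correct out of 100 masked-eyes trials.
p_value = binom_p_at_least(60, 100)
print(round(p_value, 4))  # small p => better than chance guessing
```

With these made-up counts the p-value comes out below .05, which is the sense in which such guessing "beats chance."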
There are several ways this might be possible. In ongoing research in Whalen's lab, he has found that people who view fearful faces are more alert and better able to remember what they see than people who view happy faces. Experiments in animals show that when you stimulate the amygdala, various things happen to your body, such as your pupils dilating. Whalen interprets this in the following way: when you see somebody looking fearful, it's probably a clue that there is something dangerous in the area, so you had better pay attention and look around. It's possible that subjects who guessed correctly [this is my hypothesis, not his] were tapping into the physiological changes in their bodies in order to make these guesses. "I feel a little fearful. Maybe I just saw a fearful face."
For previous posts about the dissociation between what you are consciously aware of and what your brain is aware of, click here, here and here.
Not your granddaddy's subconscious mind
To the average person, the paired associate for "psychology," for better or worse, is "Sigmund Freud." Freud is probably best known for his study of the "unconscious" or "subconscious". Although Freudian defense mechanisms have long since retired to the history books and Hollywood movies, along with the ego, superego and id, Freud was largely right in his claim that much of human behavior has its roots outside of conscious thought and perception. Scientists are continually discovering new roles for nonconscious activities. In this post, I'll try to go through a few major aspects of the nonconscious mind.
A lab recently reported that they were able to alter people's opinions through a cup of coffee. This was not an effect of caffeine, since the cup of coffee was not actually drunk. Instead, study participants were asked to hold a cup of coffee momentarily. The cup was either hot or cold. Those who held the hot cup judged other people to be warmer and more sociable than those who held the cold cup.
This is one in a series of similar experiments. People are more competitive if a briefcase (a symbol of success) is in sight. They do better in a trivia contest immediately after thinking about their mothers (someone who wants you to succeed). These are all examples of what is called "social priming" -- where a socially-relevant cue affects your behavior.
Social priming is an example of a broader phenomenon (priming) that is a classic example of nonconscious processing. One simple experiment is to have somebody read a list of words presented one at a time on a computer. The participant is faster if the words are all related (dog, cat, bear, mouse) than if they are relatively unrelated (dog, table, mountain, car). The idea is that thinking about dogs also makes other concepts related to dogs (i.e., other animals) more accessible to your conscious thought. In fact, if you briefly present the word "dog" on the screen so fast that the participant isn't even aware of having seen it, they will still be faster at reading "cat" immediately afterwards than if "mountain" had flashed on the screen.
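For concreteness, the priming effect in an experiment like this is just the difference in mean reaction times between unrelated and related prime-target pairs. A toy calculation with made-up RTs:

```python
# Priming effect = mean RT (unrelated prime) - mean RT (related prime).
# All reaction times below are invented for illustration.
related_rt   = [512, 498, 530, 505]  # ms, e.g. "dog" -> "cat"
unrelated_rt = [561, 549, 572, 558]  # ms, e.g. "mountain" -> "cat"

def mean(xs):
    return sum(xs) / len(xs)

priming_effect = mean(unrelated_rt) - mean(related_rt)
print(f"priming effect: {priming_effect:.1f} ms")  # prints "priming effect: 48.8 ms"
```

A few tens of milliseconds is the typical scale of such effects, which is why they only show up in averaged data, not introspection.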
Mahzarin Banaji has made a career around the Implicit Association Test. In this test, you press a key (say "g") when you see a white face or a positive word (like "good" or "special" or "happy") and a different key (say "b") when you see a black face or a negative word (like "bad" or "dangerous"). You do this as fast as you can. Then the groupings switch -- good words with black faces and bad words with white faces. The latter condition is typically harder for white Americans, even those who self-report being free of racial prejudice. Similar versions of the test have been used in different cultures (e.g., Japan) and have generally found that people are better able to associate good words with their own in-group than with a non-favored out-group. I didn't describe the methodology in detail here, but trust me when I say it is rock-solid. Whether this truly measures implicit, nonconscious prejudice is up for debate. For the purposes of my post here, though, what matters is that the effect operates outside conscious awareness. (Try it for yourself here.)
Vision turns out to be divided into conscious vision and nonconscious vision. Yes, you read that correctly: nonconscious vision. The easiest way to test this for yourself is to cover one eye. You probably know that you need two eyes for depth perception, but with one eye covered, the world doesn't suddenly look flat. (At least, it doesn't for me.) You may notice some small differences, but to get a real sense of what you have lost, try playing tennis. The ball becomes nearly impossible to track. This is because the part of your vision that you use to orient in space is largely inaccessible to your conscious mind.
An even more interesting case study of this -- though not one you can try at home -- is blindsight. People with blindsight report being blind. As far as they can tell, they can't see a thing. However, if you show them a picture and ask them to guess what the picture is of, they can "guess" correctly. They can also reach out and take the picture. They are unaware of being able to see, but clearly on some level they are able to do so.
It is also possible to learn something without being aware of learning it. My old mentor studies contextual cueing. The experiment works like this: You see a bunch of letters on the screen. You are looking for the letter T. Once you find it, you press an arrow key to report which direction the "T" faces. This repeats many hundreds of times. Some of the displays repeat over and over (the letters are all in the same places). Although you aren't aware of the repetition -- if asked, you would be unable to tell a repeated display from a new display -- you are faster at finding the T on repeated displays than new displays.
In similar experiments on language learning, you listen to nonsense sentences made of nonsense words. Unknown to you, the sentences all conform to a grammar. If asked to explain the grammar, you would probably just say "huh?" -- but if asked to pick between two sentences, one of which is grammatical and one of which is not, you can do so successfully.
Actually, an experiment isn't needed to prove this last point. Most native speakers are completely ignorant of the grammar rules governing their language. Nobody knows all the grammar rules of their language. Yet we are perfectly capable of following those grammar rules. When presented with an ungrammatical sentence, you may not be able to explain why it's ungrammatical (compare "Human being is important" with "The empathy is important"), yet you still instinctively know there is a problem.
And the list goes on. If people can think of other broad areas of subconscious processing, please comment away. These are simply the aspects of the unconscious I have studied.
You'll notice I haven't talked about defense mechanisms or repressed memories. These Freudian ideas have fallen out of the mainstream. But the fact remains that conscious thought and perception are just one corner of our minds.
Subliminal messaging
When I was a small child, I thought the idea of subliminal messaging was way cool. Learn languages in your sleep! Control people's minds by inserting inaudible dialogue into the background! Wicked!
To the best of my knowledge, that type of subliminal messaging -- a hidden, language-based message -- doesn't exist (but if you have evidence of one, please comment!). Influencing another person's actions, however, turns out to be pretty easy. There are many well-documented ways to manipulate others. I will focus here on getting the answers you want. Basically, response management comes down to how you phrase the question.
In a classic study by Tversky and Kahneman, participants were given two options for combating a plague that was projected to kill 600 people. Plan A was sure to save 200 people. Plan B had a 1/3 probability of saving 600 and a 2/3 probability of saving nobody. 78% of participants took the safe option: A. Rephrasing the question in terms of deaths (400 guaranteed deaths under Plan A; a 1/3 probability of 0 deaths and a 2/3 probability of 600 deaths under Plan B) reversed the result: 78% of participants chose Plan B. This is because humans are risk-prone when dealing with losses ("let's hope for the best") but risk-averse when dealing with gains ("let's keep what we have").
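What makes the framing result so striking is that the two plans are statistically identical: the expected number of lives saved is the same either way, as a two-line check shows:

```python
# Expected number of people saved under each plan (600 at risk).
ev_plan_a = 200                      # 200 saved for sure
ev_plan_b = (1/3) * 600 + (2/3) * 0  # gamble: 1/3 chance of saving all 600

print(ev_plan_a, ev_plan_b)  # 200 200.0 -- identical expectations
```

Only the wording (lives saved vs. deaths) changes, so any preference reversal must come from the frame, not the math.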
In another study, Tversky and colleagues found that if you offer a shopper a "one time only" sale on a piece of merchandise (e.g., a Sony CD player), most (66%) will buy it, happy to avoid further shopping. If you offer them two different products (one by Sony, one by Aiwa), both on sale, nearly half (46%) will continue shopping rather than buy either. The addition of choices makes people less likely to choose.
In a different study (Strack & Mussweiler; pdf), one set of participants was asked, "Did Gandhi live to the age of 140?" The participants presumably all responded, "No." They were then asked to estimate how long Gandhi lived. The average estimate was 67. A second group of participants was first asked, "Did Gandhi live past the age of 9?" Again, presumably everybody replied correctly. On the second question, they estimated on average that Gandhi lived to 50.
There are many other examples. This is why experts will tell you that polls are next to meaningless unless you know the exact wording of the question. It's not subliminal mind control like in the movies, but manipulating people's decisions (or, at least their answers to surveys) is fairly easy.
(BTW, Gandhi lived to 78.)