Field of Science

Are babies prejudiced?

In 1994, in discussing how children come to learn about inheritance, Susan Carey and Elizabeth Spelke wrote: "There are many ways children may come to resemble their parents: Curly-haired parents may have curly-haired children because they give them permanents; prejudiced parents may have prejudiced children because they taught them to be so. Such mechanisms are not part of a biological process of inheritance..."

It's not clear that Carey & Spelke thought prejudice is taught to children rather than inherited through genes, but it's interesting that in picking only two examples of non-biological inheritance, Carey & Spelke chose prejudice as one. What makes this quotation remarkable is how unremarkable it is. It seems quite natural to assume that prejudice is learned. Recently, however, a number of researchers -- including Spelke -- have been suggesting that although the specifics of a prejudice may come through experience, being prejudiced is innate.

(Just to be clear, nobody I know is saying that prejudice is natural, good, or something that cannot be overcome. The specific claim is that it isn't something you have to learn.)

It's actually been known for a few years that infants prefer to look at familiar-race faces. Very recently, Katherine Kinzler in the Spelke lab at Harvard has started looking at language prejudice. People can get very fired up about language. Think about the fights over bilingualism or ebonics in the US. Governments have actively pursued the extinction of various non-favored, minority languages.

In a long series of studies, Kinzler has found evidence that this prejudice against other languages and against speakers of other languages is innate. Young infants prefer to look at a person who previously spoke their language over somebody who spoke a foreign language. Infants show the same preference for somebody who speaks with their accent rather than with a foreign accent. Older infants (who can crawl) will crawl towards a toy offered by someone who speaks their language rather than towards a toy offered by a foreign-language speaker. Keep in mind that these infants probably do not understand what is being said. Also, the speakers are bilingual (the infants don't know this), which allows the experimenters to control for things like what the speakers look like. For instance, for some babies, one speaker speaks English and the other French, and for the other babies, the languages are reversed. Also, French babies prefer French-speakers to English-speakers, while English babies prefer English-speakers to French-speakers.

Preschool children would rather be friends with somebody who speaks their own language, which is not surprising. They also prefer to be friends with somebody who uses their own accent rather than a foreign accent, even when they are able to understand what the foreign-accented child says.

Of course, none of this says that babies are born knowing which languages and accents to prefer. However, they seem to quickly work out which languages and accents are "in-group" and which are "out-group." This also doesn't say that linguistic prejudice cannot be overcome. For one thing, simply exposing children to many accents and languages would presumably do much all by itself. Although it's not possible yet to rule out alternative explanations, what it does suggest is that prejudice -- at least, linguistic prejudice -- can't be overcome simply by not teaching it to children. They must be actively taught not to be prejudiced.

The paper, which is pretty easy to understand, is not available on the authors' website, but if you have a decent library:

Kinzler, K. D., Dupoux, E., & Spelke, E. S. (2007). The native language of social cognition. Proceedings of the National Academy of Sciences, 104(30), 12577-12580.

Quantum Vision

Can quantum physics explain consciousness? The fact that the mind is instantiated in the physical brain has made it difficult for people to imagine how a physical object like the brain could give rise to conscious experience, in much the same way that it becomes difficult to believe in free will. A number of people have hoped to find the solution in the indeterminacy of quantum physics.

There is a new hypothesis out from Efstratios Manousakis of Florida State University. The phenomenon that he is interested in understanding is binocular rivalry. In binocular rivalry, a different image is displayed to each of your eyes. Instead of seeing a mishmash of the two images, you tend to see one, then the other, then the first one again, ad infinitum. It's not possible to do a demonstration over the internet, but the experience is similar to looking at a Necker Cube, where you first see it popping out of the page, then receding from the page, then popping out, and so on. Notice that what your "eye" sees doesn't change. But your conscious experience does.

Manousakis has found that quantum waveform formulas describe this reasonably well. The question is whether they describe it well because the phenomenon is a quantum phenomenon or because there are two different phenomena for which the same formulas happen to work. Keep in mind that binocular rivalry is something that can actually be seen with neuroimaging. That is, you can see the patterns in the brain change as the person first sees one image, then the other, etc. So if this is really a quantum effect, it is operating at a macro scale. New Scientist ran an interesting article on this story this past week. It's not clear from the article whether this is a problem Manousakis has thought about, and unfortunately his actual journal article isn't available on his website.

The neuroscience of theory of mind

The study of social cognition ("people thinking about people") and social neuroscience has exploded in the last few years. Much of that energy -- but by no means all of it -- has focused on Theory of Mind.

"Theory of Mind" is something we are all assumed to have -- that is, we all have a theory that other people's actions are best explained by the fact that they have minds which contain wants, beliefs and desires. (One good reason for calling this a "theory" is that while we have evidence that other people have minds and that this governs their behavior, none of us actually has proof. And, in fact, some researchers have been claiming that, although we all have minds, those minds do not necessarily govern our behavior.)

Non-human animals and children under the age of 4 do not appear to have theory of mind, except perhaps in a very limited sense. This leads to the obvious question: what is different about human brains over the age of 4 that allows us to think about other people's thoughts, beliefs and desires?

It might seem like Theory of Mind is such a complex concept that it would be represented diffusely throughout the brain. However, in the last half-decade or so, neuroimaging studies have locked in on two different areas of the brain. One, explored by Jason Mitchell of Harvard, among others, is the medial prefrontal cortex (the prefrontal cortex is, essentially, at the front of your brain; "medial" means it is on the interior surface, where the two hemispheres face each other, rather than on the exterior surface, facing your skull). The other is the temporoparietal junction (where your parietal and temporal lobes meet), first described in neuroimaging by Rebecca Saxe of MIT and colleagues.

Not surprisingly, there is some debate about which of these brain areas is more important (this breaks down in the rather obvious way) and also what the two areas do. Mitchell and colleagues tend to favor some version of "simulation theory" -- the idea that people (at least in some situations) guess what somebody else might be thinking by implicitly putting themselves in the other person's shoes. Saxe does not.

Modulo that controversy, theory of mind has been tied to a couple fairly small and distinct brain regions. These results have been replicated a number of times now and seem to be robust. This opens up the possibility, among other things, of studying the cross-species variation in theory of mind, as well as the development of theory of mind as children reach their fourth birthdays.

Having solved the question of monkeys & humans, I move on to children and adults

Newborns are incredibly smart. They appear either to be born into the world knowing many different things (the difference between Dutch and Japanese, for instance), or to learn them in the blink of an eye. On the other hand, toddlers are blindingly stupid. Unlike infants, toddlers don't know that a ball can't roll through a solid wall. What is going on?

First, the evidence. Construct a ramp. Let a ball roll down the ramp until it hits a barrier (like a small wall). The ball will probably bounce a little and rest in front of the wall. Now let an infant watch this demonstration, but with a screen blocking the infant's view of the area around the barrier. That is, the infant sees the ball roll down a ramp and go behind a screen but not come out the other side. The infant can also see that there is a barrier behind the screen. If you then lift the screen and show the ball resting beyond the barrier -- implying that the ball went through the solid barrier -- the infant acts startled (specifically, the infant will look longer than if the ball was resting in front of the barrier, as it should be).

Now, do a similar experiment with a toddler. The main difference is there are doors in the screen, one before the barrier and one after. The toddler watches the ball roll down the ramp, and their task is to open the correct door to pull out the ball. Toddlers cannot do this. They seem to guess randomly.

Here is another odd example. It's been known for many decades that three-year-olds do not understand false beliefs. One version of the task looks something like this. There are two boxes, one red and one green. The child watches Elmo hide some candy in the red box and then leave. Cookie Monster comes by, takes the candy, and moves it from the red box to the green box. Then Elmo returns. "Where," you ask the child, "is Elmo going to look for his candy?"

"In the green box," the child will reply. This has been taken as evidence that young children don't yet understand that other people have beliefs that can contradict reality. (Here's a related, more recent finding.)

However, Kristine Onishi and Renee Baillargeon showed in 2005 that 15-month-old infants can predict where Elmo will look, but instead of a verbal or pointing task, they just measured infant surprise (again, in terms of looking time). (Strictly speaking, they did not use "Elmo," but this isn't a major point.)

So why do infants succeed at these tasks -- and many others -- when you measure where they look, while toddlers are unable to perform verbal and pointing tasks that rely on the very same information?

One possibility is that toddlers lose an ability that they had as infants, though this seems bizarre and unlikely.

Another possibility I've heard is that the verbal and pointing tasks put greater demands on memory, executive functioning and other "difficult" processes that aren't required in the infant tasks. One piece of evidence is that the toddlers fail on the ball task described above even if you let them watch the ball go down the ramp, hit the wall and stop and then lower the curtain with two doors and make them "guess" which door the ball is behind.

A third possibility is something very similar to Marc Hauser's proposal for non-human primate thought. Children are born with many different cognitive systems, but only during development do they begin to link up, allowing the child to use information from one system in another system. This makes some intuitive sense, since we all know that even as adults, we can't always use all the information we have available. For instance, you may know perfectly well that if you don't put your keys in the same place every day, you won't be able to find them, but you still lose your keys anyway. Or you may know how to act at that fancy reception, but still goof up and make a fool of yourself.

Of course, as you can see from my examples, this last hypothesis may be hard to distinguish from the memory hypothesis. Thoughts?

How are monkeys and humans different (I mean, besides the tail)

Marc Hauser, one of a handful of professors to be tenured by Harvard University (most senior faculty come from other universities), has spent much of his career showing that non-human primates are smart. It is very dangerous to say "Only humans can do X," because Hauser will come along and prove that the cotton-top tamarin can do X as well. Newborn babies can tell Dutch from Japanese? Well, so can the tamarins.

For this reason, I have wondered what Hauser thinks really separates human cognition from that of other animals. He is well-known for a hypothesis that recursion is the crucial adaptation for language, but I'm never sure how wedded he is to that hypothesis, and certainly he can't think the ability to think recursively is all that separates human thought from tamarin thought.

Luckily for me, he gave a speech on just that topic at one of the weekly departmental lunches. Hopefully, he'll write a theory paper on this subject in the near future, if he hasn't already. In the meantime, I'll try to sketch the main point as best I understood it.

Hauser is interested in a paradox. In many ways, non-human primates look quite smart -- even the lowly tamarin. Cotton-top tamarins have been able to recognize fairly complex grammatical structures, yet they do not seem to use those abilities in the same ways we do -- for instance, they certainly don't use grammar.

In some situations, non-human primates seem to have a theory of mind (an understanding of the contents of another's mind). For instance, if a low-ranking primate (I forget the species, but I think this was with chimpanzees) sees two pieces of good food hidden and also sees that a high-ranking member of the troop can see where one piece was hidden but not the other, the low-ranking primate will high-tail it to the piece of food only he can see. That might seem reasonable. But contrast it with this situation: these primates also know how to beg for food from the researchers. What if a primate is confronted with two researchers, one who has a cloth over her eyes and one who has a cloth over her ears? Does the primate know to beg only from the one who can see? No.

Similarly, certain birds can use deception to lure a predator away from their nest, but they never use that deceptive behavior in other contexts where it might seem very useful.

These are just three examples where various animals seem to be able to perform certain tasks, but only in certain contexts or modalities. Hauser proposes that part of what makes humans so smart is the interfaces between different parts of our brains. We can not only recognize statistical and rule-based regularities in our environment -- just like tamarins -- but we can also use that information to produce behavior with these same statistical and rule-based regularities. That is, we can learn and produce grammatical language. We can take something we learn in one context and use it in another. To use an analogy he didn't, our brains are an office full of computers after they have been efficiently networked. Monkey computer networks barely even have modems.

This same theory may also explain a great deal of strange human infant behavior. More about that in the future.

Do ballplayers really hit in the clutch?

If you've been watching the playoffs on FOX, you'll notice that rather than present a given player's regular-season statistics, they've been mostly showing us their statistics either for all playoff games in their career, or just for the 2007 post-season. Is that trivia, or is it an actual statistic? For instance, David Ortiz hits better in the post-season than during the regular season. OK, one number is higher than the other, but that could just be random variation. Does he really hit better during the playoffs?

Why does this even matter? There is conventional wisdom in baseball that certain players hit better in clutch situations -- for instance, with men on base. This is why RBIs (runs-batted-in) are treated as a statistic, rather than as trivia. Some young Turks (e.g., Billy Beane of the Oakland A's) have argued vigorously that RBIs don't tell you anything about the batter -- they tell you about the people who bat in front of him (that is, they are good at getting on base). Statistically, it is said, few to no ballplayers hit better with men on and 2 outs.

So what about in the post-season?

I couldn't find Ortiz's lifetime post-season stats, so I compared this post-season, during which he's been phenomenally hot (.773 on-base percentage through the weekend -- I did this math last night during the game, so I didn't include last night's game), with the 2007 regular season, during which he was just hot (.445 on-base percentage).

There are probably several ways to do the math. I used a formula to compare two independent proportions (see the math below). I found that his OBP is significantly better this post-season than during the regular season. So that's at least one example...

Here's the math.

You need to calculate a t statistic, which is the difference between the two means (.773 and .445) divided by the standard deviation of the difference between those two means. The first part is easy, but the latter part is complicated by the fact that we're dealing with ratios. That formula is:

square root of: (P1*(1-P1)/N1 + P2*(1-P2)/N2)
where P1 = .773, P2 = .445, N1 = 659 (regular season at-bats - 1), N2 = 22 (post-season at-bats - 1).

t = 2.99, which gives a p value of less than .01.
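
If you'd rather let a computer do the arithmetic, here's a rough Python sketch of the same comparison. It's just an illustration: it plugs in the rounded numbers above, so the exact t it spits out depends on the precise at-bat counts and how each proportion is paired with its sample size, and it may not land exactly on the value I got.

import math

def two_proportion_t(p1, n1, p2, n2):
    # Difference between the two proportions divided by the standard error
    # of that difference (the square-root formula given above).
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Post-season OBP vs. 2007 regular-season OBP, using the rounded numbers above.
# Here each proportion is paired with its own (approximate) number of at-bats.
t = two_proportion_t(0.773, 22, 0.445, 659)

# With this many at-bats the t distribution is close to normal, so a
# two-tailed p-value can be approximated with the normal distribution.
p = math.erfc(abs(t) / math.sqrt(2))

print("t = %.2f, two-tailed p = %.4f" % (t, p))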

I was also considering checking just how unusual Colorado's winning streak is, but that's where my knowledge of statistics broke down (maybe we'll learn how to do that next semester). If anybody has comments or corrections on the stats above or can produce other MLB-related math, please post it in the comments.

New Harvard president to be installed today

Drew Faust will be installed today as Harvard's 28th president. That's right -- Harvard has only had 28 presidents since Henry Dunster was named in 1640. That's not counting some acting presidents, like Samuel Willard (1701-1707) or Nathaniel Eaton, who was "schoolmaster" from 1637 to 1639.

Faust is of course the first female president of Harvard. She is also the first since Charles Chauncy (1654-1672) to have neither an undergraduate nor graduate degree from Harvard (From 1672 to 1971, all Harvard presidents had done their undergraduate work at Harvard. The last three -- Derek Bok, Neil Rudenstine and Larry Summers -- had graduate degrees from Harvard).

The ceremony will be outdoors on Harvard Yard at 2pm. It rained heavily overnight, but it seems to be clearing up now. If the weather is decent, I'll check it out and report back. I hear tell that Harvard ceremonies have all the pomp and splendor you would expect, but I have yet to see one.

People who can't count

Babies can't count. Adults can. When I say that babies don't count, I don't mean that they don't know the words "one," "two," "three," or "four." That's obvious. What I mean is that if you give an infant the choice between 5 graham crackers and 7, the baby doesn't know which to pick.

Does that mean we have to learn numbers, or does it mean that the number system simply comes online as we mature? Babies also have bad vision, but that doesn't mean they learn vision. One of the reasons we might assume that number is innate rather than learned is that all reasonably intelligent children learn to count around the same time...or do they? This is where a few cultures, such as the Piraha, become very important.

I believe that I have heard that there are some languages that only have words for "one," "two," and "many," but I'm not sure, so if you know, please make comments. I am fairly certain that the Piraha are the only known culture not to even have a word for "one."

How does one know whether they have a word for "one"? Your first impulse might be to check a bilingual dictionary, but that begs the question. How did the dictionary-maker know? The way a few people have done it (like Peter Gordon at Columbia and Ted Gibson & Mike Frank from MIT) is to show the Piraha a few objects and see what they say. According to Mike Frank's talk at our lab a couple weeks ago, they never found a word that was used consistently to describe "one" of anything. Instead, there was a word that was used for small numbers of objects (1, 2, etc.), another word for slightly larger numbers, and a third word that seems to be used the way we use "many."

Well, maybe they just weren't using number words in this task. Did they really even understand what was being required of them? Who knows. But there are other ways to do the experiment. For instance, you can test them the same way we test babies. You show them two boxes. You put 5 pieces of candy into one box. Then you put 7 pieces of candy into the other box. Then you ask them which box they want. Remember that they never see both groups of candy at the same time, so they have to remember the groups of candy in order to compare them.

Well, Mike Frank tried this. This is an example of the responses he got:

"Can I have both boxes?"

No. You have to choose.

"Oh, is that what this game is about? I don't want to play this game. Who needs candy? Can we do spools of thread? My wife needs those. Or how about some shotgun shells?"

This experiment was a failure. Instead, they tried a matching task. You show them a row of, say, 5 spools of thread. Then you ask them to put down the same number of balloons as there are spools of thread. They can do this. Now, you change the game. You show them some number of spools of thread, then cover those spools. You then ask them to put down the same number of balloons. Since they can't see the thread, they have to do this by memory.

The Piraha fail at this and related tasks. People who can count do not.

Of course, they might not have understood the task. This is very hard to prove one way or another. I have been running a study in my lab that involves recent Chinese immigrants. I designed the study and tested it in English with Harvard undergrads. They found the task challenging, but they quickly figured out what I needed them to do. Some of my immigrant participants do so as well, but many of them find it impossibly difficult -- literally. Some of them have to give up.

It's not that they aren't smart. Most of them are Harvard graduate students or even faculty. What seems to be going on is a culture clash. For one thing, they aren't usually familiar with psychology experiments, since very few are done in China. I suspect that some of the things I ask them to do (repeat a word out loud over and over, read as fast as possible, etc.) may seem perfectly normal requests to my American undergraduates but very odd to my Chinese participants, just as the Piraha discussed above didn't want to choose boxes of candy. So it is always possible that the Piraha act differently in these experiments because they have different cultural expectations and have trouble figuring out what exactly is required of them.

That said, it seems pretty unlikely at this point that the Piraha have number words or count. This suggests counting must be learned. In fact, it suggests counting must be taught. This contrasts with language itself, which often seems to spring up spontaneously even when the people involved have had little exposure to an existing language. (Click here for a really interesting take on why the Piraha don't seem particularly interested in learning how to count.)

Brazil issues warrant for scientist's arrest

The government of Brazil recently ordered the arrest of the well-known linguist and anthropologist, Dan Everett.

Everett has been both famous and infamous for his study of the Piraha people, a small tribe in Brazil. He has made a number of extraordinary claims about their culture and language, such as that they do not have number words or myths.

These claims are important because they undermine a great deal of current linguistic and psychological theory, and so they have been hotly debated. Some of these debates, however, have spilled over from arguments about data and method to personal attacks.

Just before Everett spoke at MIT last fall, a local linguist sent out an email to what amounted to much of Boston's scientific community involved in language and thought. It looked like the sort of email that one means to send to a close friend and accidentally broadcasts. Language Log describes it better than I can, but the gist was that Everett is a liar who exploits the poor Piraha for his own fame and glory.

This was just one instance in a series of ad hominem attacks on Everett over the last few years. I am not going to weigh in on whether Everett is exploiting anybody, because I simply don't feel I know enough. I've never met a Piraha -- not that any of Everett's detractors have either, to my knowledge. If this means anything, I am told by friends who have visited the Piraha that they really like Everett.

A couple weeks ago, I heard from a friend who has collaborated with Everett that all further research on the Piraha language has been essentially banned. A warrant is out for Everett's arrest on charges of, essentially, exploiting the Piraha. I have no idea how much this has to do with the controversy the aforementioned linguist has been raising, but I suspect that it is not unrelated.

On the topic of language, my Web-based study of how people interpret sentences is still ongoing, and I could use more participants. Not to exploit this post to further my own academic fame and glory...

Scientists create mice with human language gene

Scientists at the Max Planck Institute in Germany recently announced that they had successfully knocked the human variant of the FOXP2 "language" gene into mice.

The FOXP2 gene, discovered in 2001, is the most famous gene known to be associated with human language. There has been some debate about what exactly it does, but a point mutation in the gene is known to cause speech and language disorders.

Part of the interest in FOXP2 stems from the fact that it is found in a wide range of species, including songbirds, fish and reptiles with only slight variations. Also, FOXP2 is expressed in many parts of the body, not just the brain. Previous research had found that removing the gene from mice decreased their vocalizations...and ultimately killed the mice.

In the new study, scientists created a new mouse "chimera" with the human variant of the FOXP2 gene. This time, the only differences they could find between the transgenic mice and typical mice were in their vocalizations.

Read more about FOXP2.

(Disclosure: This research does not appear to have been published yet. I heard about it from Marc Hauser of Harvard University, who heard about it this summer from a conference talk by Svante Paabo of Max Planck, one of the researchers involved in the project.)

GoogleScience

A Google search can help you find cutting-edge research. A Google search can also be cutting-edge research.

Many questions in linguistics (the formal study of language) and psycholinguistics (the study of language as human behavior) are answered by turning to a corpus. A corpus is a large selection of texts and/or transcripts. Just a few years ago, they were difficult and expensive to create. Arguably the most popular word frequency corpus in English -- the venerable Brown Corpus -- was based on one million words of text. One million words sounds like a lot, but so many of those are "the" and "of" that in fact many words do not appear in the corpus at all.

The Google corpus contains billions of pages of text.

So what does one do with a corpus? One obvious thing is to figure out which words are more common than others. The most common words in English are short function words like "a" (found on 4.95 billion web pages) and "of" (3.61 billion pages). Google, of course, doesn't tell you how many times a word appears, but only on how many pages it appears...which may actually be an advantage, since we're typically more interested in words that show up on many websites than in a word that shows up many, many times on just one website.

You might be interested in what the most common noun is. Is it "time," "man," "city," "boy," or "Internet?" You might check to see whether verbs or nouns are more common in English by comparing a large sample of verbs and nouns. You can also compare across languages. Children learn verbs more slowly than nouns in English but not in Chinese. Is that because verbs are more common in Chinese than English?
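
Just to illustrate the bookkeeping, here's a toy Python sketch of that noun-versus-verb comparison. The page counts in it are placeholders I made up for the example, not real Google numbers.

# Toy sketch of the noun-vs-verb comparison suggested above.
# The page counts are placeholders typed in by hand, not real search results.
noun_hits = {"time": 2_100_000_000, "man": 1_300_000_000, "city": 1_100_000_000}
verb_hits = {"see": 1_500_000_000, "make": 1_200_000_000, "roll": 300_000_000}

def mean_pages(hits):
    # Average number of pages on which the words in the sample appear.
    return sum(hits.values()) / len(hits)

print("average pages per noun: %.0f" % mean_pages(noun_hits))
print("average pages per verb: %.0f" % mean_pages(verb_hits))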

OK, those are fun experiments, but none of them sound very cutting-edge. If you want to see the Google corpus in action, check out Language Log. The writers there regularly turn to the Google corpus to answer their questions. Google is probably less commonly used in more formal contexts, but the PsycINFO database turned up 76 hits for "Google." Many were studies about how people use Google, but some were specifically using the Google corpus, such as "Building a customised Google-based collocation collector to enhance language learning," by Shesen Guo and Ganzhou Zhang. Another -- "Nine psychologists: mapping the collective mind with Google" by Jack Arnold -- looked at the organization of conceptual knowledge. At a recent conference, I saw a presentation by vision scientists using Google Image to explore the organization of visual memory. I expect to see more and more of this type of research in the near future.

Of course, there's nothing specific to Google about this. It's just what everybody seems to use.

Free will

In “Self is Magic,” a chapter from Psychology and Free Will, which is unfortunately not yet out, Daniel Wegner presents some fascinating data showing how easy it is to trick ourselves into believing we are in control of events when in fact we are not. Most of us are in some sense familiar with this illusion. Perhaps we believe our favorite team won partly because we were watching the game (or, if we’re pessimists, because we weren’t).

In one of the many experiments he describes, “the participant was attired in a robe and positioned in front of a mirror such that the arms of a second person standing behind the participant could be extended through the robe to look as though they were the arms of the participant.” The helper’s arms moved through a series of positions. If the participant heard through headphones a description of each movement before it happened, they reported an “enhanced feeling of control over the arm movements” compared with when they had not heard those descriptions in advance. (Read the paper here.)

Ultimately, he tries to use these data to explain the “illusion” of free will. In the following paragraphs, I am going to try to unpack the claim and see where it is strong and where it may be weak.

First, what exactly is he claiming? He does not argue that free will does not exist – he assumes it does not exist. That’s not a criticism– you can’t fit everything into one article – but it means I won’t be able to evaluate his argument against free will, though I will discuss free will in terms of his data and hypotheses below. Instead, he is interested in why we think we have free will. In all, he claims that (1) we can be mistaken about whether our thoughts cause events in the world, (2) this is because when we think about something happening and then it happens, we’re biased to believe we’ve caused it, and (3) this illusion that our conscious thoughts lead to actions is useful and adaptive – that is, evolution gave it to us for a reason.

That’s what he says. First, he isn’t really arguing that we don’t have a conscious will. Clearly, we will things to happen all the time. Some of the time he seems to be arguing that our will is simply impotent. The rest of the time he appears to think that the contents of our will are actually caused by something else. That is, our arm decides to move and tells our conscious thoughts to decide to want a cookie (more on this later).

Otherwise, I think claim #1 is pretty straightforward. Sometimes we have an illusion of conscious control when in fact we have none. Wegner compares this to visual illusions, of which there are plenty. Just as with visual illusions, even when you know it’s an illusion (watching the ballgame isn’t going to affect the score), you can’t help feeling it anyway. In fact, given that illusions exist in sight, sound and probably many other senses, it’s not surprising that the sense of conscious will is also subject to illusions.

It’s important to point out that this itself is not an argument against the potency of conscious will. The fact that we are sometimes mistaken does not mean we are always mistaken. Otherwise, we’d have to claim that because vision is sometimes mistaken, we are all blind. Of course, Wegner isn’t actually making this argument. He already assumes that free will is an illusion. He is just interested in this article in showing how that illusion might operate, which is point #2.

Although he describes the illusion of conscious choice as a magic show we put on for ourselves, this does not mean that he thinks conscious thought has no effect on behavior – that’s point #3 above. He simply doesn’t think that deciding to pick up a cookie leads to your hand reaching out to pick up a cookie. In fact, it would be a pretty extraordinary claim that conscious thought (and the underlying brain processes) has no purpose, effect or use whatsoever. In that case, why did we evolve it? This question is answerable, but it would be hard to answer. The claim, then, is simply that the conscious decision to perform a behavior does not cause that behavior.

Suppose we agree that picking up a cookie is not caused by the conscious decision to pick up a cookie – which, just to be clear, I don’t – what does cause that conscious decision? Wegner does not get into this question, at least not in this chapter, which is a shame. In these last paragraphs I’ll try to describe what he might mean and what the consequences would be.

What would it mean if the conscious mind did cause cookie-picking-up? That depends on what the conscious mind is. Perhaps it’s an ethereal, non-corporeal presence that makes a decision, then reaches down and pulls a lever and the hand reaches out to grab a cookie. That would be similar to what Descartes argued for many centuries ago, but it’s not something many cognitive scientists take seriously now. The basic assumption – for which there is no proof but plenty of good evidence – is that the mind is the brain. Activity in your brain doesn’t cause your conscious mind to want a cookie, nor does your conscious mind cause brain activity. Your conscious mind is brain activity. If we assume that this is how Wegner thinks about the mind, then his hypothesis can be restated:

The part of the brain that is consciously deciding to pick up a cookie does not give orders to the part of the brain that actually gives the motor commands to your hand to pick up the cookie. The motor cortex gets its marching orders from somewhere else.

This is an interesting hypothesis, and I’m not going to discuss it in too much detail right now. What I am interested in is what does this hypothesis have to say about free will? I would argue: maybe nothing. If your decisions are made in your brain by a non-conscious part of your mind (of which there are many) and the conscious part of your mind turned out to simply be an echo chamber where you tell yourself what you’ve decided to do, would you say that you have no free will?

The real question becomes: what decides to pick up the cookie? Where is the ultimate cause? Lack of free will means that the ultimate cause is external to the person. They picked up the cookie because of events that occurred out there in the world. Free will means that the ultimate cause was internal to the person. Nothing in Wegner’s article is really relevant to distinguishing between these possibilities (again, this is not a criticism. That wasn’t what his article was about. It’s what my article is about).

The loss of a belief in a non-corporeal mind has left us with a dilemma. Nothing we know about physics or chemistry allows for causes to be internal to a person in the sense that we mean when we say “free will”. This makes many people feel that free will can only exist if there is a non-corporeal mind operating outside the constraints of physics. On the other hand, nothing we know about physics or chemistry allows for consciousness to exist, yet essentially all cognitive scientists – including, probably, Wegner – are reasonably comfortable believing in consciousness without believing in a non-corporeal mind.

In the 19th century, physicists said that the sun could not be millions of years old, much less billions of years old, because there was no known mechanism in physics or chemistry that would allow the sun to burn that bright that long. Although entire fields of thought – such as evolution or geology – required an old, old sun in order to make any sense of their own data, the physicists said “Impossible! There must be another explanation for your data.” Later, they discovered the mechanism: fusion.

In the early 20th century, there were chemists who said that the notion of a “gene” was hogwash, because there was no known chemical mechanism for inheritance in the form of a gene. The fact that mountains of experimental data could not be explained without reference to “genes” didn’t bother them. Then Watson and Crick found the mechanism in the structure of DNA.

We may be in the same situation now. We have an incredible amount of data that only makes sense with reference to internal causation – free will. Evolution, Wegner says, built the belief in free will into us. Liz Spelke and others have run fantastic experiments showing that even infants only a few months old believe in something akin to free will. The world makes very little sense if we don’t believe that our friends, colleagues and random people on the street are causing their own behavior. Or maybe we’re not in the same situation, and free will is truly a figment of our imagination.

Physicists were right about one thing: the sun hasn’t been burning for billions of years. It doesn’t burn. It does something else entirely. The real answer to the question of free will may look like rote, dumb physical causation – a snowball rolling down a hill. It may look very similar to Descartes’ non-corporeal soul. Or it may look very different from both.

Note: Wegner is a very engaging writer. If you are interested, most of his articles are available on his website.

Scientists prove that if money could buy happiness, you wouldn't know what to buy

Humans are miserable at predicting how happy or how unhappy a given thing will make them, according to Daniel Gilbert, professor of psychology at Harvard University and author of Stumbling on Happiness.

In many realms of knowledge, people aren't very good at predicting the future. It turns out that fund managers rarely beat the market, experts are poor predictors of future events, and just about everybody is impaired at predicting how those events will make them feel. It's disappointing that neither mutual fund managers nor talking heads are really earning their salaries, but it is astonishing that most people can't predict how winning the lottery or losing their job will make them feel.

You might be tempted not to believe it, but there are dozens of carefully-contrived studies that show just that, many of them authored by Gilbert. In one study that is particularly relevant to me, researchers surveyed junior professors, asking them to predict how happy they would be in the next few years if they didn't make tenure. Not surprisingly, they expected to be pretty unhappy, and this was true of both people who did eventually make tenure and those who did not. However, when actually surveyed in the first five years after the tenure decision, the self-described level of happiness of those who did not get tenure was the same as that of those who did get tenure. This is not to say that being denied tenure didn't make those junior professors temporarily unhappy, but they got over it quickly enough.

(As a side-note, Gilbert mentions that both professors who got tenure and those who did not were happier than junior professors. "Being denied tenure makes you happier," he joked.)

The Big Question, then, is why are we so terrible at predicting future levels of happiness? One possibility is that we're just not that smart. Either there is no selective pressure for this ability -- and thus it never evolved -- or evolution just hasn't quite gotten there yet. Rhesus monkeys can't figure out whether they want 2 bananas or 4 bananas, and humans don't know how happy 2 or 4 bananas would make them feel.

Another possibility is that there's actually an advantage buried in this strange behavior. One possibility is that it's important to be motivated to have children (happy!) or protect your children from harm (unhappy!), but once you've had children, there's no actual advantage to increased happiness, and if your children die, there's no advantage in being unable to recover from it. (In fact, parents are typically less happy because of having children, and they live shorter lives as well. I don't know about happiness levels of parents who have lost children.)

Gilbert is unsure about this argument. For every evolutionary argument you can give me in favor of poor predictions of happiness, he argued, I can give you one against. For instance, you also predict that being rejected by a potential mate will make you more unhappy than it actually does. Thus you might not approach potential mates and thus not have children. It's ultimately a hard question to test scientifically.

(If you are wondering where I got these Gilbert quotes, it's from his lecture to the first-year graduate students in the psychology program last Thursday.)