Field of Science

Why languages can't be learned

One of the most basic, essentially undisputed scientific facts about language -- and the one that tends to get the most interest from laypeople -- is that while learning a foreign language as an adult is very difficult, children learn their native languages with as much ease and proficiency as they learn to walk. This has led researchers such as Steven Pinker to call language learning an "instinct." In fact, this "instinct" is more than remarkable -- it's miraculous. On careful reflection it seems impossible to learn just a single word in any language, much less an entire vocabulary (and thus figuring out how we nonetheless all learned a language is a major area of research).

The paradox goes back to W. V. O. Quine (who, I'm proud to say, is a fellow Obie), who suggested this thought experiment: Suppose you are an anthropologist trying to learn the language of a new, previously undiscovered tribe. You are out in the field with a member of the tribe. Suddenly, a rabbit runs by. The tribesperson points and says, "Gavagai!"

What do you make of this? Most of us assume that "gavagai" means "rabbit," but consider the possibilities: "white," "moving whiteness," "Lo, food", "Let's go hunting", or even "there will be a storm tonight" (suppose this tribesperson is very superstitious). Of course, there are even more exotic possibilities: "Lo, a momentary rabbit-stage" or "Lo, undetached rabbit parts." Upon reflection, there are an infinite number of possibilities. Upon further reflection (trust me on this), you could never winnow away the possibilities and arrive at the meaning of "gavagai" ... that is, never unless you are making some assumptions about what the tribesperson could mean (for instance, that definitions involving undetached rabbit parts are too unlikely to even consider).
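To see concretely why the assumptions do all the work, here is a toy sketch (mine, not Quine's) of a learner tracking which candidate meanings survive a couple of "gavagai" episodes. The scenes and the candidate meanings are invented for illustration; the point is just that the data alone never force a unique answer, while a simple bias against gerrymandered meanings narrows the field immediately.

# Toy "gavagai" learner. Each scene lists everything true of the situation
# when the speaker says "gavagai!" (observations invented for illustration).
scenes = [
    {"rabbit", "white", "moving", "undetached rabbit parts"},
    {"rabbit", "brown", "sitting", "undetached rabbit parts"},
]

candidates = ["rabbit", "white", "moving", "undetached rabbit parts"]

# Without any assumptions: keep every meaning consistent with every scene.
consistent = [m for m in candidates if all(m in scene for scene in scenes)]
print(consistent)  # ['rabbit', 'undetached rabbit parts'] -- still ambiguous

# With an assumption that gerrymandered, part-based meanings are off the table:
plausible = [m for m in consistent if "parts" not in m]
print(plausible)  # ['rabbit']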

Quine offered this thought experiment in a discussion about translation, but it clearly applies to the problems faced by any infant. To make matters worse, people rarely name objects in isolation -- parents don't say "bunny," they say "Look, see the bunny?" or "Look at that bunny go!"

Generally, it should be very clear that infants could not learn a language if they didn't make certain assumptions about which words meant what. One of the major areas of modern psycholinguistics is figuring out what those assumptions are and where they come from (that is, whether they are innate or learned).

Long-time readers know that the major focus of my research is on how people resolve ambiguity in language. My first web-based experiment on this topic has been running for a while. Last week I posted a new experiment. Participants hear sentences like "Show me the dax" and try to guess which of several new objects might be the "dax." As usual, I can't say much about the purpose of the experiment while it's still running, but participants who finish the experiment will get an explanation of the experiment and also will get to see their own results. You can try it by clicking here.

How does the brain read?

Reading is an important skill, so it's not surprising it gets a lot of attention from researchers. Reading is an ancient skill -- at least in some parts of the world -- but not so old that we don't know when it was invented (as opposed to, for instance, basic arithmetic). And, unlike language, it appeared recently enough in most of the world that it's unlikely that evolution has had time to select for reading skill...which would explain the high prevalence of dyslexia.

Some decades ago, there was a considerable amount of debate over whether reading is phonologically based -- that is, "sounding out" is crucial (CAT -> /k/ + /æ/ + /t/ -> /kæt/) -- or based on visual recognition -- that is, you simply recognize each word as a whole form (CAT -> /kæt/). People who favored the former theory emphasized phonics-based reading instruction, while the latter theory resulted in "whole language" training.
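As a cartoon version of the two proposals, here is a short Python sketch. The letter-to-sound table and the whole-word lexicon are invented and cover only a couple of perfectly regular words, so this illustrates the distinction rather than modeling reading.

# Cartoon contrast between the phonological route ("sounding out") and the
# whole-word visual route. All mappings below are invented toy examples.
GRAPHEME_TO_PHONEME = {"c": "k", "a": "æ", "t": "t", "d": "d", "o": "ɒ", "g": "g"}
WHOLE_WORD_LEXICON = {"cat": "kæt", "dog": "dɒg"}

def sound_out(word):
    """Phonological route: assemble the pronunciation letter by letter."""
    return "".join(GRAPHEME_TO_PHONEME[letter] for letter in word)

def recognize_whole_word(word):
    """Visual route: retrieve the whole word as a single stored form."""
    return WHOLE_WORD_LEXICON[word]

print(sound_out("cat"), recognize_whole_word("cat"))  # kæt kæt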

At least from where I sit, this debate has been largely resolved in favor of phonics. This isn't to say that skilled readers don't recognize some high-frequency words as wholes, but it does mean that sounding out words is crucial, at least in learning to read. One important piece of evidence is that "phonological awareness" -- the ability to figure out that CAT has 3 sounds but COLON has 5, or that DOG and BOG rhyme -- is just about the best predictor of reading success. That is, preschoolers who are at the bottom of the pack in terms of phonological awareness tend to end up at the bottom of the pack when learning to read.

At least, that is the story for writing systems like English that are alphabetic. There has been some question as to the role of phonology in learning to read character-based systems like Chinese. Two years ago, a group including Li Hai Tan of Hong Kong University presented evidence that in fact phonological awareness may not be particularly important in learning to read Chinese.

I have been trying to test one aspect of their theory for some time. Not having collaborators in China or Taiwan, I have to recruit my Chinese-speakers here in Cambridge, which is harder than you might think. The first experiment I ran took nearly six months, most of which was spent trying to recruit participants, and it was ultimately inconclusive. Last spring I piloted a Web-based version of the experiment, thinking that I might have more luck finding Chinese participants through the Internet. However, that experiment failed. I think it was too complicated and participants didn't understand what to do.

I have spent the last few months thinking the problem through, and now I have a new Web-based study. I am trying it in English first, and if it works well enough, I will write a Chinese version of the experiment. If you are interested, please try it out here.

Knowing the meanings of words

In “On the evolution of human motivation: the role of social prosthetic systems,” Stephen Kosslyn makes a very interesting conjecture about social interactions. He argues that, for a given person, “other people serve as prosthetic devices, filling in for lacks in an individual’s cognitive or emotional abilities.” This part seems hard to argue with. Intuitively, we all rely on other people to do certain things for us (mow our grass, edit our papers, provide us with love). His crucial insight is that “the ‘self’ becomes distributed over other people who function as long-term social prosthetic systems.”

You may or may not agree with that stronger claim. I haven't made up my own mind yet. I recommend reading the paper itself, which unfortunately is not available on his website but should be available in a decent college library.

There is one interesting application of his idea to an old problem in linguistics and philosophy.
What is the problem? Intuitively, we would like to believe that our words pick out things in the world (although words and concepts are not interchangeable, for the purposes of this discussion, they have the same problems). When I say “cows produce milk,” I like to believe that this sentence is either true or false in the world. For this to even be plausible, we have to assume that the words “cow” and “milk” refer to sets of real, physical objects.

This is problematic in myriad ways. It is so full of paradoxes that Chomsky has tried to define away the problem by denying that words refer to anything in the world. I will focus on one particular problem that is relevant to the Kosslyn conjecture.

If you are like me, you know nothing about rare plants such as the three-seeded mercury or the Nova Scotia false-foxglove. Yet, we are able to have conversations about them. I can tell you that both are endangered in the state of Maine, for instance. If I tell you that they both survive on pure boron, you would probably be skeptical. Thus, we can talk about these plants, make empirical claims about them, and learn new things about them without having any idea what these words actually pick out in the world. This is true of a large number of things we talk about on a daily basis. We talk about people we have never met and places we have never been.

What distinguishes these words from words that truly have no reference? To you, likely neither the word “Thistlewart” nor the word “Moonwart” means anything. Now, suppose I tell you the first is a made-up plant, while the second is a real plant. To you, these are still both essentially empty words, except that one refers to something in the world (though you don’t know what) and the other doesn’t.

Intuitively, what makes “Thistlewart” an empty concept and “Moonwart” not is that you believe there is some expert who really does know what a Moonwart is and could pick one out of a lineup. This “Expert Principle” has seemed unsatisfying to many philosophers, but within the context of the “social prosthetic system” theory, it seems quite at home. Certainly, it seems like it might at least inform some of these classic problems of reference and meaning.

Wait -- are you suggesting your brain affects your behavior?

One of my office-mates burst out laughing on Monday after receiving an email. The email was a forward, but it wasn't intended to be funny. It was a brief news blurb about a recent study looking at teenage impulsiveness, entitled "Teens' brains hold key to their impulsiveness."

What's funny about that? Well, where did the journalist think the key to impulsiveness was hidden -- in teens' kidneys? Many scientists puzzle over the fact that 150 years of biology have not driven out Creationism, but 150 years of psychology and neuroscience have been even less successful. Many people -- probably most -- still believe in mind/brain duality.

Philosophers began suggesting that all human behavior is caused by the physical body at least as early as Thomas Hobbes in the 1600s. A century and a half of psychology and neuroscience has found no evidence of an immaterial mind, and now the assumption that all behavior and thought is caused by the physical body underlies essentially all modern research. It's true that nobody has proved that immaterial minds do not exist, but similarly nobody has ever proved the nonexistence of anything. It just seems very unlikely.

This leads to an interesting dichotomy between cognitive scientists and the general public. While journalists get very excited about studies that prove some particular behavior is related to some particular part of the brain, many cognitive scientists find such studies to be colossal wastes of time and money. It would be like a physicist publishing a study entitled "Silicon falls when dropped." Maybe nobody ever tested to see whether silicon falls when dropped, but the outcome was never really in doubt. 

This isn't to say that the study I mentioned above wasn't a useful study. I have no doubt that it is a very useful study. Determining mechanistically what changes in what parts of the brain during development affect impulsiveness is very informative. The mere fact that the brain changes during development, and that this affects our behavior, is not.

Scientists arguing about the scientific method

The scientific method should be at least passingly familiar to most people who took a high school science class. Generate a hypothesis, then design an experiment that will either support or contradict your hypothesis. A more nuanced version is to find two competing hypotheses, then design an experiment that will unambiguously support at most one of those two hypotheses.

But is this what scientists actually do? Is it what scientists should do?

This question was put to us by Ken Nakayama in our first-year graduate psych seminar last week. Though it may surprise some of you, his answer was "no." In contrast to theory-driven research (the proposal above), Nakayama prefers data-driven research.

Although there are some good descriptions and defenses of theory-driven research, I don't know of one for data-driven research. Here's my best effort at describing the two.

Suppose you are a tinkerer who wants to know how a car works. If you are a theory-driven tinkerer, you would start with competing hypotheses (that tube over there connects the gas tank to the engine VS that tube over there is part of an air circulation system) and conduct experiments to tease those hypotheses apart. The theory-driven tinkerer will focus her energies on experiments that will best tease apart the most important theories, ignoring phenomena that aren't theoretically important. 

A data-driven tinkerer would say, "I wonder what happens if I do this," do it, and see what happened. That is, she may run experiments without having any hypotheses about the outcome, just to see what happens. If the data-driven tinkerer's attention is caught by some odd phenomenon (the car seems to run better in the afternoon than in the morning), she may pursue that phenomenon regardless of whether it seems theoretically interesting or helps distinguish between competing hypotheses. 

One potential reason to favor data-driven research is that theory-driven research is constrained by our theories (which, at this stage in psychology and cognitive neuroscience, frankly aren't very good), while data-driven research is constrained only by your imagination and skill as an experimentalist. Data-driven exploration, one might argue, is more likely to lead to surprising discoveries, while theory-driven research may only show you what you expected to see.

I suspect that most psychologists use some combination of the two strategies, though when it comes time to write a paper, it seems to be easier to publish data that are relevant to theory (whether it was theory that led you to do the experiment in the first place is another question).

Thoughts?

How do children learn to count? Part 3

Two posts ago, I presented some rather odd data about the developmental trajectory of counting. It turns out children learn the meanings of number words in a rather odd fashion. In my last post, I described the "number" systems that are in place in animals and in infants before they learn to count. Today, I'll try to piece all this together to explain how children come to be able to count.

Children first learn to map number words onto a more basic numerical system. They learn that "one" maps on to keeping track of a single object. After a while, they learn "two" maps onto keeping track of one object plus another object. Then they learn that "three" maps onto keeping track of one object plus another object plus another object. All this follows from the Wynn experiments I discussed two posts ago.

Up to this point, they've been learning the meanings of these words independently, but around this time they notice a pattern. They know a list of words ("one, two, three, four") and that this list always goes in the same order. They also notice that "two" means one more object than "one," and that "three" means one more object than "two." They put two and two together and figure out that "four" must mean one more object than "three," even though their memory systems at that age don't necessarily allow them to pay attention to four objects simultaneously. Having made this connection, figuring out "five," "six," etc., comes naturally.
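The inductive step described above -- each word in the list means one more than the word before it -- can be written down almost literally. A minimal sketch, assuming only the memorized count list and the three meanings already learned through object tracking:

# The count list the child has memorized, always recited in the same order.
count_list = ["one", "two", "three", "four", "five", "six"]

# Meanings acquired one at a time via the object-tracking system.
meanings = {"one": 1, "two": 2, "three": 3}

# The induction: each later word means one more object than the word before it.
for previous, word in zip(count_list, count_list[1:]):
    if word not in meanings:
        meanings[word] = meanings[previous] + 1

print(meanings)  # {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5, 'six': 6}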

So what is that more basic number system? One possibility is that children learn to map the early number words onto the analog number system I described in the last post (the system adults use to estimate number when we don't have time to count).

Something like this claim has been made by a number of well-known researchers (Dehaene, Gallistel, Gelman and Wynn, to name a few). There are a number of a priori reasons Susan Carey of Harvard thinks this won't work, but even more important is the data.

As I described two posts ago, very young children can hand you one marble when asked, but hand you random numbers of marbles if asked for "two," "three" or any larger number. They always give you more than one, but they can't distinguish between the other numbers. Following Wynn, these are called "one-knowers." Slightly older children are "two-knowers," who can give you one or two marbles, but give you random amounts greater than 2 if asked for any other number. At the next stage, the child becomes a "three-knower." Usually, the next stage is being able to succeed on any number. I'll call those "counters."

Recently, LeCorre and Carey replicated this trajectory using cards with circles on them. They presented the children a card with some number of circles (1 to 8) and asked the kid, "How many?" One-knowers tended to reply "one" to a card with one circle, and then guessed incorrectly for just about everything else. Two-knowers could count one or two circles, but guessed incorrectly for all the other cards. Three-knowers could count up to three, but just guessed beyond that. Counters answered correctly on essentially all cards.
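For concreteness, here is one (invented) way to formalize the classification: call a child an N-knower if she is reliably correct for every tested set size up through N and fails beyond that. The response data and the 75% "reliability" threshold below are made up for illustration.

# Toy knower-level classifier for "How many?" responses.
# responses maps the number of circles shown to the child's answers (invented data).
def knower_level(responses):
    """Largest n such that the child is reliably correct at every tested size up to n."""
    def reliable(n):
        answers = responses[n]
        return sum(a == n for a in answers) / len(answers) >= 0.75  # made-up threshold

    level = 0
    for n in sorted(responses):
        if reliable(n):
            level = n
        else:
            break
    return "counter" if level == max(responses) else f"{level}-knower"

two_knower = {1: [1, 1, 1], 2: [2, 2, 2], 3: [5, 2, 7], 6: [4, 8, 3]}
print(knower_level(two_knower))  # prints "2-knower"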

So far this doesn't tell us whether children learn to count by bootstrapping off of analog magnitudes or some other system. Carey and Mathieu LeCorre published a paper this year that seems to settle the question. The setup was exactly the same as in the last paper (now with cards with anywhere from 1 to 10 circles), except that this time the children were only briefly shown the card. They didn't have enough time to actually count "one, two, three..." The data for one-, two- and three-knowers didn't change, which isn't surprising. Both the "3-object" and the analog magnitude systems are very fast and shouldn't require explicit counting.

However, counters fell into two groups. One group, about 4.5 years old on average, answered just as adults do. When they saw six circles, their answers averaged around "six." When they saw ten circles, their answers averaged around "ten." This is what you'd expect if they have mapped number words onto the analog magnitude system.

However, the other group, which was slightly younger (average age of 4 years, 1 month), guessed randomly for cards with 5 or more circles, just as if they didn't know how to count. However, these kids can count. If given time to look at the cards, they would have said the right number. So despite the fact that they can count, they do not seem to have their analog magnitude system mapped onto number words.

This means that it actually takes children some time to map the analog magnitude system onto number words, even after they've learned to count. Carey takes this as evidence that the analog magnitude system doesn't play a fundamental role in learning to count, and there are other reasons as well to think this is probably the case.

One remaining possibility is that children use the "3-object system" to understand the meanings of 1, 2 and 3. This seems to work nicely, given that the limits of the system (3 objects in children, 4 in adults) seem to explain why children can learn "one," "two," and "three" without really learning to count. Carey actually has a somewhat more nuanced explanation in which children learn the meanings of "one," "two," and "three" the same way that quantifiers (like "a" in English) are learned. However, to the best of my knowledge, she doesn't have an account of how such quantifiers are learned, and if she did, I suspect it would itself hinge on the 3-object system anyway.

That's it for how children learn to count, unless I get enough comments asking for more details on any point. For those who want to read more, there are many papers on this subject at Carey's web page.

How do children learn to count? Part 2

In my last post, I showed that children learn the meaning of number words in a peculiar but systematic fashion. Today, I'll continue trying to explain this odd behavior.

Important to this story is that children (and non-human primates) are born with several primitive but useful numerical systems that are quite different from the natural number system (1, 2, 3, ...). They can't use these systems to count, but they may be useful in learning to count. In this post, I'll try to give a quick summary of how they work.

One is a basic system that can track about 3-4 objects at a time. This isn't a number system per se, just an ability to pay attention to a limited and discrete number of things, and it may or may not be related to similar limits in visual short-term memory.

You can see this in action by playing the following game with a baby under the age of 2. Show the baby two small boxes. Put a single graham cracker into one of the boxes. Then put, one at a time, two graham crackers into the other box. Assuming your baby likes graham crackers, she'll crawl to the box with two graham crackers. Interestingly, this won't work if you put two graham crackers in one box and four in the other. Then, the baby chooses between the boxes randomly. This is understood to happen because the need to represent 6 different objects all in memory simultaneously overloads the poor baby's brain, and she just loses track. (If you want to experience something similar, try to find a "multiple object tracking" demo with 5 or more objects. I wasn't able to find one, but you can try this series of demos to get a similar experience.)

On the other hand, there is the analog magnitude system. Infants and non-human animals have an ability to tell when there are "more" objects. This isn't exact. They can't tell 11 objects from 12. But they can handle ratios like 1:2. (The exact ratio depends on the animal and also where it is in maturity. We can distinguish smaller ratios than infants can.)

You can see this by using something similar to the graham cracker experiment. Infants like novelty. If you show them 2 balls, then 2 balls again, then 2 balls again, they will get bored. Then show them 4 balls. They suddenly get more interested and look longer. However, this won't happen if you show them 4 balls over and over, then show them 5. That ratio is too similar. (I'm not sure if you get this effect in the graham cracker experiment. I suspect you do, but I couldn't find a reference off-hand. The graham cracker experiment is more challenging for infants, so it's possible the results might be somewhat different.)

You can also try this with adults. Show them a picture with 20 balls, and ask them how many there are. Don't give them time to count. The answer will average around 20, but with a good deal of variation. They may say 18, 19, 21, 22, etc. If you give the adult enough time to count, they will almost certainly say "20."
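A common way to model this analog system is "scalar variability": the internal estimate of n objects is noisy, and the noise grows in proportion to n. Here is a toy simulation under that assumption (the Weber fraction of 0.2 is made up); it reproduces both behaviors described above -- estimates that scatter around 20, and discrimination that depends on the ratio of the two numbers rather than their difference.

import random

WEBER_FRACTION = 0.2  # made-up noise level; real values vary by species and age

def analog_estimate(n):
    """Noisy magnitude estimate: the standard deviation grows in proportion to n."""
    return random.gauss(n, WEBER_FRACTION * n)

# Estimating 20 objects without counting: answers cluster around 20 but vary.
print([round(analog_estimate(20)) for _ in range(5)])

def fraction_correct(smaller, larger, trials=10000):
    """How often the larger set is correctly judged larger, given noisy estimates."""
    wins = sum(analog_estimate(larger) > analog_estimate(smaller) for _ in range(trials))
    return wins / trials

print(fraction_correct(2, 4))    # easy 1:2 ratio -- near ceiling
print(fraction_correct(11, 12))  # hard 11:12 ratio -- much closer to chance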

Those are the two important prelinguistic "number" systems. In my next post, I'll try to piece all this information together.

How do children learn to count? Part 1

How do children learn to count? You could imagine that numbers are words, and children learn them like any other word. (Actually, this wouldn't help much, since we still don't really understand how children learn words, but it would neatly deflect the question.) However, it turns out that children learn to count in a bizarre fashion quite unlike how they learn about other words.

If you have a baby and a few years to spend, you can try this experiment at home. Every day, show your baby a bowl of marbles and ask her to give you one. Wait until your baby can do this. This actually takes some time, during which you'll either get nothing or maybe a handful of marbles.

Then, one day, between 24 and 30 months of age, your toddler will hand you a single marble. But ask for 2 marbles or 3 marbles, etc., and your toddler will give you a handful. The number of marbles won't be systematically larger if you ask for 10 than if you ask for 2. This is particularly odd, because by this age the child typically can recite the count list ("one, two, three, four...").

Keep trying this, and within 6-9 months, the child will start giving you 2 marbles when asked for, but still give a random handful when asked for 3 or 4 or 5, etc. Wait a bit longer, and the child will manage to give you 1, 2 or 3 when asked, but still fail for numbers greater than 3.

This doesn't continue forever, though. At around 3 years old, children suddenly are able to succeed when asked for any set of numbers. They can truly count. (This is work done by Karen Wynn some years ago, who is now a professor of psychology at Yale University.)


Of course, this is just a description of what children do. What causes this strange pattern of behavior? We seem to be, as a field, homing in on the answer, and in my next post I'll describe some new research that sheds light onto the question.

SNPs and genes for language

Modern genetic analyses have told us a great deal about many aspects of the human body and mind. However, genetics has been relatively slow in breaking into the study of language. As I have mentioned before, a few years ago researchers reported that a damaged version of the gene FOXP2 was responsible for the language impairments in the KE family. This sounds more helpful than it really was, since it turns out that even some reptiles have versions of the FOXP2 gene. In humans, FOXP2 isn't just expressed in the brain -- it's expressed in the gut as well. This means that there is a lot more going on than just having FOXP2 or not.

Over the weekend, researchers presented new data at the Boston University Conference on Language Development that homes in on what, exactly, FOXP2 does.

It turns out that there is a certain amount of variation in genes. One type of variation is a Single Nucleotide Polymorphism (SNP), which is a single base pair in a string of DNA that varies from animal to animal within a species. Some SNPs may have little or no effect. Others can have disastrous effects. Others are intermediate. The Human Genome Project simply cataloged genes. Scientists are still working on cataloging these variations. (This is the extent of my knowledge. If any geneticists are reading this and want to add more, please do.)

The paper at BUCLD, written by J. Bruce Tomblin and Jonathan Bjork of the University of Iowa and Morten H. Christiansen of Cornell University, looked at SNPs in FOXP2. They selected 6 for study in a population of normally developing adolescents and a population of language-impaired adolescents.

Two of the six SNPs under study correlated well with a test of procedural memory (strictly speaking, one correlation was only marginally statistically significant). One of these SNPs predicted better procedural memory function and was more common in language-normal adolescents; the other predicted worse procedural memory function and was more common in language-impaired adolescents.

At a mechanistic level, the next step will be understanding what the proteins created by these different versions of FOXP2 actually do. From my perspective, I'm excited to have further confirmation of the theory that procedural memory is important in language. More importantly, though, I think this study heralds a new, exciting line of research in the study of human language.

(You can read the abstract of the study here.)

Finding guinea pigs

One problem that confronts nearly every cognitive science researcher is attracting participants. This is less true perhaps for vision researchers, who can sometimes get away with testing only themselves and their coauthors, but it is definitely a problem for people who conduct Web-based research, which often needs hundreds or even thousands of participants.

Many researchers when they start conducting experiments on the Internet are tempted to offer rewards for participation. It's too difficult to pay everybody, so this is often done in the context of a lottery (1 person will win $100). This seems like an intuitive strategy, since we usually attract participants to our labs by offering money or making it a requirement for passing an introductory psychology course.

If you've been reading the Scienceblog.com top stories lately, you might have noticed a recent study by University of Florida researchers suggesting that people -- well, UF undergrads -- are less likely to give accurate information to websites that offer rewards.

Although these data come largely from a marketing context, they suggest that using lotteries to attract research participants on the Web may actually be backfiring.

Are babies prejudiced?

In 1994, in discussing how children come to learn about inheritance, Susan Carey and Elizabeth Spelke wrote: "There are many ways children may come to resemble their parents: Curly-haired parents may have curly-haired children because they give them permanents; prejudiced parents may have prejudiced children because they taught them to be so. Such mechanisms are not part of a biological process of inheritance..."

It's not clear that Carey & Spelke thought prejudice is taught to children rather than inherited through genes, but it's interesting that in picking only two examples of non-biological inheritance, Carey & Spelke chose prejudice as one. What makes this quotation remarkable is how unremarkable it is. It seems quite natural to assume that prejudice is learned. Recently, however, a number of researchers -- including Spelke -- have been suggesting that although the specifics of a prejudice may come through experience, being prejudiced is innate.

(Just to be clear, nobody I know is saying that prejudice is natural, good, or something that cannot be overcome. The specific claim is that it isn't something you have to learn.)

It's actually been known for a few years that infants prefer to look at familiar-race faces. Very recently, Katherine Kinzler in the Spelke lab at Harvard has started looking at language prejudice. People can get very fired up about language. Think about the fights over bilingualism or ebonics in the US. Governments have actively pursued the extinction of various non-favored, minority languages.

In a long series of studies, Kinzler has found evidence that this prejudice against other languages and against speakers of other languages is innate. Young infants prefer to look at a person who previously spoke their language than somebody who spoke a foreign language. Infants show the same preference to somebody who speaks with their accent rather than with a foreign accent. Older infants (who can crawl), will crawl towards a toy offered by someone who speaks their language rather than towards a toy offered by a foreign-language speaker. Keep in mind that these infants probably do not understand what is being said. Also, the speakers are bilingual (the infants don't know this), which allows the experimenters to control for things like what the speakers look like. For instance, for some babies, one speaker speaks English and the other French, and for the other babies, they reverse. Also, French babies prefer French-speakers to English-speakers, while English babies prefer English-speakers to French-speakers.

Preschool children would rather be friends with somebody who speaks their own language, which is not surprising. They also prefer to be friends with somebody who uses their own accent rather than a foreign accent, even when they are able to understand what the foreign-accented child says.

Of course, none of this says that babies are born knowing which languages and accents to prefer. However, they seem to quickly work out which languages and accents are "in-group" and which are "out-group." This also doesn't say that linguistic prejudice cannot be overcome. For one thing, simply exposing children to many accents and languages would presumably do much all by itself. Although it's not possible yet to rule out alternative explanations, what it does suggest is that prejudice -- at least, linguistic prejudice -- can't be overcome by simply not teaching it to children. They must be actively taught not to be prejudiced.

The paper, which is pretty easy to understand, is not available on the authors' website, but if you have a decent library:

Kinzler, Dupoux, Spelke. (2007). The native language of social cognition. Proceedings of the National Academy of Sciences, 104(30), 12577-12580.

Quantum Vision

Can quantum physics explain consciousness? The fact that the mind is instantiated in the physical brain has made it difficult for people to imagine how a physical object like the brain could lead to conscious experience, much as it makes it difficult to believe in free will. A number of people have hoped to find the solution in the indeterminacy of quantum physics.

There is a new hypothesis out from Efstratios Manousakis of Florida State University. The phenomenon that he is interested in understanding is binocular rivalry. In binocular rivalry, a different image is displayed to each of your eyes. Instead of seeing a mishmash of the two images, you tend to see one, then the other, then the first one again, ad infinitum. It's not possible to do a demonstration over the internet, but the experience is similar to looking at a Necker Cube, where you first see it popping out of the page, then receding from the page, then popping out, and so on. Notice that what your "eye" sees doesn't change. But your conscious experience does.

Manousakis has found that quantum waveform formulas describe this reasonably well. The question is whether they describe it well because the phenomenon is a quantum phenomenon or because there are two different phenomena for which the same formulas work. Keep in mind that binocular rivalry is something that can actually be seen with neuroimaging. That is, you can see the patterns in the brain change as the person first sees one image, then the other, etc. So if this is really a quantum effect, it is operating at a macro scale. New Scientist ran an interesting article on this story this past week. It's not clear from the article whether this is a problem Manousakis has thought about, and unfortunately his actual journal article isn't available on his website.

The neuroscience of theory of mind

The study of social cognition ("people thinking about people") and social neuroscience has exploded in the last few years. Much of that energy -- but by no means all of it -- has focused on Theory of Mind.

"Theory of Mind" is something we are all assumed to have -- that is, we all have a theory that other people's actions are best explained by the fact that they have minds which contain wants, beliefs and desires. (One good reason for calling this a "theory" is that while we have evidence that other people have minds and that this governs their behavior, none of us actually has proof. And, in fact, some researchers have been claiming that, although we all have minds, those minds do not necessarily govern our behavior.)

Non-human animals and children under the age of 4 do not appear to have theory of mind, except in perhaps a very limited sense. This leads to the obvious question: what is different about human brains over the age of 4 that allows us to think about other people's thoughts, beliefs and desires?

It might seem like Theory of Mind is such a complex concept that it would be represented diffusely throughout the brain. However, in the last half-decade or so, neuroimaging studies have locked in on two different areas of the brain. One, explored by Jason Mitchell of Harvard, among others, is the medial prefrontal cortex (the prefrontal cortex is, essentially, in the front of your brain. "medial" means it is on the interior surface, where the two hemispheres face each other, rather than on the exterior surface, facing your skull). The other is the temporoparietal junction (where your parietal and temporal lobes meet), described first in neuroimaging by Rebecca Saxe of MIT and colleagues.

Not surprisingly, there is some debate about which of these brain areas is more important (this breaks down in the rather obvious way) and also what the two areas do. Mitchell and colleagues tend to favor some version of "simulation theory" -- the idea that people (at least in some situations) guess what somebody else might be thinking by implicitly putting themselves in the other person's shoes. Saxe does not.

Modulo that controversy, theory of mind has been tied to a couple fairly small and distinct brain regions. These results have been replicated a number of times now and seem to be robust. This opens up the possibility, among other things, of studying the cross-species variation in theory of mind, as well as the development of theory of mind as children reach their fourth birthdays.

Having solved the question of monkeys & humans, I move on to children and adults

Newborns are incredibly smart. They appear to either be born into the world knowing many different things (the difference between Dutch and Japanese, for instance), or they learn them in a blink of an eye. On the other hand, toddlers are blindingly stupid. Unlike infants, toddlers don't know that a ball can't roll through a solid wall. What is going on?

First, the evidence. Construct a ramp. Let a ball roll down the ramp until it hits a barrier (like a small wall). The ball will probably bounce a little and rest in front of the wall. Now let an infant watch this demonstration, but with a screen blocking the infant's view of the area around the barrier. That is, the infant sees the ball roll down a ramp and go behind a screen but not come out the other side. The infant can also see that there is a barrier behind the screen. If you then lift the screen and show the ball resting beyond the barrier -- implying that the ball went through the solid barrier -- the infant acts startled (specifically, the infant will look longer than if the ball was resting in front of the barrier, as it should be).

Now, do a similar experiment with a toddler. The main difference is there are doors in the screen, one before the barrier and one after. The toddler watches the ball roll down the ramp, and their task is to open the correct door to pull out the ball. Toddlers cannot do this. They seem to guess randomly.

Here is another odd example. It's been known for many decades that three-year-olds do not understand false beliefs. One version of the task looks something like this. There are two boxes, one red and one green. The child watches Elmo hide some candy in the red box and then leave. Cookie Monster comes by, takes the candy, and moves it from the red box to the green box. Then Elmo returns. "Where," you ask the child, "is Elmo going to look for his candy?"

"In the green box," the child will reply. This has been taken as evidence that young children don't yet understand that other people have beliefs that can contradict reality. (Here's a related, more recent finding.)

However, Kristine Onishi and Renee Baillargeon showed in 2005 that 15-month-old infants can predict where Elmo will look, but instead of a verbal or pointing task, they just measured infant surprise (again, in terms of looking time). (Strictly speaking, they did not use "Elmo," but this isn't a major point.)

So why do infants succeed at these tasks -- and many others -- when you measure where they look, while toddlers are unable to perform verbal and pointing tasks that rely on the very same information?

One possibility is that toddlers lose an ability that they had as infants, though this seems bizarre and unlikely.

Another possibility I've heard is that the verbal and pointing tasks put greater demands on memory, executive functioning and other "difficult" processes that aren't required in the infant tasks. One piece of evidence is that the toddlers fail on the ball task described above even if you let them watch the ball go down the ramp, hit the wall and stop and then lower the curtain with two doors and make them "guess" which door the ball is behind.

A third possibility is something very similar to Marc Hauser's proposal for non-human primate thought. Children are born with many different cognitive systems, but only during development do they begin to link up, allowing the child to use information from one system in another system. This makes some intuitive sense, since we all know that even as adults, we can't always use all the information we have available. For instance, you may know perfectly well that if you don't put your keys in the same place every day, you won't be able to find them, but you still lose your keys anyway. Or you may know how to act at that fancy reception, but still goof up and make a fool of yourself.

Of course, as you can see from my examples, this last hypothesis may be hard to distinguish from the memory hypothesis. Thoughts?

How are monkeys and humans different (I mean, besides the tail)

Marc Hauser, one of a handful of professors to be tenured from within by Harvard University (most senior faculty are hired from other universities), has spent much of his career showing that non-human primates are smart. It is very dangerous to say "Only humans can do X," because Hauser will come along and prove that the cotton-top tamarin can do X as well. Newborn babies can tell Dutch from Japanese? Well, so can the tamarins.

For this reason, I have wondered what Hauser thinks really separates human cognition from that of other animals. He is well-known for a hypothesis that recursion is the crucial adaptation for language, but I'm never sure how wedded he is to that hypothesis, and certainly he can't think the ability to think recursively is all that separates human thought from tamarin thought.

Luckily for me, he gave a speech on just that topic at one of the weekly departmental lunches. Hopefully, he'll write a theory paper on this subject in the near future, if he hasn't already. In the meantime, I'll try to sketch the main point as best I understood it.

Hauser is interested in a paradox. In many ways, non-human primates look quite smart -- even the lowly tamarin. Cotton-top tamarins have been able to recognize fairly complex grammatical structures, yet they do not seem to use those abilities in the same ways we do -- for instance, they certainly don't use grammar.

In some situations, non-human primates seem to have a theory of mind (an understanding of the contents of another's mind). For instance, if a low-ranking primate (I forget the species, but I think this was with chimpanzees) sees two pieces of good food hidden and also sees that a high-ranking member of the troop can see where one piece was hidden but not the other, the low-ranking primate will high-tail it to the piece of food only he can see. That might seem reasonable. But contrast it with this situation: these primates also know how to beg for food from the researchers. What if the primate is confronted with two researchers, one who has a cloth over her eyes and one who has a cloth over her ears? Does the primate know to beg only from the one who can see? No.

Similarly, certain birds can use deception to lure a predator away from their nest, but they never use that deceptive behavior in other contexts where it might seem very useful.

These are just three examples where various primates seem to be able to perform certain tasks, but only in certain contexts or modalities. Hauser proposes that part of what makes humans so smart are the interfaces between different parts of our brains. We can not only recognize statistical and rule-based regularities in our environment -- just like tamarins -- but we can also use that information to produce behavior with these same statistical and rule-based regularities. That is, we can learn and produce grammatical language. We can take something we learn in one context and use it in another. To use an analogy he didn't, our brains are an office full of computers after they have been efficiently networked. Monkey computer networks barely even have modems.

This same theory may also explain a great deal of strange human infant behavior. More about that in the future.

Do ballplayers really hit in the clutch?

If you've been watching the playoffs on FOX, you'll notice that rather than present a given player's regular-season statistics, they've been mostly showing us their statistics either for all playoff games in their career, or just for the 2007 post-season. Is that trivia, or is it an actual statistic? For instance, David Ortiz hits better in the post-season than during the regular season. OK, one number is higher than the other, but that could just be random variation. Does he really hit better during the playoffs?

Why does this even matter? There is conventional wisdom in baseball that certain players hit better in clutch situations -- for instance, when men are on base. This is why RBIs (runs batted in) are treated as a statistic, rather than as trivia. Some young Turks (e.g., Billy Beane of the Oakland A's) have argued vigorously that RBIs don't tell you anything about the batter -- they tell you about the people who bat in front of him (that is, how good they are at getting on base). Statistically, it is said, few to no ballplayers hit better with men on and 2 outs.

So what about in the post-season?

I couldn't find Ortiz's lifetime post-season stats, so I compared this post-season, during which he's been phenomenally hot (.773 on-base percentage through the weekend -- I did this math last night during the game, so I didn't include last night's game), with the 2007 regular season, during which he was just hot (.445 on-base percentage).

There are probably several ways to do the math. I used a formula to compare two independent proportions (see the math below). I found that his OBP is significantly better this post-season than during the regular season. So that's at least one example...

Here's the math.

You need to calculate a t statistic, which is the difference between the two means (.773 and .445) divided by the standard deviation of the difference between those two means. The first part is easy, but the latter part is complicated by the fact that we're dealing with ratios. That formula is:

square root of: (P1*(1-P1)/N1 + P2*(1-P2)/N2)
where P1 = .773, P2 = .445, N1 = 659 (regular season at-bats - 1), N2 = 22 (post-season at-bats - 1).

t = 2.99, which gives a p value of less than .01.
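For anyone who wants to check or reuse the calculation, here is a minimal Python version of the test described above, pairing each proportion with its own number of trials. The inputs are the rounded figures from this post, so the exact statistic may come out a bit different from the 2.99 reported above.

from math import sqrt, erf

def two_proportion_t(p1, n1, p2, n2):
    """Difference between two proportions divided by the SE of that difference."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Ortiz: .773 OBP over ~22 post-season trials vs .445 OBP over ~659 regular-season trials.
t = two_proportion_t(0.773, 22, 0.445, 659)
p_two_tailed = 1 - erf(abs(t) / sqrt(2))  # two-tailed p under a normal approximation
print(round(t, 2), p_two_tailed)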

I was also considering checking just how unusual Colorado's winning streak is, but that's where my knowledge of statistics broke down (maybe we'll learn how to do that next semester). If anybody has comments or corrections on the stats above or can produce other MLB-related math, please post it in the comments.

New Harvard president to be installed today

Drew Faust will be installed today as Harvard's 28th president. That's right -- Harvard has only had 28 presidents since Henry Dunster was named in 1640. That's not counting some acting presidents, like Samuel Willard (1701-1707) or Nathaniel Eaton, who was "schoolmaster" from 1637 to 1639.

Faust is of course the first female president of Harvard. She is also the first since Charles Chauncy (1654-1672) to have neither an undergraduate nor graduate degree from Harvard (From 1672 to 1971, all Harvard presidents had done their undergraduate work at Harvard. The last three -- Derek Bok, Neil Rudenstine and Larry Summers -- had graduate degrees from Harvard).

The ceremony will be outdoors on Harvard Yard at 2pm. It rained heavily overnight, but it seems to be clearing up now. If the weather is decent, I'll check it out and report back. I hear tell that Harvard ceremonies have all the pomp and splendor you would expect, but I have yet to see one.

People who can't count

Babies can't count. Adults can. When I say that babies don't count, I don't mean that they don't know the words "one," "two," "three," or "four." That's obvious. What I mean is that if you give an infant the choice between 5 graham crackers or 7, the baby doesn't know which to pick.

Does that mean we have to learn numbers, or does it mean that the number system simply comes online as we mature? Babies also have bad vision, but that doesn't mean they learn vision. One of the reasons we might assume that number is innate rather than learned is that all reasonably intelligent children learn to count around the same time...or do they? This is where a few cultures, such as the Piraha, become very important.

I believe I have heard that there are some languages that only have words for "one," "two," and "many," but I'm not sure, so if you know, please say so in the comments. I am fairly certain that the Piraha are the only known culture not to even have a word for "one."

How does one know whether they have a word for "one"? Your first impulse might be to check a bilingual dictionary, but that begs the question. How did the dictionary-maker know? The way a few people have done it (like Peter Gordon at Columbia and Ted Gibson & Mike Frank from MIT) is to show the Piraha a few objects and see what they say. According to Mike Frank's talk at our lab a couple weeks ago, they never found a word that was used consistently to describe "one" anything. Instead, there was a word that was used for small numbers of objects (1, 2, etc.), another word for slightly larger numbers, and a third word that seems to be used the way we use "many."

Well, maybe they just weren't using number words in this task. Did they really even understand what was being required of them? Who knows. But there are other ways to do the experiment. For instance, you can test them the same way we test babies. You show them two boxes. You put 5 pieces of candy into one box. Then you put 7 pieces of candy into the other box. Then you ask them which box they want. Remember that they never see both groups of candy at the same time, so they have to remember the groups of candy in order to compare them.

Well, Mike Frank tried this. This is an example of the responses he got:

"Can I have both boxes?"

No. You have to choose.

"Oh, is that what this game is about? I don't want to play this game. Who needs candy? Can we do spools of thread? My wife needs those. Or how about some shotgun shells?"

This experiment was a failure. Instead, they tried a matching task. You show them a row of, say, 5 spools of thread. Then you ask them to put down the same number of balloons as there are spools of thread. They can do this. Now, you change the game. You show them some number of spools of thread, then cover those spools. You then ask them to put down the same number of balloons. Since they can't see the thread, they have to do this by memory.

The Piraha fail at this and related tasks. People who can count do not.

Of course, they might not have understood the task. This is very hard to prove one way or another. I have been running a study in my lab that involves recent Chinese immigrants. I designed the study and tested it in English with Harvard undergrads. They found the task challenging, but they quickly figured out what I needed them to do. Some of my immigrant participants do so as well, but many of them find it impossibly difficult -- literally. Some of them have to give up.

It's not that they aren't smart. Most of them are Harvard graduate students or even faculty. What seems to be going on is a culture clash. For one thing, they aren't usually familiar with psychology experiments, since very few are done in China. I suspect that some of the things I ask them to do (repeat a word out loud over and over, read as fast as possible, etc.) may seem perfectly normal requests to my American undergraduates but very odd to my Chinese participants, just as the Piraha discussed above didn't want to choose boxes of candy. So it is always possible that the Piraha act differently in these experiments because they have different cultural expectations and have trouble figuring out what exactly is required of them.

That said, it seems pretty unlikely at this point that the Piraha have number words or count. This suggests counting must be learned. In fact, it suggests counting must be taught. This contrasts with language itself, which often seems to spring up spontaneously even when the people involved have had little exposure to an existing language. (Click here for a really interesting take on why the Piraha don't seem particularly interested in learning how to count.)

Brazil issues warrant for scientist's arrest

The government of Brazil recently ordered the arrest of the well-known linguist and anthropologist Dan Everett.

Everett has been both famous and infamous for his study of the Piraha people, a small tribe in Brazil. He has made a number of extraordinary claims about their culture and language, such as that they do not have number words or myths.

These claims are important because they undermine a great deal of current linguistic and psychological theory, and so they have been hotly debated. Some of these debates, however, have spilled over from arguments about data and method to personal attacks.

Just before Everett spoke at MIT last fall, a local linguist sent out an email to what amounted to much of Boston's scientific community involved in language and thought. It looked like the sort of email that one means to send to a close friend and accidentally broadcasts. Language Log describes it better than I can, but the gist was that Everett is a liar who exploits the poor Piraha for his own fame and glory.

This was just one instance in a series of ad hominem attacks on Everett over the last few years. I am not going to weigh in on whether Everett is exploiting anybody, because I simply don't feel I know enough. I've never met a Piraha -- not that any of Everett's detractors have either, to my knowledge. If this means anything, I am told by friends who have visited the Piraha that they really like Everett.

A couple weeks ago, I heard from a friend who has collaborated with Everett that all further research on the Piraha language has been essentially banned. A warrant is out for Everett's arrest on charges of, essentially, exploiting the Piraha. I have no idea how much this has to do with the controversy the aforementioned linguist has been raising, but I suspect that it is not unrelated.

On the topic of language, my Web-based study of how people interpret sentences is still ongoing, and I could use more participants. Not to exploit this post to further my own academic fame and glory...

Scientists create mice with human language gene

Scientists at the Max Planck Institute in Germany recently announced that they had successfully knocked the human variant of the FOXP2 "language" gene into mice.

The FOXP2 gene, discovered in 2001, is the most famous gene known to be associated with human language. There has been some debate about what exactly it does, but a point mutation in the gene is known to cause speech and language disorders.

Part of the interest in FOXP2 stems from the fact that it is found in a wide range of species, including songbirds, fish and reptiles with only slight variations. Also, FOXP2 is expressed in many parts of the body, not just the brain. Previous research had found that removing the gene from mice decreased their vocalizations...and ultimately killed the mice.

In the new study, scientists created a new mouse "chimera" with the human variant of the FOXP2 gene. This time, the only differences they could find between the transgenic mice and typical mice were in their vocalizations.

Read more about FOXP2.

(Disclosure: This research does not appear to have been published yet. I heard about it from Marc Hauser of Harvard University, who heard about it this summer from a conference talk by Svante Pääbo of the Max Planck Institute, one of the researchers involved in the project.)

GoogleScience

A Google search can help you find cutting-edge research. A Google search can also be cutting-edge research.

Many questions in linguistics (the formal study of language) and psycholinguistics (the study of language as human behavior) are answered by turning to a corpus. A corpus is a large selection of texts and/or transcripts. Just a few years ago, they were difficult and expensive to create. Arguably the most popular word frequency corpus in English -- the venerable Brown Corpus -- was based on one million words of text. One million words sounds like a lot, but so many of those are "the" and "of" that in fact many words do not appear in the corpus at all.

The Google corpus contains billions of pages of text.

So what does one do with a corpus? One obvious thing is to figure out which words are more common than others. The most common words in English are short function words like "a" (found on 4.95 billion web pages) and "of" (3.61 billion pages). Google, of course, doesn't tell you how many times a word appears, but only how many pages it appears on...which may actually be an advantage, since we're typically more interested in words that show up on many websites than in a word that shows up many, many times on just one website.

You might be interested in what the most common noun is. Is it "time," "man," "city," "boy," or "Internet?" You might check to see whether verbs or nouns are more common in English by comparing a large sample of verbs and nouns. You can also compare across languages. Children learn verbs more slowly than nouns in English but not in Chinese. Is that because verbs are more common in Chinese than English?
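To make the corpus idea concrete, here is a minimal sketch of the kind of frequency comparison involved, written in Python over a toy text. The "corpus" and the word lists are made up purely for illustration; a real study would use a proper corpus and much better tokenization.

    from collections import Counter
    import re

    # A toy stand-in for a real corpus of millions of words.
    corpus = """Look at that bunny go . The bunny eats an apple .
                Time flies . People like time more than money ."""

    # Crude tokenization: lowercase alphabetic strings only.
    tokens = re.findall(r"[a-z]+", corpus.lower())
    freq = Counter(tokens)

    # Hypothetical samples of nouns and verbs to compare.
    nouns = ["time", "bunny", "apple", "money", "people"]
    verbs = ["go", "eats", "like", "flies"]

    print("Most common tokens:", freq.most_common(5))
    print("Noun sample total:", sum(freq[w] for w in nouns))
    print("Verb sample total:", sum(freq[w] for w in verbs))

The same logic scales up to the Brown Corpus or to Google page counts; only the source of the counts changes.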

OK, those are fun experiments, but none of them sound very cutting-edge. If you want to see the Google corpus in action, check out Language Log. The writers there regularly turn to the Google corpus to answer their questions. Google is probably less commonly used in more formal contexts, but the PsychInfo database turned up 76 hits for "Google." Many were studies about how people use Google, but some were specifically using the Google corpus, such as "Building a customised Google-based collocation collector to enhance language learning," by Shesen Guo and Ganzhou Zhang. Another -- "Nine psychologists: mapping the collective mind with Google" by Jack Arnold -- looked at the organization of conceptual knowledge. At a recent conference, I saw a presentation by vision scientists using Google Image to explore the organization of visual memory. I expect to see more and more of this type of research in the near future.

Of course, there's nothing specific to Google about this. It's just what everybody seems to use.

Free will

In “Self is Magic,” a chapter from Psychology and the Free Will, which is unfortunately not yet out, Daniel Wegner presents some fascinating data showing how easy it is to trick ourselves into believing we are in control of events when in fact we are not. Most of us are in some sense familiar with this illusion. Perhaps we believe our favorite team won partly because we were watching the game (or, if we’re pessimists, because we weren’t).

In one of the many experiments he describes, “the participant was attired in a robe and positioned in front of a mirror such that the arms of a second person standing behind the participant could be extended through the robe to look as though they were the arms of the participant.” The helper’s arms moved through a series of positions. If the participant heard through the headphones a description of each movement before it happened, they reported an “enhanced feeling of control over the arm movements” compared with when they had not heard those descriptions in advance. (Read the paper here.)

Ultimately, he tries to use these data to explain the “illusion” of free will. In the following paragraphs, I am going to try to unpack the claim and see where it is strong and where it may be weak.

First, what exactly is he claiming? He does not argue that free will does not exist – he assumes it does not exist. That’s not a criticism – you can’t fit everything into one article – but it means I won’t be able to evaluate his argument against free will, though I will discuss free will in terms of his data and hypotheses below. Instead, he is interested in why we think we have free will. In all, he claims that (1) we can be mistaken about whether our thoughts cause events in the world, (2) this is because when we think about something happening and then it happens, we’re biased to believe we’ve caused it, and (3) this illusion that our conscious thoughts lead to actions is useful and adaptive – that is, evolution gave it to us for a reason.

That’s what he says. First, he isn’t really arguing that we don’t have a conscious will. Clearly, we will things to happen all the time. Some of the time he seems to be arguing that our will is simply impotent. The rest of the time he appears to think that the contents of our will are actually caused by something else. That is, our arm decides to move and tells our conscious thoughts to decide to want a cookie (more on this later).

Otherwise, I think claim #1 is pretty straightforward. Sometimes we have an illusion of conscious control when in fact we have none. Wegner compares this to visual illusions, of which there are plenty. Just as with visual illusions, even when you know it’s an illusion (watching the ballgame isn’t going to affect the score), you can’t help feeling it anyway. In fact, given that illusions exist in sight, sound and probably many other senses, it’s not surprising that the sense of conscious will is also subject to illusions.

It’s important to point out that this itself is not an argument against the potency of conscious will. The fact that we are sometimes mistaken does not mean we are always mistaken. Otherwise, we’d have to claim that because vision is sometimes mistaken, we are all blind. Of course, Wegner isn’t actually making this argument. He already assumes that free will is an illusion. In this article, he is just interested in showing how that illusion might operate, which is point #2.

Although he describes the illusion of conscious choice as a magic show we put on for ourselves, this does not mean that he thinks conscious thought has no effect on behavior – that’s point #3 above. He simply doesn’t think that deciding to pick up a cookie leads to your hand reaching out to pick up a cookie. In fact, it would be a pretty extraordinary claim that conscious thought (and the underlying brain processes) have no purpose, effect or use whatsoever. In that case, why did we evolve them? This question is answerable, but it would be hard to answer. The claim, then, is simply that the conscious decision to perform a behavior does not cause that behavior.

Suppose we agree that picking up a cookie is not caused by the conscious decision to pick up a cookie – which, just to be clear, I don’t – then what does cause that conscious decision? Wegner does not get into this question, at least not in this chapter, which is a shame. In these last paragraphs I’ll try to describe what he might mean and what the consequences would be.

What would it mean if the conscious mind did cause cookie-picking-up? That depends on what the conscious mind is. Perhaps it’s an ethereal, non-corporeal presence that makes a decision, then reaches down and pulls a lever and the hand reaches out to grab a cookie. That would be similar to what Descartes argued for many centuries ago, but it’s not something many cognitive scientists take seriously now. The basic assumption – for which there is no proof but plenty of good evidence – is that the mind is the brain. Activity in your brain doesn’t cause your conscious mind to want a cookie, nor does your conscious mind cause brain activity. Your conscious mind is brain activity. If we assume that this is how Wegner thinks about the mind, then his hypothesis can be restated:

The part of the brain that is consciously deciding to pick up a cookie does not give orders to the part of the brain that actually gives the motor commands to your hand to pick up the cookie. The motor cortex gets its marching orders from somewhere else.

This is an interesting hypothesis, and I’m not going to discuss it in too much detail right now. What I am interested in is what this hypothesis has to say about free will. I would argue: maybe nothing. If your decisions are made in your brain by a non-conscious part of your mind (of which there are many) and the conscious part of your mind turns out to simply be an echo chamber where you tell yourself what you’ve decided to do, would you say that you have no free will?

The real question becomes: what decides to pick up the cookie? Where is the ultimate cause? Lack of free will means that the ultimate cause is external to the person. They picked up the cookie because of events that occurred out there in the world. Free will means that the ultimate cause was internal to the person. Nothing in Wegner’s article is really relevant to distinguishing between these possibilities (again, this is not a criticism. That wasn’t what his article was about. It’s what my article is about).

The loss of a belief in a non-corporeal mind has left us with a dilemma. Nothing we know about physics or chemistry allows for causes to be internal to a person in the sense that we mean when we say “free will”. This makes many people feel that free will can only exist if there is a non-corporeal mind operating outside the constraints of physics. On the other hand, nothing we know about physics or chemistry allows for consciousness to exist, yet essentially all cognitive scientists – including, probably, Wegner – are reasonably comfortable believing in consciousness without believing in a non-corporeal mind.

In the 19th century, physicists said that the sun could not be millions of years old, much less billions of years old, because there was no known mechanism in physics or chemistry that would allow the sun to burn that bright that long. Although entire fields of thought – such as evolution or geology – required an old, old sun in order to make any sense of their own data, the physicists said “Impossible! There must be another explanation for your data.” Later, they discovered the mechanism: fusion.

In the early 20th century, there were chemists who said that the notion of a “gene” was hogwash, because there was no known chemical mechanism for inheritance in the form of a gene. The fact that mountains of experimental data could not be explained without reference to “genes” didn’t bother them. Then Watson and Crick found the mechanism in the structure of DNA.

We may be in the same situation now. We have an incredible amount of data that only makes sense with reference to internal causation – free will. Evolution, Wegner says, built the belief in free will into us. Liz Spelke and others have run fantastic experiments showing that even infants only a few months old believe in something akin to free will. The world makes very little sense if we don’t believe that our friends, colleagues and random people on the street are causing their own behavior. Or maybe we’re not in the same situation, and free will is truly a figment of our imagination.

Physicists were right about one thing: the sun hasn’t been burning for billions of years. It doesn’t burn. It does something else entirely. The real answer to the question of free will may look like rote, dumb physical causation – a snowball rolling down a hill. It may look very similar to Descartes’ non-corporeal soul. Or it may look very different from both.

Note: Wegner is a very engaging writer. If you are interested, most of his articles are available on his website.

Scientists prove that if money could buy happiness, you wouldn't know what to buy

Humans are miserable at predicting how happy or how unhappy a given thing will make them, according to Daniel Gilbert, professor of psychology at Harvard University and author of Stumbling on Happiness.

In many realms of knowledge, people aren't very good at predicting the future. It turns out that fund managers rarely beat the market, experts are poor predictors of future events, and just about everybody is impaired at predicting how those events will make them feel. It's disappointing that neither mutual fund managers nor talking heads are really earning their salaries, but it is astonishing that most people can't predict how winning the lottery or losing their job will make them feel.

You might be tempted not to believe it, but there are dozens of carefully contrived studies that show just that, many of them authored by Gilbert. In one study that is particularly relevant to me, researchers surveyed junior professors, asking them to predict how happy they would be in the next few years if they didn't make tenure. Not surprisingly, they expected to be pretty unhappy, and this was true of both people who did eventually make tenure and those who did not. However, when actually surveyed in the first five years after the tenure decision, the self-described level of happiness of those who did not get tenure was the same as that of those who did get tenure. This is not to say that being denied tenure didn't make those junior professors temporarily unhappy, but they got over it quickly enough.

(As a side-note, Gilbert mentions that both professors who got tenure and those who did not were happier than junior professors. "Being denied tenure makes you happier," he joked.)

The Big Question, then, is why are we so terrible at predicting future levels of happiness? One possibility is that we're just not that smart. Either there is no selective pressure for this ability -- and thus it never evolved -- or evolution just hasn't quite gotten there yet. Rhesus monkeys can't figure out whether they want 2 bananas or 4 bananas, and humans don't know how happy 2 or 4 bananas would make them feel.

Another possibility is that there's actually an advantage buried in this strange behavior. Perhaps it's important to be motivated to have children (happy!) or to protect your children from harm (unhappy!), but once you've had children, there's no actual advantage to increased happiness, and if your children die, there's no advantage in being unable to recover from it. (In fact, parents are typically less happy because of having children, and they live shorter lives as well. I don't know about happiness levels of parents who have lost children.)

Gilbert is unsure about this argument. For every evolutionary argument you can give me in favor of poor predictions of happiness, he argued, I can give you one against. For instance, you also predict that being rejected by a potential mate will make you more unhappy than it actually does; thus you might not approach potential mates and thus not have children. It's ultimately a hard question to test scientifically.

(If you are wondering where I got these Gilbert quotes, it's from his lecture to the first-year graduate students in the psychology program last Thursday.)

New study shows that music and language depend on some of the same brain systems

Music and language depend on some of the same neural substrates, according to researchers at Georgetown University.

The quick summary is that the authors, Robbin Miranda and Michael Ullman of Georgetown University, found that memory for musical melodies uses the same part of the brain as memory for words, and that "rules" for music use the same part of the brain as rules (grammar) for language. In the case of this particular experiment, musical "rules" means following the key structure of a song.

Why is this interesting? Even by the standard of the mind and brain generally, music is particularly poorly understood. I suspect this is partly because it took a long time for anybody to figure out how to study it empirically. Language, on the other hand, is fairly well understood. That is, it's maybe like physics in the 1600s, whereas the study of music isn't even that advanced.

If researchers are able to tie aspects of music processing to parts of the brain that we already know something about, suddenly we know a whole lot more about music. That's one exciting outcome.

The other exciting outcome is that, as I said, language has been studied scientifically for some time. This means that psychologists and neuroscientists have a whole battery of empirical methods for probing different aspects of language. To the extent that music and language overlap, that same arsenal can be set loose on music.


This shouldn't be taken as implying that nobody else has ever studied the connection between language and music before. That's been going on for a long time. What's important here is that these aspects of music were tied to one of the most complete and best-specified models of how the brain understands and produces language -- the Declarative/Procedural model.

Unfortunately, the paper isn't yet available on the Ullman website, but you can read a press release here.

Full disclosure: I was working in the Ullman lab when Robbin joined as a graduate student. You can read about some of my research with Ullman here.

Your brain knows when you should be afraid, even if you don't

I just got back to my desk after an excellent talk by Paul Whalen of Dartmouth College. Whalen studies the amygdala, an almond-shaped region buried deep in the brain. Scientists have long known that the amygdala is involved in emotional processing. For instance, when you look at a person whose facial expression is fearful, your amygdala gets activated. People with damage to their amygdalas have difficulty telling if a given facial expression is "fear" as opposed to just "neutral."

It was an action-packed talk, and I recommend that anybody interested in the topic visit his website and read his latest work. What I'm going to write about here are some of his recent results -- some of which I don't think have been published yet -- investigating whether you have to be consciously aware of seeing a fearful face in order for your amygdala to become activated.

The short answer is "no." What Whalen and his colleagues did was use an old trick called "masking." If you present one stimulus (say, a fearful face) very quickly (say, 1/20 of a second) and then immediately present another stimulus (say, a neutral face), the viewer typically reports only having seen the second stimulus. Whalen used fMRI to scan the brains of people while they viewed emotional faces (fearful or happy) that were masked by neutral faces. The participants said they only saw neutral faces, but the brain scans showed that their amygdalas knew otherwise.

One question that has been on researchers' minds for a while is what information the amygdala cares about. Is it the whole face? The color of the face? The eyes? Whalen ran a second experiment that was almost exactly the same, but he erased everything from the emotional faces except the eyes. The amygdala could still tell the fearful faces from the happy faces.

You might be wondering, "Does it even matter if the amygdala can recognize happy and fearful eyes or faces that the person doesn't remember seeing? If the person didn't see the face, what effect can it have?"

Quite possibly plenty. In one experiment, the participants were told about the masking and asked to guess whether they were seeing fearful or happy eyes. Note that the participants still claimed to be unable to see the emotional eyes. Still, they were able to guess correctly -- not very often, but more often than if they had been guessing randomly. So the information must be available on some level.
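As an aside, here is a minimal sketch of how you might check whether that kind of guessing is reliably above chance. The trial counts below are invented for illustration and are not Whalen's actual data.

    import math

    # Invented numbers for illustration -- not Whalen's data.
    n_trials = 200
    n_correct = 115          # 57.5% correct, only slightly above the 50% chance rate
    p_chance = 0.5

    # Normal approximation to the binomial: z-test for a proportion against chance.
    p_hat = n_correct / n_trials
    se = math.sqrt(p_chance * (1 - p_chance) / n_trials)
    z = (p_hat - p_chance) / se

    # Two-tailed p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    print(f"accuracy = {p_hat:.1%}, z = {z:.2f}, p = {p_value:.3f}")

With these made-up numbers, 57.5% accuracy over 200 trials already comes out reliably above chance (p of about .03), which is the sense in which "barely better than guessing" can still be meaningful.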

There are several ways this might be possible. In ongoing research in Whalen's lab, he has found that people who view fearful faces are more alert and more able to remember what they see than people who view happy faces. Experiments in animals show that when you stimulate the amygdala, various things happen to your body, such as your pupils dilating. Whalen interprets this in the following way: when you see somebody being fearful, it's probably a clue that there is something dangerous in the area, so you had better pay attention and look around. It's possible that subjects who guessed correctly [this is my hypothesis, not his] were tapping into the physiological changes in their bodies in order to make these guesses. "I feel a little fearful. Maybe I just saw a fearful face."

For previous posts about the dissociation between what you are consciously aware of from what your brain is aware of, click here, here and here.

Monkeys know their plurals

Anybody who reads this blog knows that I am deeply skeptical of claims about animal language. Some of the best work on animal language has come from Marc Hauser's lab at Harvard. Recently they reported that rhesus monkeys have the cognitive machinery to understand the singular/plural distinction.

First, a little background. Many if not most scientists who study language are essentially reverse-engineers. They/we are in the business of figuring out what all the parts are and how they work. This turns out to be difficult, because there are many parts and we don't really have the option of taking apart the brains of random people since they usually object. So the task is something like reverse-engineering a Boeing 747 while it's in flight.

There are many different ways you could approach the task. Hauser tries to get at language by looking at evolution. Obviously, rhesus monkeys can't speak English. Just as obviously, they can do some of the tasks that are necessary to speak English (like recognizing objects -- you have to recognize something before you can learn its name). Any necessary components of language that non-human animals can successfully perform must not be abilities that evolved for the purpose of language. If you can figure out what they did evolve for, you can better understand their structure and function. So the next step is perhaps to figure out why those particular abilities evolved and what non-human animals use them for. This ultimately leads to a better understanding of these components of language.

That is one reason to study language evolution in this manner, but there are many others (including the fact that it's just damn cool). If you are interested, I suggest you read this manifesto on the subject.

Back to the result. Nouns in many languages such as English can be either singular or plural. You couldn't learn to use "apple" and "apples" correctly if you couldn't distinguish between "one apple" and "more than one apple". This may seem trivial to you, but no non-human animals can distinguish between 7 apples and 8 apples -- seriously, they can't. In fact, some human groups seemingly cannot distinguish between 7 apples and 8 apples, either (more on that in a future post).

So can rhesus monkeys? Hauser and his colleagues tested wild rhesus monkeys on the beautiful monkey haven of Cayo Santiago in Puerto Rico. The monkeys were shown two boxes. The experimenters then put some number of apples into each box. The monkeys were then allowed to approach one box to eat the contents. Rhesus monkeys like apples, so presumably they would go to the box that they think has more apples.

If one box had 1 apple and the other had 2 apples, the monkeys went with the two apples. If one box had 1 apple and the other had 5, the monkeys picked the 5 apple box. But they chose at random between 2 and 4 apples or 2 and 5 apples. (For those who are familiar with this type of literature, there are some nuances. The 2, 4 or 5 apples had to be presented to the monkeys in a way that encouraged the monkeys to view them as a set of 2, 4 or 5 apples. Presenting them in a way that encourages the monkeys to think of each apple as an individual leads to different results.)

This suggests that when the monkeys saw one box with "apple" and one with "apples," they knew which box to choose. But when both boxes had "apples," they were at a loss. Unlike humans, they couldn't count the apples and use that as a basis to make their decision.



Full disclosure: I considered applying to his lab as a graduate student. I am currently a student in a different lab at the same school.

Caveat: These results have not been formally published. The paper I link to above is a theory paper that mentions these results, saying that the paper is under review.

How do universities choose professors? The survey is in.

Several studies have looked at university hiring practices. Search committees (committees in charge of filling vacant positions) around the country were surveyed, and the results are in.

The studies, published in Teaching of Psychology, looked specifically at psychology departments, so the results may not generalize well to other fields. However, since I'm a PhD student in psychology, that's the department I care most about.

The older of the two, written in 1998 by Eugene Sheehan, Teresa McDevitt & Heather Ross, all then at the University of Northern Colorado, had several interesting results. One was that teaching was valued more highly than research, which was surprising to me. I would like to know how this broke down by type of institution. Some schools are said to highly value teaching (e.g., Oberlin, Swarthmore) while others are said to highly value research (e.g., Harvard, Georgetown). Since they only got back 90 complete surveys, they probably couldn't do an analysis breaking down by "teaching schools" and "research schools."

Luckily, R. Eric Landrum & Michael A. Clump from Boise State University wondered the same thing and published an answer in 2004. They compared public with private universities, and undergraduate-only departments with departments that also have graduate programs. Private schools were significantly more likely to care about teaching experience, whereas public institutions were significantly more likely to care about research-related issues and the ability to get grants. Undergraduate-only departments were similarly much more concerned with teaching-related issues, whereas programs with graduate students cared more about research- and grant-related issues.

Another interesting result from the Sheehan study was that the job interview was the most important factor in deciding between interviewed candidates. This is not surprising in the sense that we all know that the interview is very important. It is surprising because it's well known that job interviews are very poor indicators of future performance, and you would think a university psychology department would know that. The later study did not consider interviews vs. CVs vs. letters of recommendation.


Now for the data.


Sheehan et al. 1998:

The factors that the search committees considered when deciding who to interview are listed below, in order of most important to least:

Letters of recommendation
Fit between applicant's research interest and department needs
Experience teaching courses related to the position description
General teaching experience
Quality of course evaluations
Quality of journals in which the applicant has published
Number of publications
Potential for future research
Quality of applicant's doctoral granting institution
Awards for teaching


The factors considered when deciding among interviewed candidates were, in order of most important to least:
Performance at interview with search committee
Performance during colloquium (i.e., the "job talk")
Fit between applicant's research interests and department needs
Experience teaching courses related to the position description
Performance during undergraduate lecture
Candidate's ability to get along with other faculty
General teaching experience
Letters of recommendation
Candidate's personality
Performance at interview with chair



Landrum & Clump, 2004:

Factors more important to private schools vs. public:
It is important that applicant publications be from APA journals only.
Teaching experience at the undergraduate level is important for applicants.
Research experience utilizing undergraduate students is important for our applicants.
Experience in academic advising is important for the successful job applicant.
It hurts an applicant if he or she does not address specific courses listed in the job advertisement.
In our department, teaching is more important than research.
Teaching experience.
Previous work with undergraduates.

Factors more important to public schools vs. private:
Our department has an expectation of grant productivity.
Faculty in our department need to receive grants in order to be successful.
In our department, research is more important than teaching.
Quality of publications.
Potential for successful grant activity.


The comparison between undergraduate-only and undergrad/grad programs revealed many more significant differences, so I refer you to the original paper.

Another non-human first

First ever Economist obituary for a non-human:

http://www.economist.com/obituary/displaystory.cfm?story_id=9828615

Not your granddaddy's subconscious mind

To the average person, the paired associate for "psychology," for better or worse, is "Sigmund Freud." Freud is probably best known for his study of the "unconscious" or "subconscious". Although Freudian defense mechanisms have long since retired to the history books and Hollywood movies, along with the ego, superego and id, Freud was largely right in his claim that much of human behavior has its roots outside of conscious thought and perception. Scientists are continually discovering new roles for nonconscious activities. In this post, I'll try to go through a few major aspects of the nonconscious mind.

A lab recently reported that they were able to alter people's opinions through a cup of coffee. This was not an effect of caffeine, since the cup of coffee was not actually drunk. Instead, study participants were asked to hold a cup of coffee momentarily. The cup was either hot or cold. Those who held the hot cup judged other people to be warmer and more sociable than those who held the cold cup.

This is one in a series of similar experiments. People are more competitive if a briefcase (a symbol of success) is in sight. They do better in a trivia contest immediately after thinking about their mothers (someone who wants you to succeed). These are all examples of what is called "social priming" -- where a socially-relevant cue affects your behavior.

Social priming is an example of a broader phenomenon (priming) that is a classic example of nonconscious processing. One simple experiment is to have somebody read a list of words presented one at a time on a computer. The participant is faster if the words are all related (dog, cat, bear, mouse) than if they are relatively unrelated (dog, table, mountain, car). The idea is that thinking about dogs also makes other concepts related to dogs (e.g., other animals) more accessible to your conscious thought. In fact, if you briefly present the word "dog" on the screen so fast that the participant isn't even aware of having seen it, they will still be faster at reading "cat" immediately afterwards than if "mountain" had flashed on the screen.

Mahzarin Banaji has made a career around the Implicit Association Test. In this test, you press a key (say "g") when you see a white face or a positive word (like "good" or "special" or "happy") and a different key (say "b") when you see a black face or a negative word (like "bad" or "dangerous"). You do this as fast as you can. Then the groupings switch -- good words with black faces and bad words with white faces. The latter condition is typically harder for white Americans, even those who self-report being free of racial prejudice. Similar versions of the test have been used in different cultures (e.g., Japan) and have generally found that people are better able to associate good words with their own in-group than with a non-favored out-group. I haven't described the methodology in detail here, but trust me when I say it is rock-solid. Whether this is best interpreted as a measure of implicit, nonconscious prejudice is up for debate; for the purposes of this post, though, the effect is clearly nonconscious. (Try it for yourself here.)
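To make the logic of the measure concrete, here is a minimal sketch of how an IAT-style effect can be quantified. The reaction times are invented for illustration, and this is a bare-bones difference score rather than the full scoring procedure used in the published literature.

    from statistics import mean, stdev

    # Invented reaction times (in milliseconds) for one hypothetical participant.
    congruent_rts = [620, 580, 640, 610, 595, 630]    # pairing that feels "easy"
    incongruent_rts = [710, 690, 745, 720, 705, 760]  # pairing that feels "hard"

    # The raw effect: how much slower responses are in the harder pairing.
    effect_ms = mean(incongruent_rts) - mean(congruent_rts)

    # A crude standardized score (the official scoring algorithm is more involved).
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    standardized = effect_ms / pooled_sd

    print(f"mean difference = {effect_ms:.0f} ms, standardized effect = {standardized:.2f}")

The bigger that difference, the harder the "incongruent" pairing was for that person -- which is all the test itself measures; the interpretation is a separate question.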

Vision turns out to be divided into conscious vision and nonconscious vision. Yes, you read that correctly: nonconscious vision. The easiest way to see this for yourself is to cover one eye. You probably know that you need two eyes for depth perception, but with one eye covered, the world doesn't suddenly look flat. (At least, it doesn't for me.) You may notice some small differences, but to get a real sense of what you have lost, try playing tennis. The ball becomes nearly impossible to find. This is because the part of your vision that you use to orient in space is largely inaccessible to your conscious mind.

An even more interesting case study of this -- though not one you can try at home -- is blindsight. People with blindsight report being blind. As far as they can tell, they can't see a thing. However, if you show them a picture and ask them to guess what the picture is of, they can "guess" correctly. They can also reach out and take the picture. They are unaware of being able to see, but clearly on some level they are able to do so.

It is also possible to learn something without being aware of learning it. My old mentor studies contextual cueing. The experiment works like this: You see a bunch of letters on the screen. You are looking for the letter T. Once you find it, you press an arrow key to report which direction the "T" faces. This repeats many hundreds of times. Some of the displays repeat over and over (the letters are all in the same places). Although you aren't aware of the repetition -- if asked, you would be unable to tell a repeated display from a new display -- you are faster at finding the T on repeated displays than new displays.

In similar experiments about language learning, you listen to nonsense sentences made of nonsense words. Unknown to you, the sentences all conform to a grammar. If asked to explain the grammar, you would probably just say "huh?", but if asked to pick between two sentences, one of which is grammatical and one of which is not, you can do so successfully.
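For a sense of what "conforming to a grammar" means here, below is a minimal sketch of a made-up finite-state grammar over nonsense syllables (not the grammar from any particular study) along with a generator for strings that follow it.

    import random

    # A made-up finite-state grammar over nonsense syllables.
    # Each state maps to possible (syllable, next_state) transitions; None ends the string.
    grammar = {
        "S1": [("pel", "S2"), ("vot", "S3")],
        "S2": [("dak", "S3"), ("rud", "S2")],
        "S3": [("jic", None), ("tood", "S1")],
    }

    def generate(max_len=10):
        """Generate one string of syllables by walking the grammar (capped in length)."""
        state, syllables = "S1", []
        while state is not None and len(syllables) < max_len:
            syllable, state = random.choice(grammar[state])
            syllables.append(syllable)
        return " ".join(syllables)

    # Listeners hear many strings like these; later they can often tell new strings that
    # follow the grammar from ones that don't, without being able to state any rules.
    for _ in range(3):
        print(generate())

An ungrammatical test item would be a string that makes an illegal transition -- say, a syllable appearing after one it can never follow in the grammar.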

Actually, an experiment isn't needed to prove this last point. Most native speakers are completely ignorant of the grammar rules governing their language. Nobody knows all the grammar rules of their language. Yet we are perfectly capable of following those grammar rules. When presented with an ungrammatical sentence, you may not be able to explain why it's ungrammatical (compare "Human being is important" with "The empathy is important"), yet you still instinctively know there is a problem.

And the list goes on. If people can think of other broad areas of subconscious processing, please comment away. These are simply the aspects of the unconscious I have studied.

You'll notice I haven't talked about defense mechanisms or repressed memories. These Freudian ideas have fallen out of the mainstream. But the fact remains that conscious thought and perception are just one corner of our minds.