Field of Science

This week at the cognition and language lab

I just finished watching several episodes of Scrubs. If you watch enough TV, you get a sense of what it's like to be a doctor or a lobbyist or a policeman or a Mafioso. Some of these shows are more accurate, some less. But it's very hard to get even an inaccurate sense of what it's like to be a working scientist by watching TV. Even if we go to the movies, all that comes to mind is Brent Spiner or Dennis Quaid.

I have no idea what it's like to be a xenobiologist or a paleoglaciologist (though I did once spend a couple of days hitchhiking with a pair of paleoglaciologists on Sakhalin, taking tree cores), but I can open a window on a week in the life of a psychology graduate student.

My first year proposal was due Tuesday. I spent last weekend reading papers in preparation to write about my work on pronoun resolution. The purpose of that project was/is to determine whether a particular odd linguistic phenomenon generalized to a large number of words in English, or if it was specific to just a relatively small number of famous examples. That didn't seem like enough to propose as a year-long project, but beyond that I didn't have any particular hypotheses.

Sunday morning I finally thought of something, but at 8pm, I decided I didn't like what I had written, gave up and wrote about a different project instead. Monday morning, I sent the project proposal to my advisor and spent most of the day extending that essay into my final paper for my developmental proseminar class. Monday night, I began our take-home final for the developmental proseminar.

I worked on the final for most of Tuesday as well, finishing in the evening. Having spent all day on a frustrating exam, I wanted to do something fun...which for me meant analyzing data. I downloaded the results from the Birth Order survey. Over 2,500 people participated, and the data were the stuff of dreams -- much better than I had hoped. So I archived that survey. I don't need any more data, and I'd rather people who visit the site do one of the new experiments.

Wednesday and Thursday were spent writing and testing the code for a new pilot study on a particular type of linguistic inference. There are three versions of that experiment, and I ran all three on myself (one of them more than once) until I was satisfied. I also had two coworkers give it a run-through.

On Friday, I ran 13 subjects on that pilot study. It's the end of the semester, and many of the undergraduate psych students waited until the last minute to participate in the required number of experiments in order to get credit in their classes. This is always a good time to find subjects.

Since my experiments are all computer-based, running subjects is fairly dull. I greet the participant when s/he shows up, explain the procedure, have him/her sign the consent form, and then get the program up and running. When the participant finishes, I give him/her a debriefing form and answer any questions. I spent the time while waiting reading papers about birth order effects, working on the blog, answering email, and working on the one final project of the week I haven't yet mentioned.

The results of the Video Experiment were much more interesting than expected. I don't want to say anything about it, because we may have to run more conditions in the future, but basically, we were doing what was supposed to be a confirmatory study, proving something everybody already knew. Psychologists often get criticized for spending all their time proving the obvious (people who like to eat tend to eat more, for instance), but the Video Experiment was an example of why we run these studies: we found exactly the opposite of what we and I suppose everybody else would have predicted.

My co-author is in charge of writing the paper, but he has been shooting me emails all week asking for additional analyses. I've also read and commented on several drafts. It is looking pretty good -- much better than if I had written it -- but the senior author hasn't read it yet. We'll see what she says. After she's satisfied (if she's satisfied), we'll send it off to a journal, where the reviewers will tear it to shreds and reject it. We'll rewrite it (after perhaps running another experiment or two) and then resubmit it. It's a long process.

And that was one week in the life of a psychology graduate student. There's definitely a TV show in there somewhere!

What is neuroimaging good for?

On page 32 of the November/December issue, Seed Magazine reports that in July of 2007
Neuroscientists seeking to discern whether culture affects the human brain examined those of a group of Americans and Nicaraguans as they watched different hand gestures specific to their respective cultures.
Hopefully, this is not what said neuroscientists (no reference is given) were actually trying to do, because fMRI is very expensive (it typically costs hundreds of dollars an hour just to rent the machine), and you wouldn't really need to do an experiment to answer this question.

I think it's fairly obvious that people respond differently to language-specific hand gestures (for one thing, they are more likely to respond to them at all). If people respond differently, then their brains should also respond differently. To suggest otherwise means you believe that the difference in behavior is due to either (1) an immaterial soul that can engage in activities independently of the body, or (2) some organ of the body outside the brain.

These are both logically possible hypotheses, but the research over the last few centuries makes them so unlikely to be the case that unless you have a really, really good reason to suspect that the brain is not involved in interpreting hand gestures, then it's not really worth the incredible cost of fMRI to answer this particular research question.

Seed is a decent, informative magazine, so the fact that they let this slip is just more evidence of how pervasive this thinking is.

Does diversity increase productivity?

One of the arguments for diversity-based hiring is that a more diverse workforce is more productive. Is that true?

Scott E. Page, a professor of complex systems, political science and economics at the University of Michigan, argues that it is. He uses mathematical models and case studies to support the claims, which themselves are pretty straightforward.

Here's a quote from a recent interview in the New York Times:
The problems we face in the world are very complicated. Any one of us can get stuck. If we're in an organization where everyone thinks in the same way, everyone will get stuck in the same place.

But if we have people with diverse tools, they'll get stuck in different places. One person can do their best, and then someone else can come in and improve on it.
Of course, this isn't exactly a new idea. But ideas are a dime a dozen. What Page has are data.

On a related topic, Richard Hackman of Harvard University, who also studies the productivity of work teams, is now arguing that panels of experts can be less productive due to their expertise. He specifically argues that blue-ribbon commissions like the 9/11 Commission are often unproductive because, although they are filled with people with a great deal of expertise, such panels are often very inefficient at using that expertise.

Ambiguity

I came across this excellent quote in a journal article yesterday about ambiguity in language:

However, the flexibility of language allows us to go far beyond this. For example, as revealed by a brief Internet search, speakers can use “girl” for their dog (“This is my little girl Cassie…she's much bigger and has those cute protruding bulldog teeth”), their favorite boat (“This girl can do 24 mph if she has to”), or a recently restored World War II Sherman tank (“The museum felt that the old girl was historically unique”). Such examples reveal that for nouns, it is often not enough to just retrieve their sense, i.e., some definitional meaning, from our mental dictionaries.

- Van Berkum, Koornneef, Otten, & Nieuwland (2007). Establishing reference in language comprehension: An electrophysiological perspective. Brain Research, 1146, 158-171.

For more about ambiguity in language, check here, here and here.

Why scientists need to do better PR

A couple days ago I asked whether psychology was a science. Many of the responses I got confirmed what I already knew from reading message boards and talking with other academics. Psychology must have done a terrible job of PR, given that so many well-educated folk and scientists in other fields have absolutely no idea what it's about.

I commonly hear statements like "psychologists don't do experiments" or "psychology experiments aren't well-controlled" or "psychology experiments aren't replicable." Saying psychologists don't use experimental controls is like saying that the existence of electrons is "unproven" or that evolution is a "theory in crisis." Basically, the only way somebody could say something like this is if they are entirely ignorant of the field.

The big difference, though, is that I doubt it's widely accepted among biologists that electrons probably don't exist, or among physicists that evolution is an unproven, shaky hypothesis; yet it does seem that an embarrassingly large number of physicists and biologists (and other scientists, too -- I'm not picking on physics or biology) have similarly unfounded views about psychology.

(If you really need an example of a replicable, robust psychological phenomenon, try out the Stroop effect, which is also an example of an experiment with good controls. This is a bit like defending evolution, though. Leafing through any reputable journal should be sufficient.)

So why care if people are ignorant of psychology? For the same reasons it's important that they be informed about every branch of science. First, there's a lot of information there that would be useful to people in their daily lives. Second, if people don't understand and value a discipline, they're less likely to fund it. Science in America is largely funded directly or indirectly by the public. If you believe a particular science is important for the health of the country, then it's important that enough voters also value it.



On the question of replicability, it's true that some results in psychology don't replicate, sometimes because the results were a fluke and sometimes because the experimenters made a mistake. I wonder, though, if it's actually less common in other fields. In physics lab in college, my lab partners and I measured the speed of light, getting an answer way different from the accepted figure (no, we weren't even within measurement error of the correct number).
So that's at least an existence proof that it's possible to do an experiment in physics that won't replicate (that is, our experimental results don't replicate).

Is there a moral grammar?

Morality may seem like a topic for philosophers and theologians rather than psychologists. While it is true that during the last few decades moral reasoning hasn't been a hot topic of psychological research, moral reasoning is a behavior -- and an important one -- and that makes it a worthy topic for psychology. (I don't mean that psychologists should study what is moral and what isn't, but rather what humans think is moral and what they think is not, and why.) In the last few years, interest in the field has exploded.

One of the most controversial new approaches, promoted by Marc Hauser of Harvard University, is to study moral reasoning by analogy to linguistics. For instance, what are the phonemes of moral reasoning? What is the grammar that determines whether an action is considered moral or not?

There has been a lot of criticism of this analogy, none of which seems to particularly bother Hauser. What is interesting is that he has put forward the analogy of moral reasoning to linguistic reasoning not so much because he thinks it's literally true (in fact, he thinks it would be bizarre if morality were exactly like language -- they are obviously different systems), but because he thinks the analogy leads to new questions about moral reasoning that nobody was asking. This leads to new experiments, new data, and hopefully better theories. Hauser argues that the linguistic analogy has done just this.

There is something to this argument. Obviously having a correct theory is ideal. However, few if any theories -- psychological or otherwise -- are through-and-through true, and so it's better to have an incorrect theory that at least points research in a profitable new direction than an incorrect theory that leads nowhere.

You can find some of his recent published papers here. For a less technical treatment, though, you might read his new book. You can also participate in his Moral Sense Test here. For more thoughts about the scientific method, read this. For more about the scientific method and psychology in particular, read yesterday's post.

Is psychology a science?

Is psychology a science? I see this question asked a lot on message boards, and I thought it was time to discuss it here. The answer depends entirely on what you mean by "psychology" and what you mean by "science."

First, if by "psychology" you mean seeing clients (like in Good Will Hunting or Silence of the Lambs), then, no, it's probably not a science. But that's a bit like asking whether engineers or doctors are scientists. Scientists create knowledge. Client-visiting psychologists, doctors and engineers use knowledge. Of course, you could legitimately ask whether client-visiting psychologists base their interventions on good science. They often don't. But that can also be said about doctors and, I'd be willing to bet, engineers.

However, there is a different profession that, largely for historical reasons, shares the same name. That is the branch of science which studies human and animal behavior, and it is also called "psychology." It's not as well known, and nobody makes movies about us (though if paleoglaciologists get to save the world, I don't see why experimental psychologists shouldn't!), but it does exist.

A friend of mine (a physicist) once claimed psychologists don't do experiments (he said this un-ironically over IM while I was killing time in a psychology research lab). My response now would be to invite him to participate in one of these experiments. Based on this Facebook group, I know I'm not the only one who has heard this.

There are also those, however, who are aware that psychologists do experiments, but deny that it's a true science. Some of this has to do with the belief that psychologists still use introspection (there are probably some somewhere, but I suspect there are also physicists who use voodoo dolls somewhere as well, along with mathematicians who play the lottery). The more serious objection has to do with the statistics used in psychology.

In the physical sciences, typically a reaction takes place or it does not, or a neutrino is detected or it is not. There is some uncertainty given the precision of the tools being used, but on the whole the results are fairly straightforward and the precision is pretty good.

In psychology, however, the phenomena we study are noisy and the tools lack much precision. When studying a neutrino, you don't have to worry about whether it's hungry or sleepy or distracted. You don't have to worry about whether the neutrino you are studying is smarter than average, or maybe too tall for your testing booth, or maybe it's only participating in your experiment to get extra credit in class and isn't the least bit motivated. It does what it does according to fairly simple rules. Humans, on the other hand, are terrible test subjects. Psychology experiments require averaging over many, many observations in order to detect patterns within all that noise.
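The logic of averaging over many noisy observations can be sketched in a few lines. This is a toy simulation (the numbers are invented for illustration, not drawn from any real study): each observation contains a small true effect buried in much larger trial-to-trial noise, so a single measurement tells you almost nothing, but the mean over many measurements converges on the true effect.

```python
import random

random.seed(0)

TRUE_EFFECT = 20   # e.g., a hypothetical 20 ms difference between conditions
NOISE_SD = 150     # trial-to-trial variability dwarfs the effect

def one_observation():
    """A single noisy measurement of the effect."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

def mean_of(n):
    """Average over n independent observations."""
    return sum(one_observation() for _ in range(n)) / n

for n in (1, 10, 1000):
    # With n=1 the estimate is all over the place; with n=1000 it is
    # reliably close to the true 20-unit effect.
    print(f"n={n:5d}  estimated effect = {mean_of(n):7.1f}")
```

The standard error shrinks with the square root of the number of observations, which is why psychology experiments pile up trials and subjects rather than trusting any single measurement.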

Some people find this noisiness deeply unsettling and dislike the methods social scientists have developed to compensate for it, and thus would prefer to exclude the social sciences from the term "science." This is fair in the sense that you can define words however you want, but it does mean that, by that definition, a great deal of the world -- basically all of human and animal behavior -- is necessarily unexplainable by science.

So what do you think? Are the social sciences sciences? Comments are welcome.

Why languages can't be learned

One of the most basic, essentially undisputed scientific facts about language -- and the one that tends to get the most interest from laypeople -- is that while learning a foreign language as an adult is very difficult, children learn their native languages with as much ease and proficiency as they learn to walk. This has led researchers such as Steven Pinker to call language learning an "instinct." In fact, this "instinct" is more than remarkable -- it's miraculous. On careful reflection it seems impossible to learn just a single word in any language, much less an entire vocabulary (and thus figuring out how we nonetheless all learned a language is a major area of research).

The paradox goes back to W. V. O. Quine (who, I'm proud to say, is a fellow Obie), who suggested this thought experiment: Suppose you are an anthropologist trying to learn the language of a new, previously undiscovered tribe. You are out in the field with a member of the tribe. Suddenly, a rabbit runs by. The tribesperson points and says, "Gavagai!"

What do you make of this? Most of us assume that "gavagai" means "rabbit," but consider the possibilities: "white," "moving whiteness," "Lo, food", "Let's go hunting", or even "there will be a storm tonight" (suppose this tribesperson is very superstitious). Of course, there are even more exotic possibilities: "Lo, a momentary rabbit-stage" or "Lo, undetached rabbit parts." Upon reflection, there are an infinite number of possibilities. Upon further reflection (trust me on this), you could never winnow away the possibilities and arrive at the meaning of "gavagai" ... that is, never unless you make some assumptions about what the tribesperson could mean (for instance, if you assume definitions involving undetached rabbit parts are too unlikely to even consider).

Quine offered this thought experiment in a discussion about translation, but it clearly applies to the problems faced by any infant. To make matters worse, people rarely name objects in isolation -- parents don't say "bunny," they say "Look, see the bunny?" or "Look at that bunny go!"
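The induction problem can be made concrete with a toy cross-situational learner (a deliberately simplified sketch; the scenes and candidate meanings are invented for illustration). Even after several sightings, more than one hypothesis about "gavagai" remains perfectly consistent with the evidence unless the learner rules some out by assumption.

```python
# Each scene lists everything that is true of the situation when the
# tribesperson says "gavagai!"
scenes = [
    {"rabbit", "white", "moving", "undetached rabbit parts"},
    {"rabbit", "white", "undetached rabbit parts"},
    {"rabbit", "moving", "undetached rabbit parts"},
]

# Start with every feature ever observed as a candidate meaning...
candidates = set.union(*scenes)

# ...and keep only those present in EVERY scene where the word occurred.
for scene in scenes:
    candidates &= scene

# "rabbit" and "undetached rabbit parts" both survive: the data alone
# can never distinguish them, since they are true in exactly the same scenes.
print(candidates)
```

Only a prior bias (say, "whole objects are likelier referents than undetached object parts") breaks the tie, which is exactly the kind of assumption infants seem to bring to the task.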

In general, it should be very clear that infants could not learn a language if they didn't make certain assumptions about which words meant what. One of the major areas of modern psycholinguistics is figuring out what those assumptions are and where they come from (that is, are they innate or are they learned?).

Long-time readers know that the major focus of my research is on how people resolve ambiguity in language. My first web-based experiment on this topic has been running for a while. Last week I posted a new experiment. Participants hear sentences like "Show me the dax" and try to guess which of several new objects might be the "dax." As usual, I can't say much about the purpose of the experiment while it's still running, but participants who finish the experiment will get an explanation of the experiment and also will get to see their own results. You can try it by clicking here.

How does the brain read?

Reading is an important skill, so it's not surprising it gets a lot of attention from researchers. Reading is an ancient skill -- at least in some parts of the world -- but not so old that we don't know when it was invented (as opposed to, for instance, basic arithmetic). And, unlike language, it appeared recently enough in most of the world that it's unlikely that evolution has had time to select for reading skill...which would explain the high prevalence of dyslexia.

Some decades ago, there was a considerable amount of debate over whether reading is phonologically based -- that is, "sounding out" is crucial (CAT -> /k/ + /æ/ + /t/ -> /kæt/) -- or visual-recognition based -- that is, you simply recognize each word as a whole form (CAT -> /kæt/). People who favored the former theory emphasized phonics-based reading instruction, while the latter theory resulted in "whole language" training.

At least from where I sit, this debate has been largely resolved in favor of phonics. This isn't to say that skilled readers don't recognize some high-frequency words as wholes, but it does mean that sounding out words is crucial, at least in learning to read. One important piece of evidence is that "phonological awareness" -- the ability to figure out that CAT has 3 sounds while COLON has 5, or that DOG and BOG rhyme -- is just about the best predictor of reading success. That is, preschoolers who are at the bottom of the pack in terms of phonological awareness tend later to be at the bottom of the pack in learning to read.
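What phonological-awareness tasks measure can be illustrated with a toy sketch (the phoneme inventory here is a made-up mini-lexicon, not a real pronunciation dictionary): counting the sounds in a word and judging rhymes both require access to the word's phonemes, not just its spelling or its whole-word form.

```python
# A hypothetical mini-lexicon mapping spellings to phoneme sequences.
PHONEMES = {
    "CAT":   ["k", "æ", "t"],
    "COLON": ["k", "oʊ", "l", "ə", "n"],
    "DOG":   ["d", "ɔ", "g"],
    "BOG":   ["b", "ɔ", "g"],
}

def sound_count(word):
    """How many phonemes a word has (not how many letters)."""
    return len(PHONEMES[word])

def rhymes(a, b):
    # Crude rhyme test for this toy lexicon: same final vowel + consonant.
    return PHONEMES[a][-2:] == PHONEMES[b][-2:]

print(sound_count("CAT"), sound_count("COLON"))  # 3 5
print(rhymes("DOG", "BOG"))                      # True
```

Note that neither judgment can be read off the orthography: COLON has five letters and five sounds, but CAT has three letters and three sounds only by coincidence of English spelling.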

At least, that is the story for writing systems like English that are alphabetic. There has been some question as to the role of phonology in learning to read character-based systems like Chinese. Two years ago, a group including Li Hai Tan of Hong Kong University presented evidence that in fact phonological awareness may not be particularly important in learning to read Chinese.

I have been trying to test one aspect of their theory for some time. Not having collaborators in China or Taiwan, I have to recruit my Chinese-speakers here in Cambridge, which is harder than you might think. The first experiment I ran took nearly six months, most of which was spent trying to recruit participants, and it was ultimately inconclusive. Last spring I piloted a Web-based version of the experiment, thinking that I might have more luck finding Chinese participants through the Internet. However, that experiment failed. I think it was too complicated and participants didn't understand what to do.

I have spent the last few months thinking the problem through, and now I have a new Web-based study. I am trying it in English first, and if it works well enough, I will write a Chinese version of the experiment. If you are interested, please try it out here.

Knowing the meanings of words

In “On the evolution of human motivation: the role of social prosthetic systems,” Stephen Kosslyn makes a very interesting conjecture about social interactions. He argues that, for a given person, “other people serve as prosthetic devices, filling in for lacks in an individual’s cognitive or emotional abilities.” This part seems hard to argue with. Intuitively, we all rely on other people to do certain things for us (mow our grass, edit our papers, provide us with love). His crucial insight is that “the ‘self’ becomes distributed over other people who function as long-term social prosthetic systems.”

You may or may not agree with that stronger claim. I haven't made up my own mind yet. I recommend reading the paper itself, which unfortunately is not available on his website but should be available in a decent college library.

There is one interesting application of his idea to an old problem in linguistics and philosophy.
What is the problem? Intuitively, we would like to believe that our words pick out things in the world (although words and concepts are not interchangeable, for the purposes of this discussion, they have the same problems). When I say “cows produce milk,” I like to believe that this sentence is either true or false in the world. For this to even be plausible, we have to assume that the words “cow” and “milk” refer to sets of real, physical objects.

This is problematic in myriads of ways. It is so full of paradoxes that Chomsky has tried to define away the problem by denying that words refer to anything in the world. I will focus on one particular problem that is relevant to the Kosslyn conjecture.

If you are like me, you know nothing about rare plants such as the three-seeded mercury or the Nova Scotia false-foxglove. Yet we are able to have conversations about them. I can tell you that both are endangered in the state of Maine, for instance. If I tell you that they both survive on pure boron, you would probably be skeptical. Thus, we can talk about these plants and make empirical claims about them and learn new things about them without having any idea what these words actually pick out in the world. This is true of a large number of things we talk about on a daily basis. We talk about people we have never met and places we have never been.

What distinguishes these words from words that truly have no reference? To you, likely neither the word “Thistlewart” nor the word “Moonwart” means anything. Now, suppose I tell you the first is a made-up plant, while the second is a real plant. To you, these are still both essentially empty words, except one refers to something in the world (though you don’t know what) and the other doesn’t.

Intuitively, what makes “Thistlewart” an empty concept and “Moonwart” not is that you believe there is some expert who really does know what a Moonwart is and could pick one out of a lineup. This “Expert Principle” has seemed unsatisfying to many philosophers, but within the context of the “social prosthetic system” theory, it seems quite at home. Certainly, it seems like it might at least inform some of these classic problems of reference and meaning.

Wait -- are you suggesting your brain affects your behavior?

One of my office-mates burst out laughing on Monday after receiving an email. The email was a forward, but it wasn't intended to be funny. It was a brief news blurb about a recent study looking at teenage impulsiveness, entitled "Teens' brains hold key to their impulsiveness."

What's funny about that? Well, where did the journalist think the key to impulsiveness was hidden -- in teens' kidneys? Many scientists puzzle over the fact that 150 years of biology have not driven out Creationism, but 150 years of psychology and neuroscience have been even less successful. Many people -- probably most -- still believe in mind/brain duality.

Philosophers began suggesting that all human behavior is caused by the physical body at least as early as Thomas Hobbes in the 1600s. A century and a half of psychology and neuroscience has found no evidence of an immaterial mind, and now the assumption that all behavior and thought is caused by the physical body underlies essentially all modern research. It's true that nobody has proved that immaterial minds do not exist, but similarly nobody has ever proved the nonexistence of anything. It just seems very unlikely.

This leads to an interesting dichotomy between cognitive scientists and the general public. While journalists get very excited about studies that prove some particular behavior is related to some particular part of the brain, many cognitive scientists find such studies to be colossal wastes of time and money. It would be like a physicist publishing a study entitled "Silicon falls when dropped." Maybe nobody ever tested to see whether silicon falls when dropped, but the outcome was never really in doubt. 

This isn't to say that the study I mentioned above wasn't a useful study. I have no doubt that it is a very useful study. Determining mechanistically what changes in what parts of the brain during development affect impulsiveness is very informative. The mere fact that the brain changes during development, and that this affects our behavior, is not.

Scientists arguing about the scientific method

The scientific method should be at least passingly familiar to most people who took a high school science class. Generate a hypothesis, then design an experiment that will either support or contradict your hypothesis. A more nuanced version is to find two competing hypotheses, then design an experiment that will unambiguously support at most one of those two hypotheses.

But is this what scientists actually do? Is it what scientists should do?

This question was put to us by Ken Nakayama in our first-year graduate psych seminar last week. Though it may surprise some of you, his answer was "no." In contrast to theory-driven research (the proposal above), Nakayama prefers data-driven research.

Although there are some good descriptions and defenses of theory-driven research, I don't know of one for data-driven research. Here's my best effort at describing the two.

Suppose you are a tinkerer who wants to know how a car works. If you are a theory-driven tinkerer, you would start with competing hypotheses (that tube over there connects the gas tank to the engine VS that tube over there is part of an air circulation system) and conduct experiments to tease those hypotheses apart. The theory-driven tinkerer will focus her energies on experiments that will best tease apart the most important theories, ignoring phenomena that aren't theoretically important. 

A data-driven tinkerer would say, "I wonder what happens if I do this," do it, and see what happened. That is, she may run experiments without having any hypotheses about the outcome, just to see what happens. If the data-driven tinkerer's attention is caught by some odd phenomenon (the car seems to run better in the afternoon than in the morning), she may pursue that phenomenon regardless of whether it seems theoretically interesting or helps distinguish between competing hypotheses. 

One potential reason to favor data-driven research is that theory-driven research is constrained by our theories (which, at this stage in psychology and cognitive neuroscience, frankly aren't very good), while data-driven research is constrained only by your imagination and skill as an experimentalist. Data-driven exploration, one might argue, is more likely to lead to surprising discoveries, while theory-driven research may only show you what you expected to see.

I suspect that most psychologists use some combination of the two strategies, though when it comes time to write a paper, it seems to be easier to publish data that are relevant to theory (whether it was theory that led you to do the experiment in the first place is another question).

Thoughts?

How do children learn to count? Part 3

Two posts ago, I presented some rather odd data about the developmental trajectory of counting. It turns out children learn the meanings of number words in a rather odd fashion. In my last post, I described the "number" systems that are in place in animals and in infants before they learn to count. Today, I'll try to piece all this together to explain how children come to be able to count.

Children first learn to map number words onto a more basic numerical system. They learn that "one" maps on to keeping track of a single object. After a while, they learn "two" maps onto keeping track of one object plus another object. Then they learn that "three" maps onto keeping track of one object plus another object plus another object. All this follows from the Wynn experiments I discussed two posts ago.

Up to this point, they've been learning the meanings of these words independently, but around this time they notice a pattern. They know a list of words ("one, two, three, four") and that this list always goes in the same order. They also notice that "two" means one more object than "one," and that "three" means one more object than "two." They put two and two together and figure out that "four" must mean one more object than "three," even though their memory systems at that age don't necessarily allow them to pay attention to four objects simultaneously. Having made this connection, figuring out "five," "six," etc., comes naturally.
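The inductive leap can be sketched as code (a cartoon of the proposal, not a model from the papers): once the child notices that each word in the memorized count list means "one more" than the word before it, the meanings of all the later words come for free.

```python
# The memorized, always-in-the-same-order count list.
COUNT_LIST = ["one", "two", "three", "four", "five", "six"]

# Meanings learned one at a time, by mapping words onto tracked objects.
known = {"one": 1, "two": 2, "three": 3}

def bootstrap(count_list, known):
    """Apply the generalization: each next word in the list means
    one more than the previous word."""
    meanings = dict(known)
    for prev, word in zip(count_list, count_list[1:]):
        if word not in meanings and prev in meanings:
            meanings[word] = meanings[prev] + 1
    return meanings

print(bootstrap(COUNT_LIST, known))
```

The key point the sketch makes concrete: nothing about "four," "five," or "six" had to be learned individually. One rule, noticed from the first three mappings, assigns all the rest.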

So what is that more basic number system? One possibility is that children learn to map the early number words onto the analog number system I also described in the last post (the system adults use to estimate number when we don't have time to count).

Something like this claim has been made by a number of well-known researchers (Dehaene, Gallistel, Gelman and Wynn, to name a few). There are a number of a priori reasons Susan Carey of Harvard thinks this won't work, but even more important are the data.

As I described two posts ago, very young children can hand you one marble when asked, but hand you random numbers of marbles if asked for "two," "three" or any larger number. They always give you more than one, but they can't distinguish between the other numbers. Following Wynn, these are called "one-knowers." Slightly older children are "two-knowers," who can give you one or two marbles, but give you random amounts greater than 2 if asked for any other number. At the next stage, the child becomes a "three-knower." Usually, the stage after that is being able to succeed on any number. I'll call those children "counters."
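The knower-level pattern can be summarized in a few lines of code. This is a toy simulation of my own, just to make the response pattern concrete; the function name and parameters are invented for illustration:

```python
import random

def give_n(knower_level, request, max_handful=10):
    """Toy simulation of the give-N task (illustrative only).

    A child at a given knower level hands over the exact number requested
    when it is within their level; for anything larger, they grab a random
    handful that is at least more than what they reliably know.
    """
    if request <= knower_level:
        return request
    return random.randint(knower_level + 1, max_handful)

# A "two-knower" gets 1 and 2 right, but guesses for "five":
print(give_n(2, 1))  # 1
print(give_n(2, 2))  # 2
print(give_n(2, 5))  # some random number from 3 to 10
```

A "counter" is just the limiting case where the knower level covers every number requested.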

Recently, LeCorre and Carey replicated this trajectory using cards with circles on them. They presented the children a card with some number of circles (1 to 8) and asked the kid, "How many?" One-knowers tended to reply "one" to a card with one circle, and then guessed incorrectly for just about everything else. Two-knowers could count one or two circles, but guessed incorrectly for all the other cards. Three-knowers could count up to three, but just guessed beyond that. Counters answered correctly on essentially all cards.

So far this doesn't tell us whether children learn to count by bootstrapping off of analog magnitudes or some other system. Carey and Mathieu LeCorre published a paper this year that seems to settle the question. The setup was exactly the same as in the last paper (now with cards with anywhere from 1 to 10 circles), except that this time the children were only briefly shown the card. They didn't have enough time to actually count "one, two, three..." The data for one-, two- and three-knowers didn't change, which isn't surprising. Both the "3-object" and the analog magnitude systems are very fast and shouldn't require explicit counting.

However, counters fell into two groups. One group, about 4.5 years old on average, answered just as adults do. When they saw six circles, their answers averaged around "six." When they saw ten circles, their answers averaged around "ten." This is what you'd expect if they have mapped number words onto the analog magnitude system.

The other group, which was slightly younger (average age of 4 years, 1 month), guessed randomly for cards with 5 or more circles, just as if they didn't know how to count. But these kids can count: given time to look at the cards, they would have said the right number. So despite the fact that they can count, they do not seem to have mapped their analog magnitude system onto number words.

This means that children learn the mapping between number words and the analog magnitude system only some time after they've learned to count. Carey takes this as evidence that the analog magnitude system doesn't play a fundamental role in learning to count, and there are other reasons as well to think this is probably the case.

One remaining possibility is that children use the "3-object system" to understand the meanings of 1, 2 and 3. This seems to work nicely, given that the limits of the system (3 objects in children, 4 in adults) seem to explain why children can learn "one," "two," and "three" without really learning to count. Carey actually has a somewhat more nuanced explanation, in which children learn the meanings of "one," "two," and "three" the same way that quantifiers (like "a" in English) are learned. However, to the best of my knowledge, she doesn't have an account of how such quantifiers are learned, and if she did, I suspect it would itself hinge on the 3-object system anyway.

That's it for how children learn to count, unless I get enough comments asking for more details on any point. For those who want to read more, there are many papers on this subject at Carey's web page.

How do children learn to count? Part 2

In my last post, I showed that children learn the meaning of number words in a peculiar but systematic fashion. Today, I'll continue trying to explain this odd behavior.

Important to this story is that children (and non-human primates) are born with several primitive but useful numerical systems that are quite different from the natural number system (1, 2, 3, ...). They can't use these systems to count, but they may be useful in learning to count. In this post, I'll try to give a quick summary of how they work.

One is a basic system that can track about 3-4 objects at a time. This isn't a number system per se, just an ability to pay attention to a limited and discrete number of things, and it may or may not be related to similar limits in visual short-term memory.

You can see this in action by playing the following game with a baby under the age of 2. Show the baby two small boxes. Put a single graham cracker into one of the boxes. Then put, one at a time, two graham crackers into the other box. Assuming your baby likes graham crackers, she'll crawl to the box with two graham crackers. Interestingly, this won't work if you put two graham crackers in one box and four in the other. Then, the baby chooses between the boxes randomly. This is understood to happen because the need to represent 6 different objects all in memory simultaneously overloads the poor baby's brain, and she just loses track. (If you want to experience something similar, try to find a "multiple object tracking" demo with 5 or more objects. I wasn't able to find one, but you can try this series of demos to get a similar experience.)
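The graham cracker result can be captured in a toy model: the comparison succeeds only when each box's contents fit within the tracking capacity, and fails to random choice otherwise. Everything here (the function, the capacity parameter) is my own illustrative guess, not a model from the literature:

```python
import random

def choose_box(a, b, capacity=3):
    """Toy model of the infant object-tracking limit (illustrative only).

    If both quantities fit within tracking capacity, the baby picks the
    larger; if either exceeds it, tracking breaks down and the choice
    is random.
    """
    if a <= capacity and b <= capacity:
        return max(a, b)              # reliably crawls to the bigger box
    return random.choice([a, b])      # overloaded: picks at random

print(choose_box(1, 2))  # 2 -- within capacity, baby picks the bigger box
print(choose_box(2, 4))  # random: 4 exceeds the 3-object limit
```

Note that the model fails on 2-vs-4 even though the ratio is a generous 1:2, which is the puzzle the analog magnitude system (below) exists to explain differently.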

On the other hand, there is the analog magnitude system. Infants and non-human animals have an ability to tell when there are "more" objects. This isn't exact. They can't tell 11 objects from 12. But they can handle ratios like 1:2. (The exact ratio depends on the animal and also where it is in maturity. We can distinguish smaller ratios than infants can.)

You can see this by using something similar to the graham cracker experiment. Infants like novelty. If you show them 2 balls, then 2 balls again, then 2 balls again, they will get bored. Then show them 4 balls. They suddenly get more interested and look longer. However, this won't happen if you show them 4 balls over and over, then show them 5. That ratio is too similar. (I'm not sure if you get this effect in the graham cracker experiment. I suspect you do, but I couldn't find a reference off-hand. The graham cracker experiment is more challenging for infants, so it's possible the results might be somewhat different.)

You can also try this with adults. Show them a picture with 20 balls, and ask them how many there are. Don't give them time to count. The answer will average around 20, but with a good deal of variation. They may say 18, 19, 21, 22, etc. If you give the adult enough time to count, they will almost certainly say "20."
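The two signatures of the analog magnitude system described above -- noisy estimates whose spread grows with the quantity, and discrimination that depends on ratio rather than absolute difference -- can be sketched in a few lines. The specific numbers here (the noise level, the ratio threshold) are illustrative guesses of mine, not measured values:

```python
import random

def analog_estimate(n, weber_fraction=0.15):
    """Toy model of analog magnitude estimation: the internal estimate of n
    is noisy, with noise proportional to n (scalar variability).
    The weber_fraction value is an illustrative guess."""
    return max(1, round(random.gauss(n, weber_fraction * n)))

def discriminable(a, b, threshold=1.5):
    """Two quantities are reliably told apart only if their ratio is big
    enough; the threshold shrinks with maturity (illustrative value)."""
    return max(a, b) / min(a, b) >= threshold

print(discriminable(2, 4))   # True  -- 1:2 ratio, easy
print(discriminable(4, 5))   # False -- too close to tell apart
print([analog_estimate(20) for _ in range(5)])  # answers scattered around 20
```

Estimates of a display of 20 balls will average around 20 but vary from trial to trial, which is exactly the adult pattern described above.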

Those are the two important prelinguistic "number" systems. In my next post, I'll try to piece all this information together.

How do children learn to count? Part 1

How do children learn to count? You could imagine that numbers are words, and children learn them like any other word. (Actually, this wouldn't help much, since we still don't really understand how children learn words, but it would neatly deflect the question.) However, it turns out that children learn to count in a bizarre fashion quite unlike how they learn about other words.

If you have a baby and a few years to spend, you can try this experiment at home. Every day, show your baby a bowl of marbles and ask her to give you one. Wait until your baby can do this. This actually takes some time, during which you'll either get nothing or maybe a handful of marbles.

Then, one day, between 24 and 30 months of age, your toddler will hand you a single marble. But ask for 2 marbles or 3 marbles, and your toddler will give you a random handful. The number of marbles won't be systematically larger if you ask for 10 than if you ask for 2. This is particularly odd, because by this age the child typically can recite the count list ("one, two, three, four...").

Keep trying this, and within 6-9 months, the child will start giving you 2 marbles when asked, but still give a random handful when asked for 3 or 4 or 5, etc. Wait a bit longer, and the child will manage to give you 1, 2 or 3 when asked, but still fail for numbers greater than 3.

This doesn't continue forever, though. At around 3 years old, children suddenly are able to succeed when asked for any number. They can truly count. (This is work done some years ago by Karen Wynn, who is now a professor of psychology at Yale University.)

Of course, this is just a description of what children do. What causes this strange pattern of behavior? We seem to be, as a field, homing in on the answer, and in my next post I'll describe some new research that sheds light onto the question.