Field of Science

Stupid babies learn language

It is well-known that infants learn their native languages with incredible ease. I just came across a passage that puts this into particularly striking context:
A first point to note here is the obvious intellectual limitations that children have while language acquisition proceeds apparently without any effort. We are all extremely impressed if a two-year-old figures out how to put the square blocks in the square holes and the round blocks in the round holes. Yet somehow by this age children are managing to cope with the extraordinarily difficult task of learning language.
This is particularly impressive, the authors point out, given that according to a number of theories
we are to believe that children do both of these things using the very same domain neutral intellectual resources. This is all the more remarkable given that a complete grammar for a single language remains an uncompleted goal for professional linguists.

Laurence, S., Margolis, E. (2001). The poverty of the stimulus argument. British Journal for the Philosophy of Science, 52, 217-276.

Should you talk to your baby?

ABC News is running a story about talking to your baby. They start with some alarming news: you may not be talking to your baby enough. How much you talk to your baby affects everything from school performance to IQ. They suggest that an optimal amount is 30,000 words per day. They even peddle a new device that will count how many words you say to your baby, so that you know whether you are hitting that magic 30,000, or at least the "more realistic" 17,000.

Does it really matter how much you talk to your baby?

Maybe, maybe not. It is true that babies who are talked to more have higher IQ scores and do better in school. This is partly because smarter parents talk more to their kids, and smarter parents also tend to have smarter kids. It is also the case that middle- and upper-class parents talk to their kids more than do lower-class parents. This may be a factor in why lower-class children do worse in school, but it is probably not the only reason.

The problem is one of random assignment. We can't assign some babies to hear a lot of speech and some babies to hear no speech. Without that, it is impossible to un-confound genetic and socio-economic factors. A very creative study might be able to do so, but I have not come across any such study, and the news piece doesn't mention one.

Nonetheless, it is certainly possible that talking to your baby makes a difference (even though in some cultures parents do not talk to their children before the age of 2 or 3, and the children turn out fine). However, the 30,000-word rule is almost certainly fictitious. Assuming the baby is awake for 12 hours a day, that comes out to about 42 words per minute, nonstop, every waking minute. I'm not sure I would want a parent who talked that much.
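
For what it's worth, the arithmetic is easy to check. Here is a quick back-of-the-envelope script (the 12-hour waking day is the same assumption as above):

```python
# Back-of-the-envelope check of the word counts in the ABC News story.
words_optimal = 30_000    # the "magic" daily total from the story
words_realistic = 17_000  # the "more realistic" daily total
waking_minutes = 12 * 60  # assuming a 12-hour waking day

for label, words in [("30,000", words_optimal), ("17,000", words_realistic)]:
    print(f"{label} words/day = {words / waking_minutes:.1f} words per minute, nonstop")
# 30,000 words/day works out to about 41.7 words per minute, every waking minute.
```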

What would you do if you were stuck in an elevator for two days?

What would you do if you were stuck in an elevator for two days? Here is what one person did:

Sure, that's plausible

I am happy to say that the results from the recently revived Video Experiment have been excellent, and while we're still collecting some data just in case, the revised paper should be submitted for publication shortly. That is one month after we got the reviewers' comments back on the original manuscript, a faster turn-around than I've ever managed before.

In the meantime, a lab-mate is running a new online survey called, "How Likely? A Plausibility Study."

The idea goes like this. We use lots of different types of information to understand what people are saying: Word order, general knowledge, intonation, emotion... and plausibility. If you hear a restaurant employee ask, "Can I bake your order?" you know that the resulting interpretation is implausible. It would be much more plausible to ask, "Can I take your order?"

That sounds like common sense, but we still don't have a good idea of how and when plausibility is used in comprehension. To do research in this area, the first thing we need is some sentences that are more or less plausible than others. The easy way to do it might be to decide for ourselves what we consider to be plausible and implausible sentences.

However, being people who study language all day, we probably aren't very typical. The point of this study is to get a range of people to say how plausible they think different sentences are. Then, these sentences and those ratings can be used in further research.

The survey contains 48 sentences and should take about 10 minutes to do. You can participate in it by clicking here.

Does language affect thought?

The New York Times is running a pretty good state-of-the-field piece on the Whorfian debate.

Neuroimaging study does not disprove free will

An excellent new paper in Nature Neuroscience made a big splash last week by purporting to show that activity in the brain related to muscle movement starts up to ten seconds before the person is consciously aware of having made a decision to move. This study is in fact a replication and extension of previous research suggesting that related brain activity starts at least 300 ms before the conscious decision. The big news, presumably, is that new technology (specifically, pattern-recognition algorithms) allowed the researchers to push back this time window (which really is big news and an excellent application of the technology).
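
For readers curious what "pattern recognition" means here: very roughly, a classifier is trained to predict the eventual button press from the pattern of brain activity at various lead times, and the question is how early it beats chance. The sketch below is a toy illustration of that logic with made-up data -- not the authors' actual fMRI pipeline, which used searchlight pattern classification:

```python
# Toy illustration of the decoding logic (entirely simulated data;
# the real study decoded fMRI signals, not random numbers).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
choice = rng.integers(0, 2, n_trials)   # 0 = left button, 1 = right button

for lead in (10, 7, 4, 1):              # seconds before the reported decision
    strength = 0.25 / lead              # pretend the signal grows as movement nears
    X = rng.normal(size=(n_trials, n_voxels)) + strength * (choice[:, None] - 0.5)
    acc = cross_val_score(LogisticRegression(), X, choice, cv=5).mean()
    print(f"{lead:2d}s before the movement: decoding accuracy = {acc:.0%}")
```

In the real study the predictive signal came from frontopolar and parietal cortex; here it is just noise plus a planted signal, to show what "above-chance decoding at long lead times" means.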

The reason I am using words like "purported" is that some important methodological assumptions are buried in this experiment. (To be sure, the authors did not mention "free will" in the paper itself, though I imagine they were aware of the implications.) In this particular version of the experiment:

The subjects were asked to relax while fixating on the center of the screen where a stream of letters was presented. At some point, when they felt the urge to do so, they were to freely decide between one of two buttons, operated by the left and right index fingers, and press it immediately. In parallel, they should remember the letter presented when their motor decision was consciously made.
Afterwards, they reported which letter had been on the screen when they made their decision. The assumption is that participants are reporting the letter correctly. We already know that conscious perception is a distortion of reality (in fact, this study is a demonstration of that fact), so this may not be a fair assumption.

This case was made some time ago by the philosopher Daniel Dennett in his excellent Freedom Evolves. The argument is somewhat long, but it goes like this. First, we have to assume that participants weren't deciding to press a particular button as soon as the next letter popped up; if they were doing that, they would have already made the decision before that letter appeared, throwing off the scientists' measurements. But even if we assume that is not the case, there is a bigger confound:
If we monitor your brain with an array of surface electrodes ... we will find that the brain activity leading up to [a hand movement] has a definite and repeatable time course, and a shape. It lasts the better part of a second ... ending when your wrist actually moves.
Dennett points out that we aren't aware that it takes our brains a good second to plan, coordinate and execute a simple motor movement.
When we perform an intentional action, we normally monitor it visually (and by hearing and touch, of course) to make sure it is coming off as intended. Hand-eye coordination is accomplished by a tightly interwoven system of sensory and motor systems. Suppose I am intentionally typing the words "flick the wrist" and wish to monitor my output for typographical errors. Since the motor commands take some time to execute, my brain should not compare the current motor command with the current visual feedback, since by the time I see the word "flick" on the screen, my brain is already sending the command to type "wrist" to my muscles.
The effect of actually being aware of the time it takes for your conscious decision to be converted into muscle movement (though Dennett doesn't put it this way) would be a bewildering sense of out-of-sync-ness, something like being drunk, or like watching a baseball game from a great distance, where the crack of the bat reaches your ears just as the image of the runner reaches first base.

Dennett formulated this argument to explain the 300ms difference between conscious decision making and the related brain activity found in previous experiments. However, it certainly can be extended to the new study. If it turns out that it takes 10 seconds from the beginning of a decision to move until the actual movement is carried out, then we most definitely do not want to be aware of it. Much better if our minds trick us into thinking movement follows thought instantaneously.

This argument does require some mental time distortion: just because we think two things are happening simultaneously does not mean that they are. But why should they be? If we have learned anything about the brain in the last couple centuries, it is that perception is useful, not accurate.

Soon, C.S., Brass, M., Heinze, H.J., Haynes, J.D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11(5), 543-545.

Talk about the extraordinary

In a chapter from The Handbook of Social Psychology, 4th Edition, Gilbert notes that people have
an odd habit and a not so odd habit. The not so odd habit is that they describe behavior that is driven by extraordinary dispositions as having been driven by extraordinary dispositions. The odd habit is that they describe behavior that is driven by ordinary dispositions as having been caused by external agencies.
This may sound like a lot of unnecessary jargon, but he immediately breaks it down (Gilbert is an extremely clear and entertaining writer and definitely worth reading):
When one runs screaming from a baby rabbit, one usually owes the bystanders an explanation. Such explanations are acceptable when they are couched in terms of one's extraordinary dispositions--for example, "I have a morbid fear of fur" or "I sometimes mistake baby rabbits for Hitler." On the other hand, when one retreats from a hissing rattlesnake, one does not typically explain that behavior in terms of ordinary dispositions ("I dislike being injected with venom" or "I feel death is bad") but rather, in terms of the stimuli that invoked them ("It shook its thing at me").
This turns out to be part of a broader phenomenon in language, as Gilbert notes. People tend to avoid saying the obvious and focus on the unusual (Grice was probably the first to notice this). This might seem like a very reasonable thing to do, but there is nothing necessary about it. That is, it's easy to imagine people who are as likely to state the obvious as the non-obvious (and there in fact seem to be some people like that, at least in sitcoms). 

What I think is the most interesting part of this, though, is not that people tend to state the non-obvious, but we as listeners expect the speaker to do this. That suggests either some very sophisticated learning or evolution. (The fact that young children are terrible at distinguishing the obvious from non-obvious in conversation doesn't mean that it is a learned skill; it could be a genetically-programmed behavior that simply comes online later in development, just like puberty.)

Against peer review

In response to a recent post, an anonymous commenter wrote that

It would be scientific misconduct ... to make statements based on someone else's unpublished work ... Scientific results don't exist until they have been peer reviewed and published.
Peer review has become the gold standard of the scientific community. Bring up a scientific finding, and the first thing you may be asked is, "Ah, well, is this peer reviewed?" (For those who don't know, peer review means that, before a journal will publish a paper, one or more other scientists who study similar topics read it and evaluate it.) There is now even a popular blog aggregator that focuses exclusively on blogging about peer-reviewed research.

In the age of the Discovery Institute there are some good reasons to focus on peer reviewed research as a way of excluding quacks. It's a way of saying that this research has been vetted.

That said, when I read comments like the one above, I think the time has come to push back, and point out that peer review is not the arbiter of truth. Truth is the arbiter of truth, and peer review is merely a flawed tool we use to help get there.

Peer reviewers don't check to make sure the results are true. Peer reviewers do not typically replicate the experiment in question. They do not check the math. Most of what they do is check that the arguments are reasonable and that the experiment(s) were well designed. Peer reviewers do not necessarily even have to agree with a paper they accept. They may simply think the data are compelling and the arguments are worth hearing, even if they may be wrong.

Thus, peer review does a reasonable job of weeding out quacks. Luckily, most scientists are not quacks, so what does it do for the rest of us? I'm not sure, but I think a partial answer is that two minds are better than one. Reviewers often notice things that the authors missed -- not because the authors weren't smart, but because research is damn complicated and you can never think of everything.

Typically what happens, at least in psychology, is that the reviewers suggest additional analyses or additional experiments that would make the paper stronger. Based on those comments, the authors may run new experiments then revise the paper and resubmit. Peer reviewers, in this sense, aren't so much vetters or fact-checkers as editors. Peer review is a way of improving -- not perfecting -- an article.

So is it scientific misconduct to refer to unpublished work? I don't think so. It is dangerous, though, because there are good reasons (above) to be more confident of something that has gone through peer review. It is a bit impolite to refer to something that has not been published, because then your audience can't go look at it themselves. And that's the main point. Peer reviewers are not the judges of truth, but all of us are on the jury.

The chair vs. stool debate: solved?

Two weeks ago I wrote about the problem with definitions. At scienceblog.com, this post got over 11,000 hits and 41 comments, most of which had to do with answering the age-old challenge of defining the word "chair." There were some very good attempts, none of which ultimately work, which isn't surprising, since many of the greatest minds of the 20th century have tried and failed to solve this problem.

It happens that this week I am reading from Greg Murphy's Big Book of Concepts, which contains an excellent explanation of the problem, one which I think is probably right.

He starts with a big-picture view of the problem of concepts:

We do not wish to have a concept for every single object--such concepts would be of little use and would require enormous memory space. Instead, we want to have a fairly small number of concepts that are still informative enough to be useful (Rosch 1978). The ideal situation would probably be one in which these concepts did pick out objects... Unfortunately, the world is not arranged so as to conform to our needs.

Translating this into the world of words, we don't want a different word for every single piece of furniture (that is, where each of your dining room chairs has its own name). That would be impossible to learn and pretty useless. We also don't want one single word to describe anything on which you might sit -- that would be too broad to be very useful in communication. He continues:

For example, it may be useful to distinguish chairs from stools, due to their differences in size and comfort... However, there is nothing to stop manufacturers from making things that are very large, comfortable stools; things that are just like chairs, only with three legs; or stools with a back. These intermediate items are the things that cause trouble for us, because they partake of the properties of both...

The gradation of properties in the world means that our smallish number of categories will never map perfectly onto all objects: The distinction between members and nonmembers will always be difficult to draw or will even be arbitrary in some cases.
I think Murphy offers a very plausible explanation of why, even in the best of cases, our words could never perfectly divide up the world. It's not possible to have words that pick out discrete categories of things, because there aren't discrete categories of things in the world.
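
One way to make the point concrete is a toy sketch of my own (not Murphy's): represent pieces of furniture as little feature vectors and categorize each by similarity to a "chair" prototype and a "stool" prototype. Clear cases win by a wide margin; the manufacturer's in-between items land near the boundary:

```python
# Toy illustration (mine, not Murphy's): items as feature vectors,
# categorized by similarity to a "chair" vs. "stool" prototype.
# Features: [has_back, legs / 4, relative size & comfort]
CHAIR = (1.0, 1.0, 1.0)
STOOL = (0.0, 0.75, 0.6)

def similarity(item, prototype):
    # Simple inverse squared-distance similarity.
    return 1 / (1 + sum((a - b) ** 2 for a, b in zip(item, prototype)))

items = {
    "dining chair":      (1.0, 1.0, 1.0),
    "bar stool":         (0.0, 0.75, 0.7),
    "stool with a back": (1.0, 0.75, 0.6),  # the troublesome in-between case
    "large comfy stool": (0.0, 0.75, 1.0),  # another in-between case
}

for name, vec in items.items():
    c, s = similarity(vec, CHAIR), similarity(vec, STOOL)
    label = "chair" if c > s else "stool"
    print(f"{name:18s} chair={c:.2f} stool={s:.2f} margin={abs(c - s):.2f} -> {label}")
```

The clear cases come out with large margins; the hybrids get classified too, but by much slimmer margins -- which is the sense in which the boundary is "difficult to draw or even arbitrary."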

This does leave open the question of what words mean, given that they don't have definitions, since they clearly mean something. I'm still on the second chapter of the book, but I suspect the answer won't be in chapter three, since this is still an active area of research and debate.

What if we could run history twice? (Would Madonna still be popular?)

One difficulty in the study of history is that, although you can make predictions, many are difficult to test. You can argue that the US would have still entered World War II even without Pearl Harbor, but the only way to know for sure is to re-run history without the Japanese sneak attack.

Similarly -- and this is the point of the study described below -- you can argue that Madonna rose to become perhaps the cultural icon of the 80s because she anticipated the zeitgeist of the times... or because of luck or good marketing or whatever. Who knows. We as Americans would like to believe that talent always rises to the top, although research shows actual social mobility is less impressive.

Duncan Watts and colleagues at Columbia University dreamed up an ingenious scheme to essentially run history twice, harnessing the power of the Internet. They created a website (now closed, sorry) that allowed people to listen to and then download songs by unknown bands for free. The songs were all ranked according to how often they were downloaded.

The trick is that there were actually 9 different "worlds." When you logged in, you were randomly assigned to one of these worlds. The information you saw about download activity was for your world only. (In one of the 9 worlds, you were given no information about download activity.)

Not surprisingly, people were more likely to listen to and download songs that other people had downloaded. This effect was much weaker in the world in which people didn't know what other people were listening to.

What is more interesting is that how popular a song was in one world predicted pretty well, but not perfectly, how popular it would be in the other worlds. That is, some songs tended to be extremely popular in all worlds, and some were extremely unpopular in all worlds. Otherwise, there was a lot of variability. Some songs were loved in one world but hated in others.
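
To get an intuition for how that can happen, here is a toy rich-get-richer simulation in the spirit of the experiment (my sketch, not the authors' model). Each song has a fixed "appeal," but listeners also gravitate toward whatever is already popular in their world, so identical worlds can crown different hits:

```python
# Toy rich-get-richer simulation of the multiple-worlds design
# (my sketch, not the authors' model).
import random

random.seed(2)
N_SONGS, N_WORLDS, N_LISTENERS = 10, 9, 2000
appeal = [random.uniform(0.5, 1.5) for _ in range(N_SONGS)]  # fixed per song

for world in range(1, N_WORLDS + 1):
    downloads = [1] * N_SONGS
    for _ in range(N_LISTENERS):
        # Each listener is drawn toward songs that are both appealing
        # and already popular in his or her own world.
        weights = [a * d for a, d in zip(appeal, downloads)]
        pick = random.choices(range(N_SONGS), weights=weights)[0]
        downloads[pick] += 1
    hit = max(range(N_SONGS), key=downloads.__getitem__)
    print(f"world {world}: hit = song #{hit} ({downloads[hit]} downloads)")
```

In a simulation like this, songs with very high or very low appeal tend to land at the extremes in every world; it is the middle of the pack where the worlds disagree -- much like the pattern the study reports.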

This experiment, at least, suggests that some people are destined to be stars, but not everybody who is a star was destined to be so. One thing this research doesn't tell us is how to tell which is which. But it does narrow the range of options.

The experimenters were interested in social networks and not psychology, per se. From a psychology standpoint, one limitation of this study is that we don't know whether people actually liked songs better because they knew that other people liked the songs. All we know is that they listened to songs that other people had downloaded, and that they were more likely to download songs that other people downloaded. Of course, it makes sense to listen to what other people are listening to, and you can't download the song (in this study) until you've listened to it. Hopefully some future research will work this out.

Salganik, M.J., Dodds, P.S., Watts, D.J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311, 854-856.

How blind children learn the verb "see"

See is one of the most common words in English. For instance, while time, the most common English noun, gets 3,550,000,000 Google hits, see gets a very respectable 2,980,000,000. This compares well with talk (711,000,000) and eat (253,000,000). This means that blind children can't really avoid the verb altogether. In fact, look and see are among the very first verbs that blind children learn, just like sighted children.

So what do they think it means?

I probably can't answer the question completely, but here are some relevant research results:

When a sighted 3-year-old is asked to "look up," she will tilt her head upwards, even if she is blindfolded. A blind 3-year-old raises her hands instead.

If told "You can touch that table, but don't look at it," the blind 3-year-old will lightly touch the table. If you later tell her she cal look at the table, she may explore all the surfaces of the table with her hands.

It's not likely that blind children are explicitly taught these meanings for these words, so they probably created what are very reasonable meanings for them.

(This research is summarized in Language and experience: Evidence from the blind child by Barbara Landau and Lila Gleitman.)

What if you heard your first word at age 6?

Developmental researchers have learned a great deal about the order in which children learn different aspects of language. Their first words are almost always nouns. Verbs come later. Early "sentences" consist of only one word. Then comes the two-word stage. And so on. These stages tend to happen at particular ages.

We know much less about how children learn language. For instance, just because children typically produce one-word sentences first doesn't mean that it's a necessary step (in fact, some children appear to only start speaking when they are capable of producing multi-word sentences). Maybe the classic developmental trajectory doesn't say anything about how language must be learned, but instead says a lot more about what babies at different ages are developmentally able to do. Thus, maybe 1-year-olds speak like 1-year-olds rather than 3-year-olds not because they only have one year of experience but because their brains are only one year old.

One interesting early "experiment" involved the discovery of a severely abused 6-year-old known as "Isabelle." She had been locked in the attic by her mother and apparently never spoken to. Within a year of being discovered and rescued, she was able to speak at the level of her 7-year-old peers and even started attending an ordinary school. This was pretty good evidence that the slow pace at which babies learn language may have more to do with their brains than with the nature of learning language.

Unfortunately for researchers, but luckily for children, cases like Isabelle's are very rare, limiting how much research can be done. Members of our lab found another way of doing this research: cross-linguistic adoption. Many babies and young children immigrate to the USA each year as adoptees, typically from the former Soviet Union or from China. When they come to the US, they typically are no longer exposed to their home language (even when their adoptive parents try to learn the child's original language, they often learn the wrong one -- Mandarin instead of Fujianese, or Russian instead of Ukrainian, for instance). If caught at the right age, before they have learned much of their home language, they are excellent case studies in how language develops if you start at, say, 3 years old instead of Day 1.

The results of this study (full disclosure: I was not involved) are that these children seem to go through all the typical stages of language development, just much, much faster. They very quickly catch up to their American-born peers. That is good news for them, and it tells us a great deal about how language develops.



Davis, K. (1947). Final note on a case of extreme isolation. American Journal of Sociology, 52, 432-437.

Snedeker, J., Geren, J., Shafto, C. (2007). Starting over: International adoption as a natural experiment in language development. Psychological Science, 18(1), 79-87.

Babies learn a language by just listening to it. Can you?

Children seem to drink in language, while learning a language as an adult seems to be quite a challenge.

That said, it's not the case that you can't learn anything about language just by listening. A number of studies over the last few decades have shown that even adults can learn a certain amount of word and grammatical structure just by listening to a speech stream for a few minutes. The current focus of research is determining what, exactly, is learnable and what is not.

A lab-mate is currently running one such study online. I mentioned it on this blog a couple weeks ago, but then there were technical difficulties with the experiment, and I pulled the post. Here it is again.

The study does take 20 minutes or more, and you have to promise to pay attention and not do other things at the same time (which I hope is the case whenever you do a Web-based study!), but it's a good project and worth donating your time to. When he has results ready to publish, I'll be sure to post them here.

You can find the study here.

Sorry, New York Times, cognitive dissonance still exists

Earlier this week, New York Times columnist John Tierney reported a potential flaw in a classic psychology experiment. It turns out that the experimental finding -- cognitive dissonance -- is safe and sound (see below). But first, here are the basic claims:

Cognitive dissonance generally refers to changing your beliefs and desires to match what you do. That is, rather than working hard for something you like, you may believe you like something because you worked so hard for it. 

Laboratory experiments (of which there have been hundreds if not thousands) tend to be of the following flavor (quoted from the Tierney blog post). Have someone rate several different objects (such as different colored M&Ms) in terms of how much they like them. From that set of objects, choose three (say, red, blue and green) that the person likes equally well. Then let the person choose between two of them (the red and blue M&M). 

Presumably (and this will be the catch) the person chooses randomly, since she likes both equally. Say she chooses the red M&M. Then let her choose between red and green. You would predict that she would choose randomly, since she likes the two colors equally, but she nearly invariably will choose the red M&M. This is taken as evidence that her originally random choice of the red M&M actually changed her preferences, such that she now likes red better than either blue or green.

The basic problem with this experiment, according to M. Keith Chen of Yale and as reported by Tierney, is that we don't really know that the person didn't originally prefer red. She may have rated them similarly, but she chose red over blue. The math works out such that if she in fact already preferred red over blue, she probably also actually preferred red over green.
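
Chen's point is easy to see with a little simulation (my illustration of the general idea, not his actual derivation). Suppose each person has stable preferences with some trial-to-trial wobble that the rating scale is too coarse to detect, and suppose preferences never change. Simply selecting the people who chose red over blue still makes red-over-green more likely than chance:

```python
# Monte Carlo sketch of Chen's statistical point (my illustration,
# not his actual math). Preferences are stable but noisy; the coarse
# rating scale called all three colors "equal." No dissonance anywhere.
import random

random.seed(1)
kept = red_over_green = 0
for _ in range(100_000):
    # Each color's true appeal on a given trial: equal on average.
    red, blue, green = (random.gauss(0, 1) for _ in range(3))
    if red > blue:                  # keep only subjects who chose red over blue
        kept += 1
        red_over_green += red > green

print(f"P(red > green | red chosen over blue) = {red_over_green / kept:.2f}")
# Prints about 0.67, not 0.50 -- the bias appears without any change
# in preferences.
```

With three equally rated but noisily perceived options, the conditional probability works out to 2/3 rather than 1/2, which is exactly the kind of artifact Chen is worried about.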

Tierney calls this a "fatal flaw" in cognitive dissonance research and asks: "Choice rationalization has been considered one of the most well-established theories in social psychology. Does it need to be reconsidered?"

Short answer: No.

First, it is important to point out that Chen has shown that if the original preferences were measured incorrectly, then this type of experiment might suggest cognitive dissonance even where there is none. He does not show that the original measurements were in error. 

However, even if that were true, it would not mean that cognitive dissonance does not exist. This is a classic problem in logic. Chen's argument is of the following form: If Socrates is a woman, then he is mortal. Socrates is not a woman. Therefore, he is not mortal. That is the fallacy of denying the antecedent: showing that these particular experiments are flawed (Socrates is not a woman) would not show that cognitive dissonance does not exist (Socrates may be mortal all the same).

In any case, cognitive dissonance has been shown in studies that do not fall under Chen's criticisms. Louisa Egan and collaborators solved this problem by having their subjects choose between items they couldn't see. Since the subjects knew nothing about the items, they couldn't possibly have a pre-existing preference. Even so, they showed the classic pattern of results.

By all appearances in the Tierney article, Chen is unaware of this study (which, to be fair, has not yet been published): "I wouldn't be completely surprised if [cognitive dissonance] exists, but I've never seen it measured correctly." This is hard to believe, since Chen not only works at the same university as Egan, he is a close collaborator of Laurie Santos (Egan's graduate advisor). It's not clear why he would neglect to mention this study, particularly since this blanket critique of cognitive dissonance research in the New York Times is embarrassing to Egan and Santos at a time when Egan is on the job market (and it appears to have a lot of people upset).

Thus, it's puzzling that Chen claims that no existing study unambiguously shows cognitive dissonance. He might, however, be able to make the weaker claim that it is possible that some studies that have been claimed to show cognitive dissonance in fact do not. That is a reasonable claim and worth testing. In fact, Chen reports that he is testing it now. It is worth keeping in mind that, for the time being, Chen has only an untested hypothesis. It's an intriguing and potentially valuable hypothesis, but there isn't any evidence yet that it is correct.

See the original article here.

What is the first language?

Linguists debate whether all languages are descended from a common ancestor. This can't be completely true, since many sign languages have been invented out of whole cloth in modern times (Nicaraguan Sign Language is a famous example), as was, to a meaningful extent, Hawaiian Pidgin.

However, students of history know from the ancient Greek historian Herodotus that Phrygian is the first language. According to his writings, an Egyptian king by the name of Psammetichus ordered that two children "of the ordinary sort" be raised in an isolated cabin without exposure to language. At the age of two or so, the children began to speak Phrygian, which was taken as proof that Phrygian, not Egyptian, is the world's earliest language.

This study is a great example of why experiments need to be replicated before they are taken too seriously.

A new use for Facebook in research

Both sociologists and market researchers have been using Facebook for a few years now in order to study human activity. I recently came across a new use: developing stimuli.

A colleague is using a Facebook group to collect photographs for a face perception/memory study. Read about it here.

If words have definitions, they have odd definitions

Last night, when the KU-Memphis NCAA basketball championship went into overtime, one of the announcers remarked, "Kansas knows what it's like to be in an overtime championship game."

This struck me as an odd statement, since they had mentioned only a few minutes earlier that there hadn't been an overtime in an NCAA basketball championship since 1997. A few moments later, we learned that the announcer was referring to a triple-overtime game in the 1950s.

The 1950s! There may have been some in the audience who remember that game, but I doubt anybody directly associated with the KU basketball team does.

You may be willing to view this as announcers spewing nonsense as they usually do, but it's actually an example of an incredibly odd feature of language (or, perhaps, of the way we think). Chomsky's favorite example is London, which has existed for a thousand years, during which nearly every pebble has been replaced. You could tear down London and rebuild it across the Thames a thousand years from now, and it would still be London.

More colloquially, there is a joke about a farmer who says, "This is an excellent hammer. I've had it for years. I've only had to replace the head three times and the handle twice."

This problem applies to people as well. It turns out (I don't remember where I read this, unfortunately) that every molecule in your body is replaced every few years, such that nothing that is in your body now was in it a decade ago. Yet you still believe you are the same person.

Clearly, humans are comfortable with the notion that the object remains constant even if all the parts change. This interestingly squares well with work in developmental psychology which suggests that infants recognize objects based on spatial cohesiveness (objects don't break apart and reform) and spatial continuity (objects don't disappear and reappear elsewhere). However, they are perfectly comfortable with objects that radically change shape -- for instance, from a duck into a truck. It isn't until about the time that children begin to speak that they expect ducks to stay ducks and trucks to stay trucks.

Publishing papers is slow

The bread and butter of scientific communication is the peer-reviewed journal. For those who are not familiar with the process, when a scientist (me for instance) wants to report some data, he writes it up and sends it to a journal. The editors of the journal ask a few other scientists who are experts in the same field to read the article and decide if it's any good.

This process has been criticized for being arbitrary and for being unable or unwilling to catch fraud. For all that, I honestly believe that peer review serves to improve the quality of papers. At least in psychology, it is rare for a journal to accept a paper on the first round. Instead, the reviewers suggest changes and additional experiments. Since they are experts in the field and bring a fresh eye to the problem, they often have good ideas.

There is one issue with peer review, however, that drives me nuts: how long the process takes. In January, a collaborator and I submitted a short paper to a journal that promises extra-fast reviews of short papers. Three months later, we received our expected rejection along with suggestions from the reviewers.

The thing is, in the three months that have passed, we've gotten busy with other things. I had to reread the paper a few times because I had forgotten all the details (for some reason, January feels like it was years ago). I spent the last week figuring out how to edit the experiment software, because it required some fancy programming that I had forgotten how to do.

Without further complaining, I'd like to announce the re-launch of The Video Experiment. If you have already participated (this is the only experiment I have ever run that involved a video), please do not participate in this version.* First of all, you'll be bored, because this is only a slight variation on the old experiment, and the video is the same. But more importantly, knowing what the experiment is about could affect your results.

That said, if you've never participated in the video experiment -- if you've never seen the "Bill et John" video or the "Kiwi" bird animation, you haven't participated -- please do so. It only takes 5-7 minutes, and it's easily the most entertaining experiment I've run online. Plus you get to see your own results at the end. With any luck, I can collect all the data we need within a few weeks, and then we can resubmit this paper.



*If you really want to participate, go ahead, but be sure to mark on the demographic form that you participated in the past.

Do words have definitions?

Defining a word is notoriously difficult. Try to explain the difference between hatred and enmity, or define chair in such a way that includes bean bag chairs but excludes stools.

This is an annoyance for lexicographers and a real headache for philosophers and psychologists. Several centuries ago, British philosophers like Hobbes worked out what seemed like a very reasonable theory that explained human knowledge and how we acquire it. However, this system is based on the idea that all words can be defined in terms of other words, except for a few basic words (like blue) which are defined in terms of sensations.

This difficulty led at least one well-known philosopher, Jerry Fodor, to declare that words cannot be defined in terms of other words, because word meaning does not decompose into parts the way a motorcycle can be disassembled and reassembled. You can't define chair as an artifact with legs and a back created for sitting in, because a chair is not the sum of its parts. The problem with this theory is that it makes learning impossible. Fodor readily acknowledges that if he is correct, babies must be born with the concepts airplane and video tape, and in fact all babies who have ever been born were born with every concept that ever has or ever will exist.

This seems unlikely, but Fodor is taken seriously partly because his arguments against definitions have been pretty convincing.

Ray Jackendoff, a linguist at Tufts University, argued in his recent Foundations of Language that words do in fact have definitions. However, those definitions themselves are not made up of words composed into sentences.
Observing (correctly) that one usually cannot find airtight definitions that work all of the time, Fodor concludes that word meanings cannot be decomposed. However, his notion of definition is the standard dictionary sort: a phrase that elucidates a word meaning. So what he has actually shown is that word meanings cannot be built by combining other word meanings, using the principles that also combine words into phrases. (p. 335)
That is, there are ways that words can be combined in sentences to achieve meaning that is greater than the sum of the meanings of the words (compare dog bites man to man bites dog). This is called phrasal semantics. Although linguists still haven't worked out all the rules of phrasal semantics, we know that there are rules, and that these allow for certain combinations and not others.

Jackendoff has proposed that a very different system (lexical semantics) using different rules is employed when we learn the meanings of new words by combining little bits of meaning (that themselves may not map directly on to any words).

I think that this is a very attractive theory, in that it explains why definitions have been so hard to formulate: we were using phrasal semantics, which is just not equipped for the task. However, he hasn't yet proven that words do have definitions in terms of lexical semantics. He has the sketch of a theory, but it's not yet complete.

Field psychology

Most psychology experiments are performed in a laboratory setting. This leads critics to wonder about their ecological validity: just because somebody acts one way in the lab, do we know that is how they would act in real life?

There is another problem, summarized very nicely in a recent paper by Sugiyama and colleagues. In a footnote explaining some "oddities" in the results of their study of one of the world's most remote civilizations (the Shiwiar of the Amazon), they note:

Experimentation under field conditions injects higher levels of error variance into results than are obtainable under well-controlled laboratory conditions. More significant than factors such as added distractions, interruptions, and language difficulties is the extreme cultural strangeness of experimental testing itself, with its unfamiliar necessity of adhering to formal, abstract, and seemingly arbitrary behavioral and communicative constraints. Shiwiar subjects had no prior experience with experimental test-taking situations. This situation introduces confusion into the communicative pragmatics inherent in the task situation, and error variance into results. Restricting one's responses to the question explicitly asked, and ignoring information (such as who may be exhibiting generosity to whom) that is relevant to real life but not to a test problem, is a skill one learns in classrooms and courtrooms.

I have run into this in less exotic locales than the Amazon. As part of my ongoing study of reading, I have tested a number of native Chinese speakers who reside in the US -- mostly graduate students at Harvard. Even though these were smart, well-educated people, a number of them had great difficulty understanding how to do the experiment. Colleagues of mine who study visual perception have had similar difficulties when dealing with Chinese participants. The fact is that psychology experiments are relatively new and relatively rare in Asia, and so fewer people are familiar with what to do. No doubt we American scientists also design our experiments in ways that are culturally familiar to us (and thus not to the Chinese).


Sugiyama, L.S., Tooby, J., Cosmides, L. (2002). Cross-cultural evidence of cognitive adaptations for social exchange among the Shiwiar of Ecuadorian Amazonia. Proceedings of the National Academy of Sciences of the United States of America, 99(17), 11537-11542.

Who gets National Science Foundation fellowships?

The National Science Foundation awards around 900 graduate fellowships each year across a wide variety of sciences, including everything from linguistics and mathematics to physics. These fellowships are a big deal: they are both very hard to get and make a significant difference to the finances of the awardees.

NSF has not yet officially contacted awardees for 2008, but word is spreading rapidly. Last night, some enterprising, hopeful applicants hacked the NSF website to get the list of awardees. By morning, a number of applicants had logged on to the NSF applicant website and found an "accept fellowship" link on their applicant homepage. A little later in the morning, the list was made available on the website, though the page itself claimed that the awards list was still not available (that has now, as of this afternoon, been fixed).

So, which universities cleaned up? This is an incomplete survey of the 913 awards made:

Berkeley: 87
Stanford: 58
MIT: 40
Harvard: 36
University of Washington: 25
Cornell: 23
University of Michigan: 22
Princeton: 21
Columbia: 19
Yale: 18
UC-San Francisco: 17
Northwestern: 17
UT-Austin: 16
CalTech: 16
Rice: 14
University of Wisconsin-Madison: 13
University of Chicago: 12
Duke: 12
Carnegie Mellon: 11
University of Florida: 11
UCLA: 10

This doesn't list universities that got fewer awards, and it also doesn't account for 73 entering graduate students who did not list what university they will be attending, or any number of entering graduate students who haven't made up their minds and may switch universities. But it is a rough count.

What matters most about the list, though, is that Oberlin beat Swarthmore 5 to 3.