Field of Science

Why languages can't be learned

One of the most basic, essentially undisputed scientific facts about language -- and the one that tends to get the most interest from laypeople -- is that while learning a foreign language as an adult is very difficult, children learn their native languages with as much ease and proficiency as they learn to walk. This has led researchers such as Steven Pinker to call language learning an "instinct." In fact, this "instinct" is more than remarkable -- it's miraculous. On careful reflection, it seems impossible to learn even a single word in any language, much less an entire vocabulary (and so figuring out how we nonetheless all manage to learn a language is a major area of research).

The paradox goes back to W. V. O. Quine (who, I'm proud to say, is a fellow Obie), who suggested this thought experiment: Suppose you are an anthropologist trying to learn the language of a new, previously undiscovered tribe. You are out in the field with a member of the tribe. Suddenly, a rabbit runs by. The tribesperson points and says, "Gavagai!"

What do you make of this? Most of us assume that "gavagai" means "rabbit," but consider the possibilities: "white," "moving whiteness," "Lo, food," "Let's go hunting," or even "there will be a storm tonight" (suppose this tribesperson is very superstitious). Of course, there are even more exotic possibilities: "Lo, a momentary rabbit-stage" or "Lo, undetached rabbit parts." Upon reflection, there are an infinite number of possibilities. Upon further reflection (trust me on this), you could never winnow away the possibilities and arrive at the meaning of "gavagai" ... that is, never unless you are making some assumptions about what the tribesperson could mean (for instance, that definitions involving undetached rabbit parts are too unlikely to even consider).

Quine offered this thought experiment in a discussion about translation, but it clearly applies to the problems faced by any infant. To make matters worse, people rarely name objects in isolation -- parents don't say "bunny," they say "Look, see the bunny?" or "Look at that bunny go!"

Generally, it should be very clear that infants could not learn a language if they didn't make certain assumptions about which words meant what. One of the major areas of modern psycholinguistics is figuring out what those assumptions are and where they come from (that is, are they innate or are they learned?).
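To make that idea concrete, here is a toy sketch of my own (the scenes, the candidate meanings, and the crude "whole-object" filter are all invented for illustration, not taken from any real experiment) showing why simply tracking which candidate meanings survive across situations isn't enough on its own: some assumption has to rule out the gavagai-style oddities first.

```python
# Toy illustration (invented for this post): cross-situational word learning.
# Each "scene" pairs a heard word with the candidate meanings it could have had.
# Note that we've already cheated by making the hypothesis space finite --
# Quine's point is that, without assumptions, it isn't.

scenes = [
    ("gavagai", {"rabbit", "animal", "white", "undetached rabbit parts", "storm tonight"}),
    ("gavagai", {"rabbit", "animal", "brown", "undetached rabbit parts", "lo, food"}),
    ("gavagai", {"rabbit", "animal", "undetached rabbit parts", "let's go hunting"}),
]

def learn(scenes, constraint=None):
    """Keep only the candidate meanings that survive every scene."""
    surviving = None
    for word, candidates in scenes:
        if constraint is not None:
            candidates = {m for m in candidates if constraint(m)}
        surviving = candidates if surviving is None else surviving & candidates
    return surviving

# With no assumptions, observation alone cannot separate "rabbit" from
# co-extensive oddities like "undetached rabbit parts".
print(sorted(learn(scenes)))
# ['animal', 'rabbit', 'undetached rabbit parts']

# A crude stand-in for a "whole-object" bias: rule out part-based and
# event- or sentence-like meanings before intersecting.
whole_object_bias = lambda m: "parts" not in m and " " not in m
print(sorted(learn(scenes, whole_object_bias)))
# ['animal', 'rabbit']
```

Even with the bias in place, "rabbit" and "animal" remain tied, which is exactly the sort of residual ambiguity that further assumptions would have to resolve.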

Long-time readers know that the major focus of my research is on how people resolve ambiguity in language. My first web-based experiment on this topic has been running for a while. Last week I posted a new experiment. Participants hear sentences like "Show me the dax" and try to guess which of several new objects might be the "dax." As usual, I can't say much about the purpose of the experiment while it's still running, but participants who finish the experiment will get an explanation of the experiment and also will get to see their own results. You can try it by clicking here.

How does the brain read?

Reading is an important skill, so it's not surprising that it gets a lot of attention from researchers. Reading is an ancient skill -- at least in some parts of the world -- but not so old that we don't know when it was invented (as opposed to, for instance, basic arithmetic). And, unlike language, reading appeared recently enough in most of the world that evolution is unlikely to have had time to select for reading skill... which may help explain the high prevalence of dyslexia.

Some decades ago, there was a considerable amount of debate over whether reading is phonologically based -- that is, "sounding out" is crucial (CAT -> /k/ + /æ/ + /t/ -> /kæt/) -- or visual-recognition based -- that is, you simply recognize each word as a whole form (CAT -> /kæt/). People who favored the former theory emphasized phonics-based reading instruction, while the latter theory resulted in "whole language" training.
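As a caricature of the difference (my own sketch, not anything from the reading literature -- the letter-to-sound table and the two-word lexicon are invented and wildly oversimplified), the two theories amount to two different lookup strategies:

```python
# Caricature of the two proposed reading routes (invented example, not real data).

# "Sounding out": a tiny, made-up letter-to-sound table. Real English
# spelling-to-sound rules are far messier than one letter = one phoneme.
G2P = {"c": "k", "a": "æ", "t": "t", "b": "b", "d": "d", "o": "ɒ", "g": "g"}

# Whole-form recognition: each known spelling maps directly to a stored pronunciation.
LEXICON = {"cat": "kæt", "dog": "dɒg"}

def read_by_phonics(word):
    """Decode letter by letter; generalizes to words never seen before."""
    return "".join(G2P[letter] for letter in word.lower())

def read_by_whole_form(word):
    """Recognize the whole word; fails on anything not already stored."""
    return LEXICON.get(word.lower())

print(read_by_phonics("cat"))       # kæt
print(read_by_phonics("bat"))       # bæt -- a decoder can handle a new word
print(read_by_whole_form("cat"))    # kæt
print(read_by_whole_form("bat"))    # None -- whole-form lookup cannot
```

The cartoon makes one relevant point obvious: only the decoding route does anything useful with a word it has never seen before, which is part of why phonological skill matters so much for learning to read.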

At least from where I sit, this debate has been largely resolved in favor of phonics. This isn't to say that skilled readers don't recognize some high-frequency words as wholes, but it does mean that sounding out words is crucial, at least in learning to read. One important piece of evidence is that "phonological awareness" -- the ability to figure out that CAT has 3 sounds while COLON has 5, or that DOG and BOG rhyme -- is just about the best predictor of reading success. That is, preschoolers who are at the bottom of the pack in terms of phonological awareness tend to end up at the bottom of the pack when learning to read.

At least, that is the story for writing systems like English that are alphabetic. There has been some question as to the role of phonology in learning to read character-based systems like Chinese. Two years ago, a group including Li Hai Tan of Hong Kong University presented evidence that in fact phonological awareness may not be particularly important in learning to read Chinese.

I have been trying to test one aspect of their theory for some time. Not having collaborators in China or Taiwan, I have to recruit Chinese speakers here in Cambridge, which is harder than you might think. The first experiment I ran took nearly six months, most of which was spent trying to recruit participants, and it was ultimately inconclusive. Last spring I piloted a Web-based version of the experiment, thinking that I might have more luck finding Chinese participants through the Internet. However, that experiment failed; I think it was too complicated, and participants didn't understand what to do.

I have spent the last few months thinking the problem through, and now I have a new Web-based study. I am trying it in English first, and if it works well enough, I will write a Chinese version of the experiment. If you are interested, please try it out here.

Knowing the meanings of words

In “On the evolution of human motivation: the role of social prosthetic systems,” Stephen Kosslyn makes a very interesting conjecture about social interactions. He argues that, for a given person, “other people serve as prosthetic devices, filling in for lacks in an individual’s cognitive or emotional abilities.” This part seems hard to argue with. Intuitively, we all rely on other people to do certain things for us (mow our grass, edit our papers, provide us with love). His crucial insight is that “the ‘self’ becomes distributed over other people who function as long-term social prosthetic systems.”

You may or may not agree with that stronger claim. I haven't made up my own mind yet. I recommend reading the paper itself, which unfortunately is not available on his website but should be available in a decent college library.

There is one interesting application of his idea to an old problem in linguistics and philosophy.
What is the problem? Intuitively, we would like to believe that our words pick out things in the world (words and concepts are not interchangeable, but for the purposes of this discussion they pose the same problems). When I say “cows produce milk,” I like to believe that this sentence is either true or false in the world. For this to even be plausible, we have to assume that the words “cow” and “milk” refer to sets of real, physical objects.

This is problematic in myriad ways. It is so full of paradoxes that Chomsky has tried to define away the problem by denying that words refer to anything in the world. I will focus on one particular problem that is relevant to the Kosslyn conjecture.

If you are like me, you know nothing about rare plants such as the three-seeded mercury or the Nova Scotia false-foxglove. Yet, we are able to have conversations about them. I can tell you that both are endangered in the state of Maine, for instance. If I tell you that they both survive on pure boron, you would probably be skeptical. Thus, we can talk about these plants, make empirical claims about them, and learn new things about them without having any idea what these words actually pick out in the world. This is true of a large number of things we talk about on a daily basis. We talk about people we have never met and places we have never been.

What distinguishes these words from words that truly have no reference? Likely, neither the word “Thistlewart” nor the word “Moonwart” means anything to you. Now, suppose I tell you that the first is a made-up plant, while the second is a real plant. To you, these are still both essentially empty words, except that one refers to something in the world (though you don’t know what) and the other doesn’t.

Intuitively, what makes “Thistlewart” an empty concept and “Moonwart” not is that you believe there is some expert who really does know what a Moonwart is and could pick one out of a lineup. This “Expert Principle” has seemed unsatisfying to many philosophers, but within the context of the “social prosthetic system” theory, it seems quite at home. Certainly, it seems like it might at least inform some of these classic problems of reference and meaning.

Wait -- are you suggesting your brain affects your behavior?

One of my office-mates burst out laughing on Monday after receiving an email. The email was a forward, but it wasn't intended to be funny. It was a brief news blurb about a recent study looking at teenage impulsiveness, entitled "Teens' brains hold key to their impulsiveness."

What's funny about that? Well, where did the journalist think the key to impulsiveness was hidden -- in teens' kidneys? Many scientists puzzle over the fact that 150 years of biology have not driven out Creationism, but 150 years of psychology and neuroscience have been even less successful at driving out mind/brain dualism. Many people -- probably most -- still believe in it.

Philosophers began suggesting that all human behavior is caused by the physical body at least as early as Thomas Hobbes in the 1600s. A century and a half of psychology and neuroscience has found no evidence of an immaterial mind, and now the assumption that all behavior and thought is caused by the physical body underlies essentially all modern research. It's true that nobody has proved that immaterial minds do not exist, but similarly nobody has ever proved the nonexistence of anything. It just seems very unlikely.

This leads to an interesting dichotomy between cognitive scientists and the general public. While journalists get very excited about studies that prove some particular behavior is related to some particular part of the brain, many cognitive scientists find such studies to be colossal wastes of time and money. It would be like a physicist publishing a study entitled "Silicon falls when dropped." Maybe nobody ever tested to see whether silicon falls when dropped, but the outcome was never really in doubt. 

This isn't to say that the study I mentioned above wasn't useful; I have no doubt that it was. Determining mechanistically which changes in which parts of the brain during development affect impulsiveness is very informative. The mere fact that the brain changes during development, and that this affects our behavior, is not.

Scientists arguing about the scientific method

The scientific method should be at least passingly familiar to most people who took a high school science class. Generate a hypothesis, then design an experiment that will either support or contradict your hypothesis. A more nuanced version is to find two competing hypotheses, then design an experiment that will unambiguously support at most one of those two hypotheses.

But is this what scientists actually do? Is it what scientists should do?

This question was put to us by Ken Nakayama in our first-year graduate psych seminar last week. Though it may surprise some of you, his answer was "no." In contrast to theory-driven research (the proposal above), Nakayama prefers data-driven research.

Although there are some good descriptions and defenses of theory-driven research, I don't know of one for data-driven research. Here's my best effort at describing the two.

Suppose you are a tinkerer who wants to know how a car works. If you are a theory-driven tinkerer, you would start with competing hypotheses (that tube over there connects the gas tank to the engine vs. that tube over there is part of an air circulation system) and conduct experiments to tease those hypotheses apart. The theory-driven tinkerer will focus her energies on experiments that best tease apart the most important theories, ignoring phenomena that aren't theoretically important.

A data-driven tinkerer would say, "I wonder what happens if I do this," do it, and see what happened. That is, she may run experiments without having any hypotheses about the outcome, just to see what happens. If the data-driven tinkerer's attention is caught by some odd phenomenon (the car seems to run better in the afternoon than in the morning), she may pursue that phenomenon regardless of whether it seems theoretically interesting or helps distinguish between competing hypotheses. 

One potential reason to favor data-driven research is that theory-driven research is constrained by our theories (which, at this stage in psychology and cognitive neuroscience, frankly aren't very good), while data-driven research is constrained only by your imagination and skill as an experimentalist. Data-driven exploration, one might argue, is more likely to lead to surprising discoveries, while theory-driven research may only show you what you expected to see.

I suspect that most psychologists use some combination of the two strategies, though when it comes time to write a paper, it seems to be easier to publish data that is relevant to theory (whether it was theory that led you to do the experiment in the first place is another question).

Thoughts?