Field of Science

I say "uncle", you say "DaJiu"

Kinship terms (mother, uncle, niece, etc.) are socially important and generally learned early in acquisition. Interestingly, different languages have different sets of terms. Mandarin, for instance, divides "uncle" into "father's older brother", "father's younger brother", and "mother's brother".
Stranger things (to an anglophone, anyway) happen, too: in Northern Paiute, the kin terms for grandparents and grandchildren are self-reciprocal: you would use the same word to refer to your grandmother (if you are female) that she uses to refer to you. (See my previous post on "mommy" across languages.)

Kinship terms in English and Northern Paiute. Ignore all the logical terms for now.
(Figure taken from Kemp & Regier, 2012)

Even so, there are a lot of similarities across languages. Disjunctions are relatively rare; that is, it's unusual to see a word that means "father or cousin". Usually there are more words to distinguish varieties of closely-related relatives (sister, brother) than distant relatives (cousin). How come? One obvious answer is that maybe the kinship systems we have are just better than the alternatives (ones with words like "facousin" = "father or cousin"), but it would be nice to show this.

Optimal Kinship Terms

In a paper earlier this year, Charles Kemp and Terry Regier did just that.
We show that major aspects of kin classification follow directly from two general principles: Categories tend to be simple, which minimizes cognitive load, and to be informative, which maximizes communicative efficiency ... The principles of simplicity and informativeness trade off against each other... A system with a single category that includes all possible relatives would be simple but uninformative because this category does not help to pick out specific relatives. A system with a different name for each relative would be complex but highly informative because it picks out individual relatives perfectly. 
That seems intuitively reasonable, but these are computational folk, so they formalized it with math. The details are in the paper, but roughly: they formalize complexity as minimum description length in a representational language built from primitives like FEMALE and PARENT. The descriptions of the various terms in English and Northern Paiute are shown in parts C and D of the figure above. Informativeness is formalized by measuring how ambiguous each term is (how many people it could potentially refer to).

A language is considered "better" than another if it out-scores the other on one dimension (e.g., simplicity) and does no worse on the other (informativeness). A language is near-optimal if hardly any possible language is better. They looked at a number of existing kinship systems (English, Northern Paiute, and a bunch of others) and found that all of them were near-optimal.
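The trade-off criterion can be sketched concretely. Below is a minimal sketch, assuming made-up (complexity, communicative-cost) scores for some hypothetical kinship systems; the numbers are purely illustrative, not Kemp and Regier's actual measures:

```python
# Sketch of the simplicity/informativeness trade-off (hypothetical scores).
# Each system gets a complexity score (think: description length) and a
# communicative cost (ambiguity); lower is better on both dimensions.

def dominates(a, b):
    """System a is 'better' than b if it is at least as good on both
    dimensions and strictly better on at least one."""
    return (a[0] <= b[0] and a[1] <= b[1]) and (a[0] < b[0] or a[1] < b[1])

def pareto_optimal(systems):
    """Return the systems not dominated by any other system."""
    return [s for s in systems
            if not any(dominates(t, s) for t in systems if t is not s)]

# Hypothetical (complexity, communicative cost) pairs:
systems = {
    "one-word-for-everyone": (1, 10),   # simple but uninformative
    "name-every-relative":   (10, 1),   # informative but complex
    "english-like":          (4, 3),
    "facousin-language":     (6, 8),    # needlessly complex AND ambiguous
}
frontier = pareto_optimal(list(systems.values()))
print((6, 8) not in frontier)  # True: the "facousin" system is dominated
```

A real system counts as near-optimal when it sits on or very close to this frontier.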

Evolution, Culture, or Development?

There are generally three ways of explaining any given behavior: evolution (we evolved to behave that way), culture (culture -- possibly through cultural evolution -- made us that way), or development (we learned to behave that way). For instance, it's rare to find people who chiefly eat arsenic. This could be because of evolution (we evolved to avoid arsenic because the arsenic-eaters don't have children and pass on their genes), cultural evolution (cultures that prized arsenic-eating all died out, leaving the non-arsenic cultures as the only game in town), or development (we learned as children, through trial and error, that eating arsenic is a bad idea). If I remember my Psych 101, food preferences actually involve all three.

What about kinship terms? If they are optimal, who do we credit with their optimality? Probably not development (we don't each individually create optimal kinship terms in childhood). Kemp and Regier seem to favor cultural evolution: over time, more useful kinship terms stuck in the lexicon of a given language and useless ones like "facousin" died out. It would be nice to show, however, that it is not actually genetic. This wouldn't have to be genes for kinship terms, but it could be genes that bias you to learn naming systems that are near-optimal (kinship naming systems or otherwise). One would need to show that these arose for language and not just cognition in general.

------
Kemp, C., and Regier, T. (2012). Kinship categories across languages reflect general communicative principles. Science, 336(6084), 1049-1054. DOI: 10.1126/science.1218811

Still testing...

I was hoping to post the results of That Kind of Person today. When I announced the study two weeks ago, I estimated that it would take about two weeks to get enough data. For some reason, traffic on the site plummeted late last week.

So maybe one more week. As soon as I know the results, you will, and since this is (please let it be) the last experiment (#8!) for a paper, I am checking the numbers constantly. Many thanks to those who have already participated (those who haven't, you can find the experiment here; it shouldn't take more than 5 minutes).

Findings: Linguistic Universals in Pronoun Resolution - Episode II

A new paper, based on data collected through GamesWithWords.org, is now in press (click here for the accepted draft). Below is an overview of the paper.

Many of the experiments at GamesWithWords.org have to do with pronouns. I find pronouns interesting because, unlike many other words, the meaning of a pronoun is almost entirely dependent on context. So while "Jane Austen" refers to Jane Austen no matter who says it or when, "I" refers to a different person, depending mostly on who says it (but not entirely: an actor playing a part uses "I" to refer not to himself but to the character he's playing). Things get even hairier when we start looking at other pronouns like "he" and "she". This means that pronouns are a good laboratory animal for investigating how people use context to help interpret language.

Mice make lousy laboratory animals for studying the role of context in language.
Pronouns are better.

I have spent a lot of time looking at one particular contextual effect, originally discovered by Garvey and Caramazza in the mid-70s:

(1) Sally frightens Mary because she...
(2) Sally loves Mary because she...

Although the pronoun is ambiguous, most people guess that she refers to Sally in (1) but Mary in (2). That is, the verb used (frightens, loves) seems to affect pronoun resolution. Replace "frightens" and "loves" with other verbs, and what happens to the pronoun depends on the verb: some verbs lead to subject resolutions like frightens, some to object resolutions like loves, and some leave people unsure (that is, they think that either interpretation of the pronoun is equally reasonable).

The question is why. One possibility is that this is some idiosyncratic fact about the verb. Just as you learn that the past tense of walk is walked but the past tense of run is ran, you learn that some verbs lead you to resolve pronouns to the verb's subject, some to the verb's object, and some have no preference. This was what was tentatively suggested in the original Garvey and Caramazza paper.

Does the meaning of the verb matter?

One of the predictions of this account is that there's nothing necessary about the fact that frightens leads to subject resolutions whereas loves leads to object resolutions, just as there is no deep reason that run's past tense is ran. English could have been different.

Many researchers have suspected that the pronoun effects we see are not accidental but arise from some fundamental aspect of the meanings of frightens and loves. Even Garvey and Caramazza suspected this, though they were able to rule out every hypothesis they considered. Recently, using data from GamesWithWords.org, we presented some evidence that this is right. Interestingly, while researchers studying pronouns were busy trying to come up with a theory of verb meaning that would explain the pronoun effects, many semanticists were independently busy trying to explain verb meaning for entirely different reasons. Usually, they are interested in explaining things like verb alternations. So, for instance, they might notice that verbs for which the subject experiences an emotion about the object:

(3) Mary likes/loves/hates/fears John.

can take "that" complements:

(4) Mary likes/loves/hates/fears that John climbs mountains.

However, verbs for which the object experiences an emotion caused by the subject do not:

(5) Mary pleases/delights/angers/frightens John.
(6) *Mary pleases/delights/angers/frightens that John climbs mountains.

[The asterisk means that the sentence is ill-formed in English.]

Linguists working on these problems have put together lists of verbs that have similar meanings and can be used in the same ways. (VerbNet is the most comprehensive of these.) Notice that in this work, "please" and "frighten" end up in the same group as each other, while "like" and "fear" are in a different one: even though "frighten" and "fear" describe similar emotions, they have a very different structure in terms of who -- the subject or the object -- feels the emotion.

We took one such list of verb classes and showed that it explained the pronoun effect quite well: Verbs that were in the same meaning class had the same pronoun effect. This suggests that meaning is what is driving the pronoun effect.
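The logic of that analysis can be sketched in a few lines. This is a toy illustration, assuming made-up subject-resolution rates (the verbs' class labels follow the experiencer-subject/experiencer-object distinction above, but the numbers are invented, not our actual data): grouping judgments by semantic class should show that class membership, not the individual verb, predicts the bias.

```python
# Toy sketch of a class-based analysis of pronoun-resolution judgments.
# Each entry: (verb, semantic class, proportion of participants who
# resolved the pronoun to the subject). Illustrative numbers only.
from collections import defaultdict

judgments = [
    ("frighten", "experiencer-object", 0.85),
    ("scare",    "experiencer-object", 0.82),
    ("terrify",  "experiencer-object", 0.88),
    ("love",     "experiencer-subject", 0.20),
    ("fear",     "experiencer-subject", 0.15),
    ("hate",     "experiencer-subject", 0.22),
]

by_class = defaultdict(list)
for verb, verb_class, subj_rate in judgments:
    by_class[verb_class].append(subj_rate)

# Verbs within a class cluster together; the classes pull apart.
for verb_class, rates in by_class.items():
    print(verb_class, round(sum(rates) / len(rates), 2))
```

If the effect were an idiosyncratic fact about each verb, the within-class rates would scatter rather than cluster.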

Or does it?

If the pronoun effect is driven by the meaning of a verb, then it shouldn't matter what language that verb is in. If you have two verbs in two languages with the same meaning, they should both show the same pronoun effect.

We aren't the first to have thought of this. As early as 1983, Brown and Fish compared English and Mandarin. The most comprehensive study so far is probably Goikoetxea, Pascual, and Acha's mammoth study of Spanish verbs. The problem has been identifying cross-linguistic synonyms: does the Spanish word asustar mean frighten, scare, or terrify?
Is this orangutan scared, frightened or terrified? Does it matter?

Once we showed that frighten, scare and terrify all have the same pronoun effect in English, the problem disappeared. It no longer mattered what the exact translation of asustar or any other word was: Given that entire classes of verbs in English have the same pronoun effect, all we needed to do was find verbs in other languages that fit into the same class.

We focused on transitive verbs of emotion. These are the two classes already introduced: those where the subject experiences the emotion (like/love/hate/fear) and those where the object does (please/delight/anger/frighten) (note that there are quite a few of both types of verbs). We collected new data in Japanese, Mandarin and Russian (the Japanese and Russian studies were run at GamesWithWords.org and/or its predecessor, CogLangLab.org) and re-analyzed published data from English, Dutch, Italian, Spanish, and Finnish.

Results for English verbs (above). "Experiencer-Subject" verbs are the ones like "fear" and "Experiencer-Object" are the ones like "frighten". You can see that people were consistently more likely to think that the pronoun in sentences like (1-2) referred to the subject of Experiencer-Object verbs than Experiencer-Subject verbs.

The results are the same for Mandarin (above). There aren't as many dots because we didn't test as many of the verbs in Mandarin, but the pattern is striking.

The Dutch results (above). The pattern is again the same. Dutch has more of these verbs, but the study we re-analyzed tested only a few of them.

You can read the paper and see the rest of the graphs here. In the future, we would like to test more kinds of verbs and more languages, but the results so far are striking and suggest that the pronoun effect is caused by what verbs mean, not by some idiosyncratic grammatical feature of the language. There is still a lot to be worked out, though. For instance, we're now pretty sure that some component of meaning is relevant to the pronoun effect, but which component, and why?

------------
Hartshorne, J., and Snedeker, J. (2012). Verb argument structure predicts implicit causality: The advantages of finer-grained semantics. Language and Cognitive Processes, 1-35. DOI: 10.1080/01690965.2012.689305

Goikoetxea, E., Pascual, G., and Acha, J. (2008). Normative study of the implicit causality of 100 interpersonal verbs in Spanish. Behavior Research Methods, 40(3), 760-772. DOI: 10.3758/BRM.40.3.760

Garvey, C., and Caramazza, A. (1974). Implicit causality in verbs. Linguistic Inquiry, 5(3), 459-464.

Brown, R., and Fish, D. (1983). Are there universal schemas of psychological causality? Archives de Psychologie, 51, 145-153.

New Experiment: That Kind of Person

I just got back reviews on one of the pronoun papers. Although the paper already had seven experiments, they want two more. The worst part about it is that they are right.

Luckily, the experiment they asked for can be done online. It takes about 5 minutes. Native English speakers preferred (though I look at all data).

That Kind of Person (takes about 5 minutes)

My target is to post the results for this and the seven previous experiments in 2 weeks ... if I get enough participants quickly. Thank you in advance to everyone who participates.

Is Psychology a science?: Redux

The third-most read post on this blog is "Is Psychology a science?". I was a few years younger then and still had strong memories of one of my friends complaining, when we were both undergraduates, that he had to take a psychology course as part of his science distributional requirements. "Psychology isn't a science," he said, "because they don't do experiments." Since he was telling me this over AIM as I was sitting in my psychology laboratory, analyzing an experiment, it didn't go over well.

It's been a popular post, but I haven't written much about the subject since, in part because I started to suspect that the "psychology isn't a science" bias might actually be limited to ignorant undergraduates and a few cranks. I've rarely heard it in the last few years, and there's no need to write diatribes against a non-existent prejudice.

In retrospect, maybe I haven't come across these opinions because I mostly hang out with other psychologists. A colleague recently forwarded me this blog post ("Keep Psychology out of the science club"), which links to a few other similar pieces on blogs and in newspapers. So it seems the issue is alive and well.

Some articles one comes across are of the "psychologists don't do experiments" variety; these are easily explained by ignorance and an inability to use Google. But some folks raise some real concerns which, while I think they are misplaced, really are worth thinking about.


Psychology is too hard

One common theme that I came across is that psychology is simply too difficult. We'll never understand human behavior very well, so maybe we shouldn't even try. For instance, Gary Gutting, writing in the Opinionator at the New York Times, said:
Social sciences may be surrounded by the "paraphernalia" of the natural sciences, such as technical terminology, mathematical equations, empirical data and even carefully designed experiments. But when it comes to generating reliable scientific knowledge, there is nothing more important than frequent and detailed predictions of future events ... while the physical sciences produce many detailed and precise predictions, the social sciences do not ... Because of the many interrelated causes at work in social systems, many questions are simply "impervious to experimentation" ... even when we can get reliable experimental results, the causal complexity restricts us...
In a Washington Post editorial, Charles Lane wrote:
The NSF shouldn't fund any social science. Federal funding for mathematics, engineering and other "hard" sciences is appropriate. In these fields, researchers can test their hypotheses under controlled conditions; then those experiments can be repeated by others. Though quantitative methods may rule economics, political science and psychology, these disciplines can never achieve the objectivity of the natural sciences. Those who study social behavior -- or fund studies of it -- are inevitably influenced by value judgments, left, right, and center. And unlike hypotheses in the hard sciences, hypotheses about society usually can't be proven or disproven by experimentation. Society is not a laboratory.
Alex Berezow at the Newton Blog agrees:
Making useful predictions is a vital part of the scientific process, but psychology has a dismal record in this regard.
Is that a fair critique?

These writers don't entirely miss the mark. It really is true that psychology does not make as precise or as accurate predictions as, say, physics. That is not the same thing as saying that we can't make any predictions. Berezow complains about happiness research:
Happiness research is a great example of why psychology isn't a science. How exactly should "happiness" be defined? The meaning of the word differs from person to person, and especially between cultures. What makes Americans happy doesn't necessarily make Chinese people happy. How does one measure happiness? Psychologists can't use a ruler or a microscope, so they invent an arbitrary scale. Today, personally, I'm feeling about a 3.7 out of 5. How about you? ...  How can an experiment be consistently reproducible or provide any useful predictions if the basic terms are vague and unquantifiable?
That's a great question! Let's start with the facts. It is true that we don't know exactly what it means to be a 3.7 on a scale of 1-5. But we do know a few interesting things.

People's predictions of how happy they will rate themselves in the future are systematically biased. People say that good things (like getting tenure) will make them very happy (a 5 out of 5) and that bad things (like not getting tenure) will make them very sad (a 1 out of 5), but when you ask those same people to rate their happiness a little while after the event, they generally rate themselves as not nearly so happy or unhappy as they predicted. (Similarly, people who lose a limb usually rate themselves as about as happy afterwards as before, provided you give them a little time to adjust.) People who have children normally see a drop in how happy they rate themselves, and they only start to recover after their children leave the nest. There is also the "future anhedonia" effect: people think that good things (e.g., an ice cream sundae) will make them happier now (on our 1-5 scale) than the same good things would make them in the future, and conversely for bad things (e.g., doing my homework won't feel so bad if I do it tomorrow rather than today). And so on. (These and many other examples can be found in Dan Gilbert's Stumbling on Happiness.)

These and other findings are highly reliable, despite the fact that we don't have a direct, objective measurement of happiness. In fact, as Dan Gilbert has pointed out, we would only consider that "direct" measurement to be a measurement of happiness if it correlated really well with how happy people said they were. To the extent it diverged from how happy people claim to be, we would start to distrust the "direct" measurement.

I personally am glad that we know what we know about happiness, though I wish we knew more. I picked happiness to defend because I've noticed that even those who defend psychology in comments sections give up happiness research as a lost cause. I think it's pretty interesting, useful work. It would be even easier to defend, for instance, low-level vision research, which makes remarkably precise predictions, has clear theories of the relationship between the psychological phenomena and the neural implementations, etc. (See also this post for some psychology success stories.)

Just how good do you need your predictions to be?

Still, it is true that we can't always make the precise predictions that can be made in some other fields. Of course, other fields can't always make precise predictions, either. While physicists are great at telling you what will happen to rigid objects moving through vacuums, predicting the motion of real objects in the real world has traditionally been a lot harder, and understanding fluid dynamics has been deeply problematic (though I understand this has been getting a lot better in recent years). And that's without pulling out the Heisenberg Uncertainty Principle, which should cause anyone who wants precise, deterministic predictions to declare physics a non-science.

Also, some parts of psychology are able to make much more precise predictions than others do. Anything amenable to psychophysics tends to be much more precise, and vision researchers, as already noted, have remarkably well worked-out theories of low- and mid-level vision.

This line of discussion also raises an interesting question: when exactly did physics become a science? Was it a science in Newton's day, when we still knew squat about electromagnetism -- much less elementary particles -- and couldn't make even rough predictions about turbulent air or fluid systems? And to people 350 years from now, will the physics of today seem like a "real" science? (My guess: no.)

Worries

Berezow ends his post with the following caution:
To claim [psychology] is a "science" is inaccurate. Actually, it's worse than that. It's an attempt to redefine science. Science, redefined, is no longer the empirical analysis of the natural world; instead, it is any topic that sprinkles a few numbers around. This is dangerous, because, under such a loose definition, anything can qualify as science. And when anything qualifies as science, science can no longer claim to have a unique grasp on secular truth.
I have a different worry. My worry is that someone gets ahold of a time machine, goes back in time to 1661 and convinces Newton to lay off that non-scientific "physics" crap. Pre-Newtonian physics was a hodgepodge of knowledge, little resembling what we think of science today. Making precise predictions about the messy, physical world we live in no doubt seemed an impossible pipe-dream to many. Luckily, folks like Newton kept plugging away, and three and a half centuries later, here we are.

We should keep in mind that the serious study of the mind only began in the mid-1800s; physics has a significant head-start. And, as the anti-psychology commentators are happy to point out, psychology is much, much harder than physics or chemistry. But the only reason I can see to pull the plug is if we are sure that (a) we have learned nothing in the last 150 years, and (b) we will never make any further progress. These are empirical claims and so subject to test (I think the first one has already been falsified). So here's a proposed experiment: psychologists keep on doing psychology, and people who don't want to don't have to. And we'll wait a few decades and see who knows more about the human mind.

What you missed lately on the Web: 11/12/2012

I've switched the title of these posts from "last week" to "lately", since apparently posting every week is too ambitious (last Monday my excuse was the hurricane + BUCLD, but that wasn't the first time I missed a week).

An elegant defense of prescriptivism
Quoted by Harm*less Drudg*ery, with some additional discussion at Language Log.

Are differences in brain connectivity in autism actually motion artifact?
Neuroskeptic considers the possibilities.

What is the purpose of a university?
Boston Magazine argues that it is to produce tech start-ups, not lawyers and doctors. Funny, I thought it was to gather, create, and disseminate knowledge, but apparently that is old-school thinking.

Neuroskeptic publishes under own name
Future archaeologists are going to be very confused.

Bidenology
Language Log tries to figure out whether Biden is proud to be vice president.

Open Science Collaboration's first paper

Perspectives on Psychological Science is making history this issue by publishing a paper by a blogger under the blogger's pseudonym (reportedly the first such paper), as well as the first paper by the Open Science Collaboration (to which I am a contributor, so here's to many more!).

The issue, which is currently open access, is focused on issues of replicability. The Open Science Collaboration has a number of goals with respect to changing research practices in psychology. The main project so far has been the Reproducibility Project, which
is an open, large-scale collaborative effort to systematically examine the rate and predictors of reproducibility in psychological science. So far, 72 volunteer researchers from 41 institutions have organized to openly and transparently replicate studies published in three prominent psychological journals in 2008.
This is something pretty close to my heart, which is why I am involved. As my co-author and I pointed out in "Tracking replicability as a method of post-publication open evaluation," for all the concern about replicability in psychology and other sciences, there is remarkably little systematic evidence one way or another (we did our best to thoroughly review the literature; you can check out our findings in the paper). What kinds of reforms we should put in place depend on how bad the problem is. If the problem isn't that bad -- and for all we know, it isn't -- then there is no reason to implement costly, unpleasant reforms.

You can read more about the project in the paper. There is still time for interested researchers to join the project. Just sayin'.
-------

Open Science Collaboration. (2012). An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspectives on Psychological Science, 7(6), 657-660. DOI: 10.1177/1745691612462588

New Experiment: Ignore That!

**UPDATE: Apparently the examples below didn't display correctly on some computers. I think this is now fixed.**

It can be very hard to ignore irrelevant information. I personally can't work when there is music with English lyrics playing (overheard conversations are difficult, too, so I don't often work in cafes, at least not without ear plugs).

There are a number of classic studies in psychology looking at our ability to ignore distracting information. For instance, suppose that you are asked to identify which direction the arrow in the middle of the sequence below is pointing:

<--  <--  <--

You will typically do that faster and more accurately than you would for the sequence below:

<--  -->  <--

Even though the first and last arrows are irrelevant, they distract you and lead to slower, less accurate responses. The original study (to my knowledge) to demonstrate this effect -- using a slightly different method involving letters rather than arrows -- was Eriksen and Eriksen's 1974 paper, cited at the end of this post.
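The flanker manipulation itself is easy to operationalize. As a minimal sketch (assuming trials are represented as lists of arrow strings, a representation I'm inventing here for illustration): a trial is congruent when every flanker points the same way as the central target.

```python
# Minimal sketch of flanker-trial classification: the target is the
# middle item; a trial is congruent if every flanker matches it.

def classify_trial(arrows):
    mid = len(arrows) // 2
    target = arrows[mid]
    flankers = arrows[:mid] + arrows[mid + 1:]
    return "congruent" if all(f == target for f in flankers) else "incongruent"

print(classify_trial(["<--", "<--", "<--"]))  # congruent
print(classify_trial(["<--", "-->", "<--"]))  # incongruent
```

The flanker effect is then just the difference in speed and accuracy between the two trial types.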

White Bears

Another classic study is the "White Bear" study from Daniel Wegner and colleagues. Do the following: For the next five seconds, try not to think about a white bear. 

This turns out to be very difficult to do. Although in general you probably rarely think about white bears, when asked not to do so, it becomes nearly impossible.

A new GamesWithWords.org experiment

I recently posted a new experiment -- Ignore That! -- at GamesWithWords.org, which investigates another classic "mental control" phenomenon. In it, you will try to answer, as quickly and accurately as possible, which color a word is displayed in (for instance, the word "hello" printed in red ink). This seems simple enough, but add some distracting information and it becomes quite difficult. (There are actually two parts to the experiment: one uses color and the other uses the direction of an arrow, but both get at the same phenomenon.)

The experiment takes about 5 minutes or less. At the end, you will be able to see your own results and find out just how distracted you were by the distracting information. 

Try the experiment here:


-----

Wegner, D., Schneider, D., Carter, S., and White, T. (1987). Paradoxical effects of thought suppression. Journal of Personality and Social Psychology, 53(1), 5-13. DOI: 10.1037//0022-3514.53.1.5


Eriksen, B., and Eriksen, C. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics, 16(1), 143-149. DOI: 10.3758/BF03203267

Boston University Conference on Language Development: Day 2

This year marks my 7th straight BUCLD, the major yearly language acquisition conference. See previous posts for my notes on Day 1 and Day 3.

Verbing nouns

Many if not all English nouns can be turned into verbs. The verb's meaning is related to the noun's, but not always in the same way. Consider "John milked the cow" and "John watered the garden". In the first case, John extracts a liquid from the cow; in the second, he adds liquid to the garden.

Maybe this is just something we have to learn in each case, but people seem to have strong intuitions about new verbs. Let's say that there is a substance called "dax" that comes from the dax tree. If I were to dax a tree, am I taking dax out of the tree or adding dax to the tree? Most people think the first definition is right. Now let's say there is something called "blick" which is a seasoning that people often add to soup. If I blick some soup, most people think I'm adding blick to the soup, not taking blick out of the soup. (There are other types of noun-derived verbs as well, but they are a topic for another time.)

These examples suggest a hypothesis: if a noun refers to a substance that usually comes from a specific source, then the derived verb probably refers to the action of extracting that substance. If the noun refers to something that doesn't come from any particular source but is often added to things, then the derived verb refers to that process of adding the substance to something.

Mahesh Srinivasan of UCSD presented joint work with David Barner in which they tested this hypothesis. Probably the most informative of the experiments was one with made-up nouns, much like my "dax" and "blick" examples above. Interestingly, while children were pretty sure that "to blick" meant "put blick on something" (the experiment involved several such nouns, and the children had strong intuitions about all of them), they were much less sure what "to dax" (and similar verbs) meant. Other experiments also showed that young children have more difficulty understanding existing substance-extraction noun-derived verbs (to milk/dust/weed/etc.) than substance-adding noun-derived verbs (to water/paint/butter). And interestingly, English has many more of the latter type of verb than the former.

So, as usual, answering one question leads to another. While they found strong support for their hypothesis about why certain noun-derived verbs have the meanings they do, they also found that children find one kind of verb easier to learn than the other, which demands an explanation. They explored a few hypotheses. One has to do with the "goal" bias described in previous work by Laura Lakusta and colleagues: generally, when infants watch a video in which an object goes from one location to another, they pay more attention to, and remember better, the location the object ended up at than the location it came from. Whatever the answer, learning biases -- particularly in young children -- are interesting because they provide clues as to the structure of the mind.

Verb biases in structure priming

One of the talks most-mentioned among the folks I talked to at BUCLD was one on structural priming by Michelle Peter (with Ryan Blything, Caroline Rowland, and Franklin Chang, all of the University of Liverpool). The idea behind structural priming is that using a particular syntactic structure once tends to lead to using it more again in the future (priming). The structure under consideration here was the so-called dative alternation:

(1) Mary gave a book to John.
(2) Mary gave John a book.

Although the two sentences mean the same thing (maybe -- that's a long post in itself), notice the difference in word order between (1) and (2). The former is called the "prepositional object" structure, and the second is called the "double object" structure. Some time ago, it was discovered that if people use a given verb (e.g., give) in the prepositional object form once, they are more likely to use that verb in the same form again next time they have to use that verb (and vice versa for the double object form). More recently, it was discovered that using one verb (e.g., give) in the prepositional object form made it more likely to use another verb (e.g., send) in that same form (and again vice versa for the double object form). This suggests that the syntactic form itself is represented in some way that is (at least partially) independent of the verb in question, which is consistent with theories involving relatively abstract grammar.

Or maybe not. This has been highly controversial over the past several years, with groups of researchers (including the Rowland group) showing evidence of what they call a "lexical boost": priming is stronger from the same verb to the same verb, which they take as evidence that grammar is at least partly word-specific. Interestingly, they have now found that children do *not* show the same lexical boost (which, if I remember correctly, has been found by other researchers from the "abstract grammar" camp before, but not by those in the "lexically-specific grammar" camp).

This seems consistent with a theory of grammar on which children start out with relatively general grammatical structures but, as they get older, memorize particularly frequent constructions -- thus, as far as processing goes, grammar becomes increasingly lexically-specific with age (though the abstract structures are still around to allow for productivity). This is the opposite of the speakers' favored theory, in which grammar becomes more abstract as you get older. They did find some aspects of their data that they thought reflected lexically-specific processing in children; it's complex, so I won't discuss it here (I didn't have time to get it all down in my notes and don't want to make a mistake).

There was also a talk by Kyae-Sung Park (collaborator: Bonnie D. Schwartz, both of the University of Hawai'i) on the Korean version of the dative alternation, finding that the more common form is learned earlier by second-language learners of Korean. I was interested in finding out more about the structure of Korean, but I don't know the second-language acquisition research well enough to integrate their main findings into the larger literature.

Other studies

There were many other good talks. The ones I saw included a study by Wang & Mintz, arguing that previous studies that looked at the overlap in the contexts in which different determiners occur in child speech -- which had been used to suggest that young children don't have an abstract grammatical category "determiner" -- were confounded by the small size of the corpora used. If you use a similarly small corpus of adult speech, you'd come to the same conclusion. [The analyses were much cooler and more detailed than this quick overview can get across.]

------------
Lakusta, L., Wagner, L., O'Hearn, K., & Landau, B. (2007). Conceptual foundations of spatial language: Evidence for a goal bias in infants. Language Learning and Development, 3(3), 179-197. DOI: 10.1080/15475440701360168

Language fact of the day

The name that appears most often in Genesis is "Jacob", followed by "Joseph".

In other news, the most common word in Moby Dick is "the"; the most common noun (excluding pronouns) is, not surprisingly, "whale".

In Genesis and a number of other texts, three-letter words are more common than words of any other length (the one exception I've found so far is Moby Dick).

(Yes, I am learning to use NLTK, which so far I like a lot)
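For the curious, counts like these are easy to reproduce. NLTK's FreqDist does the bookkeeping for you, but the same tallies need nothing beyond the standard library. A minimal sketch (the tokenizer here is a crude regex of my own, not NLTK's):

```python
import re
from collections import Counter

def word_stats(text):
    """Return (word frequencies, word-length frequencies) for a text."""
    words = re.findall(r"[a-z]+", text.lower())   # crude tokenization
    word_freq = Counter(words)                    # e.g., how often "the" appears
    length_freq = Counter(len(w) for w in words)  # e.g., how many 3-letter words
    return word_freq, length_freq

opening = ("Call me Ishmael. Some years ago, never mind how long precisely, "
           "having little or no money in my purse, I thought I would sail "
           "about a little and see the watery part of the world.")
wf, lf = word_stats(opening)
print(wf.most_common(3))   # the most frequent words
print(lf.most_common(3))   # the most frequent word lengths
```

Run over the full text of Moby Dick or Genesis (e.g., from Project Gutenberg), this is all it takes to find that "the" wins and to plot the word-length distribution.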

Boston University Conference on Language Development: Day 3

This post continues my series on this year's BUCLD. While conferences are mostly about networking and seeing your friends, I also managed to attend a number of great talks.

Autism and homophones

Hugh Rabagliati got the morning started with a study (in collaboration with Noemi Hahn and Jesse Snedeker) of ambiguity (homophone) resolution. One of the better-known theories of Autism is that people with Autism have difficulty thinking about context (the "weak central coherence theory"). Rabagliati has spent much of his career so far looking at how people use context to interpret ambiguous words, so he decided to check to see whether people with Autism had any more difficulty than typically-developing folk. (Note that many people with Autism have general language delays. Presumably people with language delays will have trouble on language tasks. This work focused on people with Autism who have roughly normal syntax and semantics.)

Participants listened to sentences with homophones (e.g., "bat") that either had very constraining contexts (e.g., "John fed the bat that he found in the forest") or not-very-constraining contexts (e.g., "John saw the bat that he found in the forest"). These sentences were part of a longer story. The participant's task was to pick out a relevant picture (of four on the computer screen) for part of the story. The trick was that one of the pictures was related to the other meaning of the homophone (e.g., a baseball glove, which is related to a baseball bat). Due to priming, if people are thinking about that other meaning of the homophone (baseball bat), they are likely to spend some of their time looking at the picture related to that meaning (the baseball glove). If they have successfully determined that the homophone "bat" refers to the animal, they should ignore the glove picture. Which is exactly what happened -- for both typically developing 6-9 year-olds and 6-9 year-olds with Autism. This is a problem for the weak central coherence theory.

Autism and prosody

In the same session, the Snedeker Lab presented work on prosody and Autism. This study, described by Becky Nappa, looked at contrast stress. Consider the following:

(1) "Look at the blue house. Now, look at the GREEN..."

What do you expect to come next? If you are like most people, you think that the next word is "house". Emphasizing "green" suggests that the contrast between the two sentences is the color, not the type of object to be looked at. Instead, if the color word was not stressed:

(2) "Look at the blue house. Now, look at the green..."

You don't know what is coming up, but it's probably not a house.

Atypical prosody is a diagnostic of Autism, at least according to some diagnostic criteria. That is, people with Autism often use prosody in unusual ways. But many of these folk have, as I pointed out above, general language difficulties. What about the language-intact Autism population? Here, the data have been less clear. There is still some unusual production of prosody, but unusual production doesn't necessarily mean that they don't understand prosody.

Nappa and Snedeker tested children's understanding of contrastive stress. While typically-developing children performed as expected (interpreting contrastive stress as meaning a new example of the same type of object will be described), highly verbal children with Autism performed exactly opposite: they expected a new type of object for (1) and the same type of object for (2).

A second study looked at given/new stress patterns. Compare:

(3) Put the candle on the table. Now put the candle on the counter.
(4) Put the candle on the table. Now put the CANdy on the counter.

In general, if you are going to re-mention the same object ("candle" in (3)), you don't stress it the second time around. When you are mentioning a new object -- especially if its name sounds similar to something you have already described -- you are likely to stress it. Here, interestingly, the ASD children were just as good as typically-developing children.

Nappa puts these two findings together and suggests that children with Autism have overgeneralized the stress pattern in (3-4) to cases like (1-2): in general, they think stressed words refer to something new.

Other Day 3 talks

There were other good talks on Day 3, but my notes always get sparser as a conference goes on. Researchers from Johns Hopkins University (the speaker was Kristen Johannes) argued that "differences between child and adult spatial language have been previously attributed to underdeveloped conceptual representations" (this is a quote from the abstract). In particular, children use the preposition "on" in strange ways. The researchers argue that this is because children have an impoverished spatial vocabulary (there are a number of useful words they don't know) and, lacking those words, they over-apply "on" -- not because they conceptualize "on"-ness differently, but because they are, literally, at a loss for words. When you make adults describe spatial arrangements without using the fancy words they normally use, they end up over-applying "on" in much the same way kids do. (Here I am working from memory plus the abstract -- my notes, as I mentioned, are incomplete.)

Careful readers will notice that I haven't written about Day 2 yet. Stay tuned.

Boston University Conference on Language Development: Day 1

This year marks my 7th straight BUCLD. BUCLD is the major yearly language acquisition conference. (IASCL is the other sizable language acquisition conference, but meets only every three years; it is also somewhat more international than BUCLD and the Empiricist contingent is a bit larger, whereas BUCLD is *relatively* Nativist).

NOTE I'm typing this up during a break at the conference, so I've spent less time making these notes accessible to the general public than usual. Some parts may be opaque to you if you don't know the general subject matter. Feel free to ask questions in the comments.

Day 1 (Friday, Nov. 2)

What does eyetracking tell us about kids' sentence processing?

The conference got off to a great start with Jesse Snedeker's 9am talk, "Negation in children's online language comprehension" (for those who don't know, there are 3 talks at any given time; no doubt the other two 9am talks were good, but I wasn't at them). I was actually more interested in the introduction than the conclusion. Over the last 15 years, the Visual World Paradigm has come to dominate how we study children's language processing. Here is how I usually describe the paradigm to participants in my studies: "People typically look at what is being talked about. So if I talk about the window, you'll probably automatically look at the window. So we can measure what people look at as they listen to sentences to get a sense of what they think the sentence is about at any given time."

Snedeker's thesis was that we actually don't know what part of language comprehension this paradigm measures. Does it measure your interpretation of individual words or of the sentence as a whole? One of the things about language is that words have meanings by themselves, but when combined into sentences, new meanings arise that aren't part of any individual word. So "book" is a physical object, but if I say "The author started the book", you likely interpret "book" as something closer to an activity ("writing the book") than a physical object.

Because the Visual World Paradigm is used extensively by sentence-comprehension people (like me), we hope that it measures sentence comprehension, not just individual words. Snedeker walked through many of the classic results from the Visual World Paradigm and argued that they are consistent with the possibility that the Visual World Paradigm just measures word meaning, not sentence meaning.

She then presented a project showing that, at least in some cases, the Visual World Paradigm is sensitive to sentence meaning, which she did by looking at negation. In "John broke the plate", we are talking about a broken plate, whereas in "John didn't break the plate", we are not. So negation completely changes the meaning of the sentence. She told participants stories about different objects while the participants looked at pictures of those objects on a computer screen (the screen of an automatic eyetracker, which can tell where the participant is looking). For example, the story might be about a clumsy child who was carrying dishes around and broke some of them but not others (and so, on the screen, there was a picture of a broken plate and a picture of a not-broken plate). She found that adults and even children as young as three years old look at the broken plate when they heard "John broke the plate" but at the not-broken plate when they heard "John didn't break the plate", and they did so very quickly ... which is what you would expect if eyetracking were measuring your current interpretation of the sentence rather than just your current interpretation of the individual words (in which case, when you hear the word "plate", either plate will do).

(This work was joint work with Miseon Lee -- a collaborator of mine -- Tracy Brookhyser and Matthew Jiang.)

The First Mention Effect

W. Quin Yow of Singapore University of Technology and Design presented a project looking at pronoun interpretation (a topic close to my heart). She looked at sentences in which adults typically interpret the pronoun as referring to the previous subject (these are not the so-called "implicit causality" sentences I discuss most on this blog):
Miss Owl is going out with Miss Ducky. She wants her bag. 
She found, as usual, a strong preference for "she" to refer to Miss Owl in this (and similar) sentences. There is one older study that did not find such a preference in children roughly 4-6 years old, but several other studies have found evidence of (weak) first-mention effects in such sentences, including [shameless self-plug] work I presented at BUCLD two years ago.

Yow compared monolingual English-speaking four year-olds and bilingual English-speaking four year-olds (their "other" language differed from kid to kid). While only the bilinguals showed a statistically significant first-mention effect, the monolingual kids were only just barely not above chance, and their performance was almost identical to the bilinguals'. While the first-mention effects she saw were weaker than what I saw in my own work, her kids were slightly younger (four year-olds instead of five year-olds).

The additional twist she added was that, in some conditions, the experimenter pointed to one of the characters in the story at the moment she uttered the pronoun. This had a strong effect on how adults and bilingual children interpreted the pronoun; the effect was weaker for monolingual children, but I couldn't tell whether it was significantly weaker (with only 16 kids per group, a certain amount of variability between groups is expected).

In general, I interpret this as more evidence that young children do have (weak) first-mention biases. And it is nice to have one's results replicated.

Iconicity in sign language

Rachel Magid, a student of Jennie Pyers at Wellesley College, presented work on children's acquisition of sign language. Some signs are "iconic" in that they resemble the thing being referred to: for instance, miming swinging a hammer as the sign for "hammer" (I remember this example from the talk, but I do not remember whether that's an actual sign in ASL or any other sign language). Spoken languages have iconic words as well, such as "bark", which both means and sort of sounds like the sound a dog makes. This brings up an important point: iconic words/signs resemble the things they refer to, but not perfectly, and in fact it is often difficult to guess what they refer to, though once it has been explained to you, the relationship is obvious.

The big result was that four year-old hearing children found it easier to learn iconic than non-iconic signs, whereas three year-olds did not. Similar results were found for deaf children (though if memory serves, the three year-old deaf children were trending towards doing better with iconic signs, though the number of subjects -- 9 deaf three year-olds -- was too small to say much about it).

Why care? There are those who think that early sign language acquisition -- and presumably the creation of sign languages themselves -- derives from imitation and mimicry (basically, sign languages and sign language acquisition start as a game of charades). If so, then you would expect those signs that are most related to imitation/mimicry to be the easiest to learn. However, the youngest children -- even deaf children who have learned a fair amount of sign language -- don't find them especially easy to learn. Why older children and adults *do* find them easier to learn still requires an explanation, though.

[Note: This is my interpretation of the work. Whether Magid and Pyers would endorse the last paragraph, I am not sure.]

Briefly-mentioned

Daniele Panizza (another occasional collaborator of mine) presented work done with a number of folks, including Stephen Crain, on 3-5 year-olds' interpretations of numbers. The question is whether young children understand reversals of entailment scales. So, if you say "John has two butterflies", that means that he does not have three, whereas saying "If John has two butterflies, give him a sticker" means that if he has two OR MORE butterflies, give him a sticker. [NOTE: even adults find this "at least two" reading to be a bit iffy; the phenomenon is that they find the "at least two" reading much better in a downward-entailing context like a conditional than in a normal declarative.] Interestingly, another colleague and I had spent a good part of the last week wondering whether children that age understood this, so we were happy to learn the answer so quickly: they do.

In the next talk, Einat Shetreet presented work with Julia Reading, Nadine Gaab and Gennaro Chierchia also looking at entailment scales, but with scalar quantifiers rather than numerals. Adults generally think "John ate some of the cookies" means that he did not eat all of them (some = some but not all), whereas "John didn't eat all of the cookies" means that he ate some of them (not all = some). They found that six year olds also get both of these inferences, which is consistent with the just-mentioned Panizza study.

These studies may seem esoteric but get at recent theories of scalar implicature. Basically, theories of scalar implicature have been getting much more complex recently, suggesting that this relatively simple phenomenon involves many moving pieces. Interestingly, children are very bad at scalar implicature (even up through the early elementary years, children are much less likely to treat "some" as meaning "some but not all", so they'll accept sentences like "Some elephants have trunks" as reasonable sentences, whereas adults tend to find such sentences quite odd). So the race is on to figure out which of the many component parts of scalar implicature are the limiting step in early language acquisition.

There were many other good talks on the first day; these merely represent those for which I have the most extensive notes. 

Maybe first-borns aren't smarter after all

Although it is conventional wisdom that your birth order affects your personality, it's a hotly-disputed topic among scientists, and in fact my sense is that, if anything, a majority of researchers doubt the existence of birth order effects. Findings have been slippery: one study suggests that, for instance, first-borns are risk-takers, whereas another suggests that they aren't.

Birth Order & Intelligence

One of the most-researched topics has been intelligence: A wide variety of studies have suggested that first-borns have higher IQ scores than later-borns. While not every study has shown this, Bjerkedal and colleagues published in 2007 what seemed to be the definitive proof. They looked at IQ tests for 250,000 Norwegian male conscripts born from 1967 to 1988 -- that's more than 80% of all Norwegian men born in that time period -- and found that first-born sons have IQs about 2.3 points higher than second-born sons.

Because of the size and completeness of this dataset, they were able to rule out various possible confounds in the data that have been sources of controversy in previous studies. For instance, because wealthy, well-educated families rarely have more than two children, simply being a middle child correlates with being less wealthy and having less access to quality education (and health care, etc.). So one might find that middle children have lower IQs, when in fact what you are measuring is not an effect of birth order, but of socio-economic status. Bjerkedal and colleagues were able to control for such factors.

The Flynn Effect

But, as Satoshi Kanazawa of the London School of Economics points out in a recent paper, there was one confound that they didn't consider: the Flynn Effect. Over the last hundred years -- and possibly longer -- the average person has been doing better and better on IQ tests. In fact, this is something that Bjerkedal and colleagues noticed in their own data, with IQ scores rising slightly from 1984 (the first year of their study) to the mid 1990s.

Because of this, IQ test manufacturers have been constantly raising the bar: you have to get more questions right to get an IQ of 100 now than you did fifty years ago. (What has caused the Flynn effect is one of the Big Questions in current research and a topic for a much longer post.) And Bjerkedal and colleagues did the same thing:
To minimize these variations, scores were standardized by calculating deviations from an overall mean score of 5.00 for each calendar year and age.
The idea is that your score is based not on how many questions you got right, but how many questions you got right compared with everyone else who took the IQ test in the same year. Kanazawa points out that this is a confound: The average performance was higher in the 1990s than in the 1980s. So if two people who took the test in 1985 and 1995 answered the exact same questions correct, the one who took it in 1995 would have a lower IQ than the one who took it in 1985. This means that if you compare two siblings, the older sibling will -- all else equal -- have a higher IQ score than the younger sibling.
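The arithmetic of the confound is worth making concrete. Here is a toy sketch with entirely made-up numbers (a 1-point Flynn-style rise in the yearly mean and a standard deviation of 15 are my assumptions, not figures from the paper): two siblings answer exactly the same questions correctly, but because the younger sibling is standardized against a later, higher-scoring cohort, the younger sibling comes out with the lower normalized score.

```python
def year_normalized(raw, year_mean, year_sd):
    """Standardize a raw score against that calendar year's norms (a z-score)."""
    return (raw - year_mean) / year_sd

# Made-up numbers: the population mean drifts up 1 point between test years
# (a Flynn-style rise); both siblings get identical raw scores.
older = year_normalized(raw=100, year_mean=99.0, year_sd=15.0)    # tested earlier
younger = year_normalized(raw=100, year_mean=100.0, year_sd=15.0) # tested later

# Identical raw performance, yet the older sibling's normalized score is higher
# -- a spurious "birth order effect" created by the normalization alone.
print(older, younger)
```

This is exactly Kanazawa's point: any within-family comparison on year-normalized scores builds the Flynn effect into the birth-order contrast.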

Caveats

There is one limitation to Kanazawa's story. While Bjerkedal and colleagues report that the average score did increase from 1985 through the early 1990s, they report that the scores then decreased back down to the original level between 1998 and 2002 (the study ended in 2004). Also, the increase was very small (about 1 IQ point) compared to the birth order effect that they reported (a drop of 1-2 IQ points for each older brother). So whether the Flynn effect is sufficient to explain away the Bjerkedal results is hard to say.*

Nonetheless, Kanazawa has one more card up his sleeve: his own study. Kanazawa looked at un-scaled data from IQ tests given to 17,419 children in the UK, finding no effect of birth order on intelligence.

That said, the statistical analyses are complicated, involving several transformations. While the transformations seem reasonable (mostly PCA), the transformations Bjerkedal used also seemed reasonable until we realized that they weren't. I'd like to see whether Kanazawa's null effect holds up on the truly raw data as well.

Conclusions

Birth order effects are interesting scientifically because they get at the following question: How does your home environment affect the person you become, if at all? Many of the leading minds today suspect that your home environment has little to no effect on you, at least not in the long term. Birth order effects are a very useful test case. Relatively little theoretical rides on whether oldest siblings are the smartest or youngest siblings are the smartest, but if you could show that birth order affected intelligence, that would be a proof-of-concept that home environment affects the adult you become.

[BTW Nobody doubts that home environment has a strong impact on future income, level of educational achievement, etc. The question is whether it affects your personality, making you introverted or extroverted, etc.]

If the intelligence data do not hold up, that leaves -- to my knowledge -- no direct measures of personality or cognitive function for which we have solid evidence that they are affected by birth order. There is one indirect measure that, to my knowledge, has never been challenged: people tend to be friends with and marry others of the same birth order (some of the evidence came from studies run at gameswithwords.org -- thank you to all who participated). Since we know that people marry others with similar personalities (on average), a plausible explanation is that people with similar birth order have similar personalities, leading them to marry one another. However, the fact that no one has thought of another explanation doesn't mean that there isn't one. Time will tell.

See also: My review of birth order effects for SciAm Mind from 2010.

*Bjerkedal and colleagues renormalized a 9-point scaled score. I cannot tell from the article whether that 9-point scale itself was based on standardized norms -- though most likely it was -- and whether those norms were re-standardized during the 21 years of the study.

------

Kanazawa, S. (2012). Intelligence, birth order, and family size. Personality and Social Psychology Bulletin, 38(9), 1157-1164. DOI: 10.1177/0146167212445911