

Boston University Conference on Language Development: Day 2

This year marks my 7th straight BUCLD, the major yearly language acquisition conference. See previous posts for my notes on Day 1 and Day 3.

Verbing nouns

Many if not all English nouns can be turned into verbs. The verb's meaning is related to the noun, but not always in the same way. Consider "John milked the cow" and "John watered the garden". In the first case, John extracts a liquid from the cow; in the second, he adds liquid to the garden.

Maybe this is just something we have to learn in each case, but people seem to have strong intuitions about new verbs. Let's say that there is a substance called "dax" that comes from the dax tree. If I were to dax a tree, would I be taking dax out of the tree or adding dax to it? Most people think the first definition is right. Now let's say there is something called "blick", a seasoning that people often add to soup. If I blick some soup, most people think I'm adding blick to the soup, not taking blick out of it. (There are other types of noun-derived verbs as well, but they are a topic for another time.)

These examples suggest a hypothesis: if a noun refers to a substance that usually comes from a specific source, then the derived verb probably refers to the action of extracting that substance. If the noun refers to something that doesn't come from any particular source but is often added to things, then the derived verb refers to that process of adding the substance to something.

Mahesh Srinivasan of UCSD presented joint work with David Barner in which they tested this hypothesis. Probably the most informative of the experiments was one with made-up nouns, much like my "dax" and "blick" examples above. Interestingly, while children were pretty sure that "to blick" meant "put blick on something" (the experiment involved several such nouns, and the children had strong intuitions about all of them), they were much less sure what "to dax" (and similar verbs) meant. Other experiments also showed that young children have more difficulty understanding existing substance-extraction noun-derived verbs (to milk/dust/weed/etc.) than substance-adding noun-derived verbs (to water/paint/butter). And interestingly, English has many more of the latter type of verb than the former.

So, as usual, answering one question leads to another. While they found strong support for their hypothesis about why certain noun-derived verbs have the meanings they do, they also found that children find one kind of verb easier to learn than the other, which demands an explanation. They explored a few hypotheses. One has to do with the "goal" bias described in previous work by Laura Lakusta and colleagues: generally, when infants watch a video in which an object goes from one location to another, they pay more attention to, and remember better, the location the object ended up at than the location it came from. Whatever the answer, learning biases -- particularly in young children -- are interesting because they provide clues as to the structure of the mind.

Verb biases in structure priming

One of the talks most mentioned among the folks I talked to at BUCLD was one on structural priming by Michelle Peter (with Ryan Blything, Caroline Rowland, and Franklin Chang, all of the University of Liverpool). The idea behind structural priming is that using a particular syntactic structure once makes you more likely to use it again in the future (priming). The structure under consideration here was the so-called dative alternation:

(1) Mary gave a book to John.
(2) Mary gave John a book.

Although the two sentences mean the same thing (maybe -- that's a long post in itself), notice the difference in word order between (1) and (2). The former is called the "prepositional object" structure, and the latter is called the "double object" structure. Some time ago, it was discovered that if people use a given verb (e.g., give) in the prepositional object form once, they are more likely to use the same form the next time they use that verb (and vice versa for the double object form). More recently, it was discovered that using one verb (e.g., give) in the prepositional object form made it more likely that another verb (e.g., send) would be used in that same form (and again vice versa for the double object form). This suggests that the syntactic form itself is represented in some way that is (at least partially) independent of the verb in question, which is consistent with theories involving relatively abstract grammar.

Or maybe not. This has been highly controversial over the last several years, with groups of researchers (including the Rowland group) showing evidence of what they call a "lexical boost" -- priming is stronger from the same verb to the same verb -- which they take as evidence that grammar is at least partly word-specific. Interestingly, they have now found that children do *not* show the same lexical boost (which, if I remember correctly, has been found before by other researchers from the "abstract grammar" camp, but not by those in the "lexically-specific grammar" camp).

This seems consistent with a theory of grammar on which children start out with relatively general grammatical structures but, as they get older, memorize particularly frequent constructions -- thus, as far as processing goes, grammar becomes increasingly lexically-specific with age (though the abstract structures are still around to allow for productivity). This is the opposite of the speakers' favored theory, on which grammar becomes more abstract as you get older. They did find some aspects of their data that they thought reflected lexically-specific processing in children; it's complex, so I won't discuss it here (I didn't have time to get it all down in my notes and don't want to make a mistake).

There was also a talk by Kyae-Sung Park (collaborator: Bonnie D. Schwartz, both of the University of Hawai'i) on the Korean version of the dative alternation, finding that the more common form is learned earlier by second-language learners of Korean. I was interested in finding out more about the structure of Korean, but I don't know the second-language acquisition research well enough to integrate their main findings into the larger literature.

Other studies

There were many other good talks. The ones I saw included a study by Wang & Mintz, arguing that previous studies that looked at the overlap in the contexts in which different determiners occur in child speech -- which had been used to suggest that young children don't have an abstract grammatical category "determiner" -- were confounded by the small size of the corpora used. If you use a similarly small corpus of adult speech, you'd come to the same conclusion. [The analyses were much cooler and more detailed than this quick overview can get across.]
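To make the corpus-size point concrete, here is a rough sketch -- my own toy illustration, not Wang & Mintz's actual measure or code -- of how a determiner-overlap score can drop simply because the sample is small, even when the process generating the data uses determiners fully interchangeably:

```python
import random
from collections import defaultdict

def overlap_score(bigrams):
    """Proportion of noun types that occur with both determiners."""
    dets_by_noun = defaultdict(set)
    for det, noun in bigrams:
        dets_by_noun[noun].add(det)
    both = sum(1 for dets in dets_by_noun.values() if {"a", "the"} <= dets)
    return both / len(dets_by_noun) if dets_by_noun else 0.0

# Toy "corpus" of determiner+noun bigrams; a real analysis would pull these
# from transcribed speech (e.g., CHILDES transcripts).
nouns = [f"noun{i}" for i in range(50)] * 40
full_corpus = [(random.choice(["a", "the"]), noun) for noun in nouns]

for sample_size in (2000, 200, 50):
    sample = random.sample(full_corpus, sample_size)
    print(sample_size, round(overlap_score(sample), 2))

# Smaller samples give lower overlap scores even though every noun here is
# equally happy with either determiner -- the apparent "lexical specificity"
# is an artifact of sample size.
```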

------------
Lakusta, L., Wagner, L., O'Hearn, K., & Landau, B. (2007). Conceptual foundations of spatial language: Evidence for a goal bias in infants. Language Learning and Development, 3(3), 179-197. DOI: 10.1080/15475440701360168

Boston University Conference on Language Development: Day 3

This post continues my series on this year's BUCLD. While conferences are mostly about networking and seeing your friends, I also managed to attend a number of great talks.

Autism and homophones

Hugh Rabagliati got the morning started with a study (in collaboration with Noemi Hahn and Jesse Snedeker) of ambiguity (homophone) resolution. One of the better-known theories of Autism is that people with Autism have difficulty thinking about context (the "weak central coherence theory"). Rabagliati has spent much of his career so far looking at how people use context to interpret ambiguous words, so he decided to check to see whether people with Autism had any more difficulty than typically-developing folk. (Note that many people with Autism have general language delays. Presumably people with language delays will have trouble on language tasks. This work focused on people with Autism who have roughly normal syntax and semantics.)

Participants listened to sentences with homophones (e.g., "bat") that appeared either in very constraining contexts (e.g., "John fed the bat that he found in the forest") or in not-very-constraining contexts (e.g., "John saw the bat that he found in the forest"). These sentences were part of a longer story. The participant's task was to pick out a relevant picture (of four on the computer screen) for part of the story. The trick was that one of the pictures was related to the other meaning of the homophone (e.g., a baseball glove, which is related to a baseball bat). Due to priming, if people are thinking about that other meaning of the homophone (baseball bat), they are likely to spend some of their time looking at the picture related to that meaning (the baseball glove). If they have successfully determined that the homophone "bat" refers to the animal, they should ignore the glove picture. Which is exactly what happened -- for both typically developing 6-9 year-olds and 6-9 year-olds with Autism. This is a problem for the weak central coherence theory.
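For readers curious how looks get turned into numbers, here is a minimal sketch of the standard Visual World measure -- the proportion of fixation time on the competitor picture during some window after the homophone is heard. The data format, window, and numbers are made up for illustration; this is not the actual analysis pipeline from the study:

```python
def competitor_proportion(fixations, window_start, window_end, competitor="glove"):
    """fixations: list of (start_ms, end_ms, picture) tuples for one trial."""
    total = competitor_time = 0
    for start, end, picture in fixations:
        # Clip each fixation to the analysis window.
        overlap = min(end, window_end) - max(start, window_start)
        if overlap <= 0:
            continue
        total += overlap
        if picture == competitor:
            competitor_time += overlap
    return competitor_time / total if total else 0.0

# One made-up trial: times in ms relative to sentence onset.
trial = [(900, 1400, "bat_animal"), (1400, 1700, "glove"), (1700, 2300, "bat_animal")]
print(competitor_proportion(trial, window_start=1200, window_end=2200))  # 0.3
# More looks to the glove would suggest the baseball sense of "bat" is still
# active; few looks suggest context has already ruled it out.
```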

Autism and prosody

In the same session, the Snedeker Lab presented work on prosody and Autism. This study, described by Becky Nappa, looked at contrastive stress. Consider the following:

(1) "Look at the blue house. Now, look at the GREEN..."

What do you expect to come next? If you are like most people, you think that the next word is "house". Emphasizing "green" suggests that the contrast between the two sentences is the color, not the type of object to be looked at. If, instead, the color word is not stressed:

(2) "Look at the blue house. Now, look at the green..."

You don't know what is coming up, but it's probably not a house.

Atypical prosody is a diagnostic of Autism, at least according to some diagnostic criteria. That is, people with Autism often use prosody in unusual ways. But many of these folk have, as I pointed out above, general language difficulties. What about the language-intact Autism population? Here, the data has been less clear. There is still some unusual production of prosody, but that doesn't mean that they don't understand prosody.

Nappa and Snedeker tested children's understanding of contrastive stress. While typically-developing children performed as expected (interpreting contrastive stress as meaning that a new example of the same type of object will be described), highly verbal children with Autism showed exactly the opposite pattern: they expected a new type of object for (1) and the same type of object for (2).

A second study looked at given/new stress patterns. Compare:

(3) Put the candle on the table. Now put the candle on the counter.
(4) Put the candle on the table. Now put the CANdy on the counter.

In general, if you are going to re-mention the same object ("candle" in (3)), you don't stress it the second time around. When you are mentioning a new object -- especially if its name sounds similar to something you have already described -- you are likely to stress it. Here, interestingly, the ASD children were just as good as typically-developing children.

Nappa puts these two findings together and suggests that children with Autism have overgeneralized the stress pattern in (3-4) to cases like (1-2). In general, they think stressed words refer to something new.

Other Day 3 talks

There were other good talks on Day 3, but my notes always get sparser as a conference goes on. Researchers from Johns Hopkins University (the speaker was Kristen Johannes) argued that "differences between child and adult spatial language have been previously attributed to underdeveloped conceptual representations" (this is a quote from the abstract). In particular, children use the preposition "on" in strange ways. They argue that this is because children have an impoverished spatial vocabulary (there are a number of useful words they don't know) and, given that they don't have those words, they over-apply "on" -- not so much because they conceive of "on"-ness differently, but because they are, literally, at a loss for words. When you make adults describe spatial arrangements without using the fancy adult words they normally use, they end up over-applying "on" in much the same way kids do. (Here I am working from memory plus the abstract -- my notes, as I mentioned, are incomplete.)

Careful readers will notice that I haven't written about Day 2 yet. Stay tuned.

Boston University Conference on Language Development: Day 1

This year marks my 7th straight BUCLD. BUCLD is the major yearly language acquisition conference. (IASCL is the other sizable language acquisition conference, but meets only every three years; it is also somewhat more international than BUCLD and the Empiricist contingent is a bit larger, whereas BUCLD is *relatively* Nativist).

NOTE I'm typing this up during a break at the conference, so I've spent less time making these notes accessible to the general public than usual. Some parts may be opaque to you if you don't know the general subject matter. Feel free to ask questions in the comments.

Day 1 (Friday, Nov. 2)

What does eyetracking tell us about kids' sentence processing?

The conference got off to a great start with Jesse Snedeker's 9am talk, "Negation in children's online language comprehension" (for those who don't know, there are 3 talks at any given time; no doubt the other two 9am talks were good, but I wasn't at them). I was actually more interested in the introduction than the conclusion. Over the last 15 years, the Visual World Paradigm has come to dominate how we study children's language processing. Here is how I usually describe the paradigm to participants in my studies: "People typically look at what is being talked about. So if I talk about the window, you'll probably automatically look at the window. So we can measure what people look at as they listen to sentences to get a sense of what they think the sentence is about at any given time."

Snedeker's thesis was that we actually don't know what part of language comprehension this paradigm measures. Does it measure your interpretation of individual words or of the sentence as a whole? One of the things about language is that words have meanings by themselves, but when combined into sentences, new meanings arise that aren't part of any individual word. So "book" is a physical object, but if I say "The author started the book", you likely interpret "book" as something closer to an activity ("writing the book") than a physical object.

Because the Visual World Paradigm is used extensively by sentence-comprehension people (like me), we hope that it measures sentence comprehension, not just individual words. Snedeker walked through many of the classic results from the Visual World Paradigm and argued that they are consistent with the possibility that the Visual World Paradigm just measures word meaning, not sentence meaning.

She then presented a project showing that, at least in some cases, the Visual World Paradigm is sensitive to sentence meaning, which she did by looking at negation. In "John broke the plate", we are talking about a broken plate, whereas in "John didn't break the plate", we are not. So negation completely changes the meaning of the sentence. She told participants stories about different objects while the participants looked at pictures of those objects on a computer screen (the screen of an automatic eyetracker, which can tell where the participant is looking). For example, the story might be about a clumsy child who was carrying dishes around and broke some of them but not others (and so, on the screen, there was a picture of a broken plate and a picture of a not-broken plate). She found that adults and even children as young as three years old looked at the broken plate when they heard "John broke the plate" but at the not-broken plate when they heard "John didn't break the plate", and they did so very quickly ... which is what you would expect if eyetracking were measuring your current interpretation of the sentence rather than just your current interpretation of the individual words (in which case, when you hear the word "plate", either plate will do).

(This work was joint work with Miseon Lee -- a collaborator of mine -- Tracy Brookhyser and Matthew Jiang.)

The First Mention Effect

W. Quin Yow of Singapore University of Technology and Design presented a project looking at pronoun interpretation (a topic close to my heart). She looked at sentences in which adults typically interpret the pronoun as referring to the previous subject (these are not the so-called "implicit causality" sentences I discuss most on this blog):
Miss Owl is going out with Miss Ducky. She wants her bag. 
She found, as usual, a strong preference for "she" to refer to Miss Owl in this and similar sentences. There is one older study that did not find such a preference in children roughly 4-6 years old, but several other studies have found evidence of (weak) first-mention effects in such sentences, including [shameless self-plug] work I presented at BUCLD two years ago.

Yow compared monolingual English-speaking four year-olds and bilingual English-speaking four year-olds (their "other" language differed from kid to kid). While only the bilinguals showed a statistically significant first-mention effect, the monolingual kids only just barely missed being above chance and performed almost identically to the bilinguals. While the first-mention effects she saw were weaker than what I saw in my own work, her kids were slightly younger (four year-olds instead of five year-olds).

The additional twist she added was that, in some conditions, the experimenter pointed to one of the characters in the story at the moment she uttered the pronoun. This had a strong effect on how adults and bilingual children interpreted the pronoun; the effect was weaker for monolingual children, but I couldn't tell whether it was significantly weaker (with only 16 kids per group, a certain amount of variability between groups is expected).

In general, I interpret this as more evidence that young children do have (weak) first-mention biases. And it is nice to have one's results replicated.

Iconicity in sign language

Rachel Magid, a student of Jennie Pyers at Wellesley College, presented work on children's acquisition of sign language. Some signs are "iconic" in that they resemble the thing being referred to: for instance, miming swinging a hammer as the sign for "hammer" (I remember this example from the talk, but I do not remember whether that's an actual sign in ASL or any other sign language). Spoken languages have iconic words as well, such as "bark", which both means and sort of sounds like the sound a dog makes. This brings up an important point: iconic words/signs resemble the things they refer to, but not perfectly, and in fact it is often difficult to guess what they refer to, though once it has been explained to you, the relationship is obvious.

The big result was that four year-old hearing children found it easier to learn iconic signs than non-iconic ones, whereas three year-olds did not. Similar results were found for deaf children (though if memory serves, the deaf three year-olds were trending towards doing better with iconic signs, but the number of subjects -- 9 deaf three year-olds -- was too small to say much about it).

Why care? There are those who think that early sign language acquisition -- and presumably the creation of sign languages themselves -- derives from imitation and mimicry (basically, sign languages and sign language acquisition start as a game of charades). If so, then you would expect the signs most closely related to imitation/mimicry to be the easiest to learn. However, the youngest children -- even deaf children who have learned a fair amount of sign language -- don't find them especially easy to learn. Why older children and adults *do* find them easier to learn still requires an explanation, though.

[Note: This is my interpretation of the work. Whether Magid and Pyers would endorse the last paragraph, I am not sure.]

Briefly-mentioned

Daniele Panizza (another occasional collaborator of mine) presented work done with a number of folks, including Stephen Crain, on 3-5 year-olds' interpretations of numbers. The question is whether young children understand reversals of entailment scales. So, if you say "John has two butterflies", that means he does not have three, whereas saying "If John has two butterflies, give him a sticker" means that if he has two OR MORE butterflies, give him a sticker [NOTE: even adults find this "at least two" reading to be a bit iffy; the phenomenon is that they find the "at least two" reading MUCH BETTER in a downward-entailing context like a conditional than in a normal declarative]. Interestingly, another colleague and I had spent a good part of the last week wondering whether children that age understood this, so we were happy to learn the answer so quickly: they do.

In the next talk, Einat Shetreet presented work with Julia Reading, Nadine Gaab and Gennaro Chierchia also looking at entailment scales, but with scalar quantifiers rather than numerals. Adults generally think "John ate some of the cookies" means that he did not eat all of them (some = some but not all), whereas "John didn't eat all of the cookies" means that he ate some of them (not all = some). They found that six year olds also get both of these inferences, which is consistent with the just-mentioned Panizza study.

These studies may seem esoteric but get at recent theories of scalar implicature. Basically, theories of scalar implicature have been getting much more complex recently, suggesting that this relatively simple phenomenon involves many moving pieces. Interestingly, children are very bad at scalar implicature (even up through the early elementary years, children are much less likely to treat "some" as meaning "some but not all", so they'll accept sentences like "Some elephants have trunks" as reasonable, whereas adults tend to find such sentences quite odd). So the race is on to figure out which of the many component parts of scalar implicature is the limiting step in early language acquisition.

There were many other good talks on the first day; these merely represent those for which I have the most extensive notes. 

Arcadia

The super-lame New Yorker review of the recent Broadway revival of Stoppard's "Arcadia" moved me to do a rare thing: write a letter to the editor. They didn't publish it, despite the fact that -- and I think I'm being objective here -- my letter was considerably better than the review. Reviews are no longer free on the New Yorker website (you can see a synopsis here), but I think my letter covers the main points. Here it is:

Hilton Als ("Brainstorm", Mar 28) writes about the recent revival of "Arcadia" that Stoppard's "aim is not to show us people but to talk about ideas." Elsewhere, Als calls the show unmoving and writes that Stoppard does better with tragicomedies.
"Arcadia" is not a show about ideas. It is about the relationship people have with ideas, particularly their discovery. Anyone who has spent any amount of time around academics would instantly recognize the characters as people, lovingly and realistically depicted. (Als singles out Billy Crudup's "amped-up characterization of the British historian Bernard Nightengale" as particularly mysterious. As Ben Brantley wrote in the New York Times review, "If you've spent any time on a college campus of late, you've met this [man].")
As an academic, I found the production a mirror on my own life and the people around me. Not everyone will have that experience. The beauty of theater (and literature) is that it gives us a peek into the inner lives of folk very different from ourselves. It is a shame Als was unable to take advantage of this opportunity.
Where the play focuses most closely on ideas is in the theme of an idea (Thomasina's) stillborn before its time. If one feels no pathos for an idea that came too soon, translate "idea" into "art" and "scientist" into "artist" and consider the tragedies of artists unappreciated in their time and quickly forgotten. Even a theater critic can find the tragedy in that.