Field of Science


Boston University Conference on Language Development: Day 3

This post continues my series on this year's BUCLD. While conferences are mostly about networking and seeing your friends, I also managed to attend a number of great talks.

Autism and homophones

Hugh Rabagliati got the morning started with a study (in collaboration with Noemi Hahn and Jesse Snedeker) of ambiguity (homophone) resolution. One of the better-known theories of Autism is that people with Autism have difficulty thinking about context (the "weak central coherence theory"). Rabagliati has spent much of his career so far looking at how people use context to interpret ambiguous words, so he decided to check to see whether people with Autism had any more difficulty than typically-developing folk. (Note that many people with Autism have general language delays. Presumably people with language delays will have trouble on language tasks. This work focused on people with Autism who have roughly normal syntax and semantics.)

Participants listened to sentences with homophones (e.g., "bat") that either had very constraining contexts (e.g., "John fed the bat that he found in the forest") or not-very-constraining contexts (e.g., "John saw the bat that he found in the forest"). These sentences were part of a longer story. The participant's task was to pick out a relevant picture (of four on the computer screen) for part of the story. The trick was that one of the pictures was related to the other meaning of the homophone (e.g., a baseball glove, which is related to a baseball bat). Due to priming, if people are thinking about that other meaning of the homophone (baseball bat), they are likely to spend some of their time looking at the picture related to that meaning (the baseball glove). If they have successfully determined that the homophone "bat" refers to the animal, they should ignore the glove picture. Which is exactly what happened -- for both typically developing 6-9 year-olds and 6-9 year-olds with Autism. This is a problem for the weak central coherence theory.
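For illustration only (this is my sketch, not the authors' analysis; the picture labels and the sample data are invented), the standard way to quantify this kind of priming effect in a visual-world study is to compute, for each trial, the proportion of eyetracking samples that land on the competitor picture:

```python
def competitor_proportion(samples, competitor="glove"):
    """Proportion of eyetracking samples on the competitor picture.

    `samples` is one picture label per eyetracking sample (e.g.,
    recorded every few milliseconds after the homophone is heard).
    """
    if not samples:
        return 0.0
    return sum(1 for s in samples if s == competitor) / len(samples)

# A constraining context ("fed the bat") should yield few looks to the
# glove; a less constraining context ("saw the bat") may yield more.
# These sample sequences are made up for illustration.
constraining = ["bat", "bat", "bat", "bat", "glove", "bat", "bat", "bat"]
neutral = ["bat", "glove", "glove", "bat", "glove", "bat", "glove", "bat"]

print(competitor_proportion(constraining))  # 0.125
print(competitor_proportion(neutral))       # 0.5
```

Comparing these proportions across groups (typically developing vs. Autism) is what lets you ask whether both groups use context to suppress the irrelevant meaning.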

Autism and prosody

In the same session, the Snedeker Lab presented work on prosody and Autism. This study, described by Becky Nappa, looked at contrast stress. Consider the following:

(1) "Look at the blue house. Now, look at the GREEN..."

What do you expect to come next? If you are like most people, you think that the next word is "house". Emphasizing "green" suggests that the contrast between the two sentences is the color, not the type of object to be looked at. If, instead, the color word is not stressed:

(2) "Look at the blue house. Now, look at the green..."

You don't know what is coming up, but it's probably not a house.

Atypical prosody is a diagnostic of Autism, at least according to some diagnostic criteria. That is, people with Autism often use prosody in unusual ways. But many of these folk have, as I pointed out above, general language difficulties. What about the language-intact Autism population? Here, the data has been less clear. There is still some unusual production of prosody, but that doesn't mean that they don't understand prosody.

Nappa and Snedeker tested children's understanding of contrastive stress. While typically-developing children performed as expected (interpreting contrastive stress as meaning a new example of the same type of object will be described), highly verbal children with Autism performed exactly opposite: they expected a new type of object for (1) and the same type of object for (2).

A second study looked at given/new stress patterns. Compare:

(3) Put the candle on the table. Now put the candle on the counter.
(4) Put the candle on the table. Now put the CANdy on the counter.

In general, if you are going to re-mention the same object ("candle" in (3)), you don't stress it the second time around. When you are mentioning a new object -- especially if its name sounds similar to something you have already described -- you are likely to stress it. Here, interestingly, the ASD children were just as good as typically-developing children.

Nappa puts these two findings together and suggests that children with Autism have overgeneralized the stress pattern in (3-4) to cases like (1-2). In general, they think stressed words refer to something new.

Other Day 3 talks

There were other good talks on Day 3, but my notes always get more and more sparse as a conference goes on. Researchers from Johns Hopkins University (the speaker was Kristen Johannes) argued that "differences between child and adult spatial language have been previously attributed to underdeveloped conceptual representations" (this is a quote from the abstract). In particular, children use the preposition "on" in strange ways. The researchers argue that this is because children have an impoverished spatial vocabulary (there are a number of useful words they don't know) and, lacking those words, they over-apply "on" -- not so much because they conceptualize "on"-ness differently, but because they are, literally, at a loss for words. When you make adults describe spatial arrangements without using the fancy adult words they normally use, they end up over-applying "on" in much the same way kids do. (Here I am working from memory plus the abstract -- my notes, as I mentioned, are incomplete.)

Careful readers will notice that I haven't written about Day 2 yet. Stay tuned.

Boston University Conference on Language Development: Day 1

This year marks my 7th straight BUCLD. BUCLD is the major yearly language acquisition conference. (IASCL is the other sizable language acquisition conference, but meets only every three years; it is also somewhat more international than BUCLD and the Empiricist contingent is a bit larger, whereas BUCLD is *relatively* Nativist).

NOTE I'm typing this up during a break at the conference, so I've spent less time making these notes accessible to the general public than usual. Some parts may be opaque to you if you don't know the general subject matter. Feel free to ask questions in the comments.

Day 1 (Friday, Nov. 2)

What does eyetracking tell us about kids' sentence processing?

The conference got off to a great start with Jesse Snedeker's 9am talk, "Negation in children's online language comprehension" (for those who don't know, there are 3 talks at any given time; no doubt the other two 9am talks were good, but I wasn't at them). I was actually more interested in the introduction than the conclusion. Over the last 15 years, the Visual World Paradigm has come to dominate how we study children's language processing. Here is how I usually describe the paradigm to participants in my studies: "People typically look at what is being talked about. So if I talk about the window, you'll probably automatically look at the window. So we can measure what people look at as they listen to sentences to get a sense of what they think the sentence is about at any given time."

Snedeker's thesis was that we actually don't know what part of language comprehension this paradigm measures. Does it measure your interpretation of individual words or of the sentence as a whole? One of the things about language is that words have meanings by themselves, but when combined into sentences, new meanings arise that aren't part of any individual word. So "book" is a physical object, but if I say "The author started the book", you likely interpret "book" as something closer to an activity ("writing the book") than a physical object.

Because the Visual World Paradigm is used extensively by sentence-comprehension people (like me), we hope that it measures sentence comprehension, not just individual words. Snedeker walked through many of the classic results from the Visual World Paradigm and argued that they are consistent with the possibility that the Visual World Paradigm just measures word meaning, not sentence meaning.

She then presented a project showing that, at least in some cases, the Visual World Paradigm is sensitive to sentence meaning, which she did by looking at negation. In "John broke the plate", we are talking about a broken plate, whereas in "John didn't break the plate", we are not. So negation completely changes the meaning of the sentence. She told participants stories about different objects while the participants looked at pictures of those objects on a computer screen (the screen of an automatic eyetracker, which can tell where the participant is looking). For example, the story might be about a clumsy child who was carrying dishes around and broke some of them but not others (and so, on the screen, there was a picture of a broken plate and a picture of a not-broken plate). She found that adults and even children as young as three years old looked at the broken plate when they heard "John broke the plate" but at the not-broken plate when they heard "John didn't break the plate", and they did so very quickly ... which is what you would expect if eyetracking measures your current interpretation of the sentence rather than just your current interpretation of the individual words (in which case, when you hear the word "plate", either plate will do).

(This work was joint work with Miseon Lee -- a collaborator of mine -- Tracy Brookhyser and Matthew Jiang.)

The First Mention Effect

W. Quin Yow of Singapore University of Technology and Design presented a project looking at pronoun interpretation (a topic close to my heart). She looked at sentences in which adults typically interpret the pronoun as referring to the previous subject (these are not the so-called "implicit causality" sentences I discuss most on this blog):
Miss Owl is going out with Miss Ducky. She wants her bag. 
She found, as usual, a strong preference for "she" to refer to Miss Owl in this (and similar) sentences. There is one older study that did not find such a preference in children roughly 4-6 years old, but several other studies have found evidence of (weak) first-mention effects in such sentences, including [shameless self-plug] work I presented at BUCLD two years ago.

Yow compared monolingual English-speaking four year-olds and bilingual English-speaking four year-olds (their "other" language differed from kid to kid). While only the bilinguals showed a statistically significant first-mention effect, the monolinguals were just barely not above chance, and their performance was almost identical to the bilinguals'. While the first-mention effects she saw were weaker than what I saw in my own work, her kids were slightly younger (four year-olds instead of five year-olds).

The additional twist she added was that, in some conditions, the experimenter pointed to one of the characters in the story at the moment she uttered the pronoun. This had a strong effect on how adults and bilingual children interpreted the pronoun; the effect was weaker for monolingual children, but I couldn't tell whether it was significantly weaker (with only 16 kids per group, a certain amount of variability between groups is expected).

In general, I interpret this as more evidence that young children do have (weak) first-mention biases. And it is nice to have one's results replicated.

Iconicity in sign language

Rachel Magid, a student of Jennie Pyers at Wellesley College, presented work on children's acquisition of sign language. Some signs are "iconic" in that they resemble the thing being referred to: for instance, miming swinging a hammer as the sign for "hammer" (I remember this example from the talk, but I do not remember whether that's an actual sign in ASL or any other sign language). Spoken languages have iconic words as well, such as "bark", which both means and sort of sounds like the sound a dog makes. This brings up an important point: iconic words/signs resemble the things they refer to, but not perfectly, and in fact it is often difficult to guess what they refer to, though once it has been explained to you, the relationship is obvious.

The big result was that four-year-old hearing children found it easier to learn iconic than non-iconic signs, whereas three-year-olds did not. Similar results were found for deaf children (though if memory serves, the three-year-old deaf children were trending towards doing better with iconic signs, though the number of subjects -- 9 deaf three-year-olds -- was too small to say much about it).

Why care? There are those who think that early sign language acquisition -- and presumably the creation of sign languages themselves -- derives from imitation and mimicry (basically, sign languages and sign language acquisition start as a game of charades). If so, then you would expect those signs that are most related to imitation/mimicry to be the easiest to learn. However, the youngest children -- even deaf children who have learned a fair amount of sign language -- don't find them especially easy to learn. Why older children and adults *do* find them easier to learn still requires an explanation, though.

[Note: This is my interpretation of the work. Whether Magid and Pyers would endorse the last paragraph, I am not sure.]

Briefly-mentioned

Daniele Panizza (another occasional collaborator of mine) presented work done with a number of folks, including Stephen Crain, on 3-5 year-olds' interpretations of numbers. The question is whether young children understand reversals of entailment scales. So, if you say "John has two butterflies", that means that he does not have three, whereas saying "If John has two butterflies, give him a sticker" means that if he has two OR MORE butterflies, give him a sticker [NOTE: even adults find this "at least two" reading to be a bit iffy; the phenomenon is that they find the "at least two" reading much better in a downward-entailing context like a conditional than in a normal declarative]. Interestingly, another colleague and I had spent a good part of the last week wondering whether children that age understood this, so we were happy to learn the answer so quickly: they do.

In the next talk, Einat Shetreet presented work with Julia Reading, Nadine Gaab and Gennaro Chierchia also looking at entailment scales, but with scalar quantifiers rather than numerals. Adults generally think "John ate some of the cookies" means that he did not eat all of them (some = some but not all), whereas "John didn't eat all of the cookies" means that he ate some of them (not all = some). They found that six year olds also get both of these inferences, which is consistent with the just-mentioned Panizza study.

These studies may seem esoteric, but they get at recent theories of scalar implicature. Basically, theories of scalar implicature have been getting much more complex recently, suggesting that this relatively simple phenomenon involves many moving pieces. Interestingly, children are very bad at scalar implicature (even up through the early elementary years, children are much less likely to treat "some" as meaning "some but not all", so they'll accept sentences like "Some elephants have trunks" as reasonable, whereas adults tend to find such sentences quite odd). So the race is on to figure out which of the many component parts of scalar implicature is the limiting step in early language acquisition.

There were many other good talks on the first day; these merely represent those for which I have the most extensive notes. 

New experiment: Mind Reading Quotient

Language requires a lot of inference. Consider the following three conversations:

A: Are there lots of people at the party?
B: Well, most people have left already.

A: How long has the party been going on?
B: Well, most people have left already.

A: Is it a good party?
B: Well, most people have left already.

In each of these cases, B's statement literally means the same thing, but the interpretation is different. Explaining (a) why this should be the case, and (b) how people figure out the implicit meanings is a very active area of research in modern linguistics and psycholinguistics.

The Mind Reading Quotient


Basically, understanding conversations like the ones above seems to require a certain amount of "mind reading" -- that is, guessing what the speaker (B, in this case) means to say. If you've ever wondered "what did she mean by that?" you were engaged in this kind of mind reading.

I just posted a new experiment -- the Mind Reading Quotient -- which consists of several short tests of this kind of mind reading ability. A couple of the tests look specifically at trying to work out what somebody is saying. A couple of the tests look at similar skills in the non-linguistic domain.

My favorite of the non-linguistic tasks is a coordination game. Thomas Schelling won a Nobel Prize in part for pioneering work on the topic. He found that people are very good at guessing what another person is thinking under certain conditions. For instance, if you tell two people they must meet up in New York City -- but without communicating with each other in any way -- they are actually fairly likely to succeed. Most likely, they would both show up on the corner of Times Square (or in one of a very small number of likely locations). The Mind Reading Quotient includes several such problems.

The goal of this study in part is to get a sense of how good people are at such tasks. There are a lot of thought experiments out there, but not nearly enough data. I will also be looking to see if people who are better at one of these tasks are also better at the others -- that is, is there a single underlying "mind reading ability," or does each task require a separate set of skills?

Reports so far are that the experiment runs 20-25 minutes. Because this is broken up into 7 separate activities, it should seem faster than that. And a lot of the tasks are fun (at least, I think so). Plus, at the end of the experiment, you'll be able to see your scores on many of the different sub-tasks. In two cases (a vocabulary test and an empathy test), I also have percentile scores already worked out, so you can see how you compare to average.

Follow this link to the study.


---
For previous posts about pragmatics and other linguistic inferences, check out this one, this one and this one.

image CC by Ignacio Conejo.

Overnight data on lying and bragging

Many thanks to all those who responded to my call for data last week. By midnight, I had enough data to be confident of the results, and the results were beautiful. I would have posted about them here on Friday, but in the lead-up to this presentation, I did so much typing I burned out my wrists and have been taking a much-needed computer break.

The study looked at the interpretation of the word some. Under some conditions, people interpret some as meaning some but not all, but other times, it means simply not none. For instance, compare John did some of his homework with If you eat some of your green beans, you can have dessert. Changing some to some-but-not-all doesn't change the meaning of the first sentence, but (for most people) changes the interpretation of the second.

This phenomenon, called "scalar implicature" is one of the hottest topics in pragmatics -- a subdivision of linguistic study. The reasons for this are complex -- partly it's because Ira Noveck and his colleagues turned out a series of fascinating studies capturing a lot of people's attention. Partly it's because scalar implicature is a relatively easily-studied test case for several prominent theories. Partly it's other reasons.

Shades of meaning

On most theories, there are a few factors that determine whether some gets interpreted as some-but-not-all. The usual intuition is that part of why we assume John did some of his homework means some-but-not-all is that if it were true that John did all of his homework, the speaker would have just said so ... unless, of course, the speaker doesn't know whether John did all his homework, or does know but has a good reason to obfuscate.

At least, that's what many theorists assume, but proving it has been hard. Last year, Bonnefon, Feeney & Villejoubert published a nice study showing that people are less likely to interpret some as some-but-not-all in so-called "face-threatening" contexts -- that is, when the speaker is being polite. For instance, suppose you are a poet and you send 10 poems to a friend to read. Then you ask the friend what she thinks, and she says, "Some of the poems need work." In this case, many people suspect that the friend actually means all of the poems need work, but is being polite.

The study

In this quick study, I wanted to replicate and build on Bonnefon et al's work. The experiment was simple. People read short statements and then answered a question about each one. The first two statement/question pairs were catch trials -- trials with simple questions and obvious answers. The small number of participants who got those wrong were excluded (presumably, they misunderstood the instructions or simply weren't paying attention).

The critical trial was the final one. Here's an example:
Sally: 'John daxed some of the blickets.'
'Daxing' is a neutral activity, neither good nor bad.
Based on what Sally said, how likely is it that John daxed ALL the blickets?
As you can see, the sentence contained unknown words ('daxing', 'blickets'), and participants were presented with a partial definition of one of them (that daxing is a neutral activity). The reason to do this was that it allowed us to manipulate the context carefully.

Each participant was in one of six conditions. Either Sally said "John daxed some...," as in the example above, or she said "I daxed some..." Also, "daxing" was described as either a neutral activity, as in the example above, or a negative activity (something to be ashamed of), or a positive activity (something to be proud of).
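As a sketch of the design (my own reconstruction, not the actual experiment code; the labels are invented), the six conditions are just the cross of the speaker manipulation and the valence of "daxing":

```python
import itertools

# Hypothetical reconstruction of the 2 x 3 between-subjects design:
# speaker (first vs. third person) crossed with the social valence
# of "daxing".
speakers = ["I", "John"]
valences = ["neutral", "negative", "positive"]
conditions = list(itertools.product(speakers, valences))

def assign_condition(participant_id):
    # Simple round-robin assignment so conditions stay balanced.
    return conditions[participant_id % len(conditions)]

print(len(conditions))      # 6
print(assign_condition(0))  # ('I', 'neutral')
print(assign_condition(7))  # ('I', 'negative')
```

Crossing the factors this way is what lets you test the two effects separately: the effect of valence, and the effect of first vs. third person.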

Results


As shown in the graph, whether daxing was described as positive, negative or neutral affected whether participants thought all the blickets were daxed (e.g., that some meant at least some rather than some-but-not-all) when Sally was talking about her own actions ("I daxed some of the blickets").

This makes sense: if 'daxing' is something to be proud of, then if Sally had daxed all of the blickets, she'd say so. Since she didn't, people assume she daxed only some of them (far right blue bar in the graph). Whereas if daxing is something to be ashamed of, then even if she daxed all of them, she might prefer to say "I daxed some of the blickets" as a way of obfuscating -- it's technically true, but misleading.

Interestingly, this effect didn't show up if Sally was talking about John daxing blickets. Presumably this is because people think the motivation to brag or lie is less strong when talking about a third person.

Also interestingly, people weren't overall more likely to interpret some as meaning some-but-not-all when the sentence was in the first person ("I daxed..."), which I had predicted would be the case. As described above, many theories assume that some should only be interpreted as some-but-not-all if we are sure the speaker knows whether or not all holds. We should be more sure when the speaker is talking about her own actions than about someone else's. But I didn't find any such effect. This could be because the theory is wrong, because the effect of using first person vs. third person is very weak, or because participants were at floor already (most people in all 6 conditions thought it was very unlikely that all the blickets were daxed, which can make it hard to detect an effect -- though it didn't prevent us from finding the effect of the meaning of 'daxing').

Afterword

I presented these data at a workshop on scalar implicature that I organized last Thursday. It was just one experiment of several dozen included in that talk, but it was the one that seemed to have generated the most interest. Thanks once again to all those who participated.

--------------
Bonnefon, J., Feeney, A., & Villejoubert, G. (2009). When some is actually all: Scalar inferences in face-threatening contexts. Cognition, 112(2), 249-258. DOI: 10.1016/j.cognition.2009.05.005

What isn't said

"Last summer, I went to a conference in Spain."

Technically, all you learned from that sentence is that there was a conference in Spain and that I traveled to it from some other location that isn't Spain. That's what the sentence literally means.

If you know that I live in Boston, you probably assumed that I flew to Spain, rather than take a boat. You're probably confident that I didn't use a transporter or walk on water. You probably also assumed that the conference is now over. All these things are true, but they weren't actually in what I said.

The Communication Game

This presents a problem for understanding communication: a lot is communicated that is not said. A lot of the work I do is focused on trying to figure out not just what a sentence means, but what is communicated by it ... and that is the focus of the newest experiment on GamesWithWords.org.

In The Communication Game, you'll read one person's description of a situation (e.g., "Josh went to a conference in Spain"). Then, you'll be asked to decide whether, based on that description, you think another statement is true. Some will be obviously true ("Josh went to a conference"), some probably true ("Josh went to the conference in Spain by plane"), some clearly false ("Josh went to the conference in Spain by helicopter"), and some hard to tell ("Josh enjoyed the conference in Spain more than the conference in Boston").

Scientifically, what we're interested in is which questions are easier to get right than others. From that, we'll get a sense of what people's expectations are. Part of what makes this a game is that the program keeps score, and you'll find out at the end how well you did.

The problem with studying pragmatics

(live-blogging Xprag)

In his introduction, Kai von Fintel tells an anecdote that I think sums up why it is sometimes difficult to explain what it is we do. Some time ago, Emmon Bach wrote a book for Cambridge University Press on if/then conditionals. The copy-editor sent it back, replacing every use of "if and only if" with a simple "if," saying the "and only if" was redundant.

As it turns out, although people often interpret "if" as meaning "if and only if," that's simply not what the word means, despite our intuitions (most people interpret if you mow the lawn, I'll give you $5 as meaning if and only if you mow the lawn...).

Part of the mystery, then, is explaining why our intuitions are off. In the meantime, though, explaining what I do sometimes comes across as trying to prove the sky is blue.

Summa smack-down

(live-blogging Xprag)

Dueling papers this morning on the word "summa" as compared with "some of." Judith Degen presented data suggesting "summa the cookies" is more likely to be strengthened to "some but not all" than is "some of the cookies." Huang followed with data demonstrating that there is no such difference. Everybody seems to agree this has something to do with the way the two studies were designed, but not on which way is better.

I am more convinced by Huang's study, but as she is (1) a lab-mate, (2) a friend, and (3) sitting next to me as I write this, I'm probably not a neutral judge.

Speaker uncertainty

Arjen Zondervan just presented a fascinating paper with the acknowledged long title "Effects of contextual manipulation on hearers' assumptions about speaker expertise, exhaustivity & real-time processing of the scalar implicature of or." He presented two thought-provoking experiments on exhaustivity & speaker expertise, but of primary interest to me was Experiment 3.

An important debate in the field has centered around whether scalar implicatures depend on context. A couple years ago, Richard Breheny & colleagues published a nice reading-time experiment whose results were consistent with scalar implicatures being computed in some contexts but not others. Roughly, they set up contexts something along the following lines:

(1) Some of the consultants/ met the manager./ The rest/ did not manage/ to attend.
(2) The manager met/ some of the consultants./The rest/ did not manage/ to attend.

Participants read the sentences one segment at a time (the '/' marks the boundaries between segments), pressing a key when they were ready for the next segment. For reasons that may or may not be clear, it was thought that there would be an implicature in the first sentence but not in the second, making "the rest" fairly unnatural in the second sentence, and this resulted in subjects reading "the rest" more slowly in (2) than in (1).
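To make the method concrete, here is a minimal sketch of a self-paced reading trial (mine, not the researchers' code; the function names are invented, and the segmentation follows the example above):

```python
import time

# Toy self-paced reading trial: segments appear one at a time, and
# the time between key presses is recorded as the reading time for
# each segment.
sentence = ["Some of the consultants", "met the manager.",
            "The rest", "did not manage", "to attend."]

def run_trial(segments, wait_for_keypress):
    """Return a list of (segment, reading_time_in_seconds) pairs.

    `wait_for_keypress` is any blocking function that returns when
    the participant presses a key (a stub can stand in for testing).
    """
    times = []
    for seg in segments:
        start = time.perf_counter()
        # In a real experiment, the segment would be displayed here.
        wait_for_keypress()
        times.append((seg, time.perf_counter() - start))
    return times
```

The analysis then compares reading times at the critical region ("The rest") across conditions like (1) and (2).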

This was a nice demonstration and was, I think, the first study of scalar implicature to use an implicit measure rather than just asking participants what they think a sentence means, which has certain advantages. But there were a number of potential confounds in the stimuli in this and the two other experiments they ran. Zondervan fixed some of these confounds, re-ran the study, and got the same results.

I was interested because, in collaboration with Jesse Snedeker, I have also re-run the Breheny study and also got that basic result. However, Zondervan and Breheny both also got longer reading times for the scalar term (e.g., 'some') in the condition where there is a scalar implicature. Both take this as evidence that calculating an implicature is an effortful process. In a series of similar experiments using my own stimuli, I just don't get that part of the result. I am fairly convinced this is due to differences in our stimuli, but we're still trying to figure out why and what that might mean.

That said, the effect that all three of us get is, I think, the more important part of the data, and it's nice to see another replication.

Default computation in language

(Blogging Xprag)

This morning begins with a series of talks on scalar implicature. This refers to the fact that "John ate some of the cookies" is usually interpreted as meaning "some but not all of the cookies." Trying to get this post written during a 5-minute Q&A prevents me from proving that "some" does not simply mean "some but not all," but in fact it is very clear that "some" means "some and possibly all." The question, then, is why and how people interpret such sentences as meaning something other than what they literally mean.

The most interesting moment for me so far has been a question by Julie Sedivy during the first Q & A. A popular theory of scalar implicature argues that the computation of "some = some-but-not-all" is a default computation. A number of experiments showing that such computation is slow have been taken by some as evidence against a default model. Sedivy pointed out that saying a computation is done by default doesn't require that the computation be fast, so evidence about speed of computation can't be taken as evidence for or against a default-computation theory.

Liveblogging Experimental Pragmatics

This week I am in Lyon, France, for the 3rd Experimental Pragmatics meeting. I had plans to live-blog CUNY and SRCD, neither of which quite happened, but I'm giving it a go for Day 2 of Xprag, and we'll see how it goes.

Pragmatics, roughly defined, is the study of language use. In practice, this tends to mean anything that isn't semantics, syntax or phonology, though the division between semantics and pragmatics tends to shift as we learn more about the system. Since pragmatics has perhaps been studied more extensively in philosophy & linguistics, the name of the conference emphasizes that it focuses on experiments rather than just theory.

More to follow

The power of because

To ask for a dime just outside a telephone booth is less than to ask for a dime for no apparent reason in the middle of the street.
-Penelope Brown & Stephen Levinson, Politeness

The opening quote seems to be true, but it raises the question of why. An economist might say a gift of 10 cents is a gift of 10 cents: you are short 10 cents no matter what the requester's reason. So why should it matter?

The power of because?
Empirically, in a well-known experiment, Ellen Langer and colleagues showed that 95% of people standing in line to use a copy machine were willing to let another person cut in line as long as the cutter offered a reason, even if that reason was inane (e.g., "because I have to make copies").

The explanation given by Langer and colleagues was that people are primed to defer to somebody who provides a reason. Thus, the word "because" can, essentially in and of itself, manipulate others. On this account, we give money not only to people who need it to make a phone call, but to anybody who offers any reason at all.

I haven't been able to find the original research paper -- it seems to have been reported in a book rather than in a published article -- so I don't know for sure exactly what conditions were used. However, none of the media reports I have read (such as this one) mention perhaps the most important control: a condition in which the cutter gives no excuse and does not use the word "because."

What are other possible explanations?
Another possible explanation is that people are simply reluctant to say 'no,' especially if the request is made in earnest.

There are a couple reasons this could be true. People might be pushovers. They might also simply have been taught to be very polite.

What strikes me as more likely is that most people avoid unnecessary confrontation. Confrontation is always risky: it can escalate into a situation where somebody gets hurt. Certainly, violent confrontations have started over less than conflicting desires to use the same copier.

Speculation

None of these speculations, however, explain the opening quote. Perhaps there is an answer out there, and if anybody has come across it, please comment away.

Become a Phrase Detective: A new, massive Internet-based language project

A typical speech or text does not consist of a random set of unrelated sentences. Generally, the author (or speaker) starts talking about one thing and continues talking about it for a while. While this tends to be true, there is typically nothing in the text that guarantees it:

This is my brother John. He is very tall. He graduated from high school last year.
We usually assume this is a story about a single person: someone who is tall, a recent high school graduate, named John, and the speaker's brother. But it could very well be about three different people. Although humans are very good at telling which part of a story relates to which other part, it turns out to be very difficult to explain how we know. We just do.
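One way to make the ambiguity concrete: a coreference analysis links each mention in a text to an entity, and the "John" story supports more than one linking. Here is my own toy sketch (the entity IDs and mention labels are illustrative, not from any real annotation scheme):

```python
# Each mention in the three-sentence story gets linked to an entity ID.
mentions = ["my brother John", "He (is very tall)", "He (graduated)"]

# Reading 1: all three mentions pick out the same person.
one_person = {m: "entity-1" for m in mentions}

# Reading 2: three different people -- grammatically just as possible.
three_people = {
    "my brother John": "entity-1",
    "He (is very tall)": "entity-2",
    "He (graduated)": "entity-3",
}

# Nothing in the text itself rules out the second reading;
# human annotators (or algorithms) must choose between them.
print(len(set(one_person.values())), len(set(three_people.values())))
```

The point of annotation projects is to record which of these linkings human readers actually choose.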

This is a challenge both to psychologists like myself, as well as to people who try to design computer programs that can analyze text (whether for the purposes of machine translation, text summarization, or any other application).

The materials for research

A group at the University of Essex put together an entertaining new Web game called Phrase Detectives to help develop new materials for cutting-edge research into this basic problem of language. Their project is similar to my ongoing Dax Study, except that theirs is not so much an experiment as a method for developing the stimuli.

Phrase Detectives is set up as a competition between users, and the result is an entertaining game that you can participate in more or less as you choose. Other than its origins, it looks a great deal like many other Web games. The game speaks for itself, and I recommend that you check it out.

What's the point?

Their Wiki provides some useful details as to the purpose of this project, but as it is geared more towards researchers than the general public, it could probably use some translation of its own. Here's my attempt at translation:
The ability to make progress in Computational Linguistics depends on the availability of large annotated corpora...
Basically, the goal of Computational Linguistics (and the related field, Natural Language Processing) is to come up with computer algorithms that can "parse" text -- break it up into its component parts and explain how those parts relate to one another. This is like a very sophisticated version of the sentence diagramming you probably did in middle school.

Developing and testing new algorithms requires a lot of practice materials ("corpora"). Most importantly, you need to know what the correct parse (sentence diagram) is for each of your practice sentences. In other words, you need "annotated corpora."

...but creating such corpora by hand annotation is very expensive and time consuming; in practice, it is unfeasible to think of annotating more than one million words.
One million words may seem like a lot, but it isn't really. One of the complaints about one of the most famous word frequency corpora (the venerable Francis & Kucera) is that many important words never even appear in it. If you take a random set of 1,000,000 words, very common words like a, and, and the take up a fair chunk of that set.
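To see how quickly function words eat up a corpus, here is a quick illustration (the passage and counts are my own toy example, not from Francis & Kucera):

```python
from collections import Counter

def top_words(text, n=3):
    """Return the n most frequent word tokens in a text."""
    words = text.lower().split()
    return Counter(words).most_common(n)

# Even in a short made-up passage, "the" and "and" dominate,
# crowding out the content words a researcher actually cares about.
passage = ("the cat sat on the mat and the dog lay by the door "
           "and a bird watched the cat and the dog")
print(top_words(passage))  # [('the', 6), ('and', 3), ('cat', 2)]
```

Scale that up to a million words and the rare-but-important vocabulary still gets only a thin slice.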

Also, consider that when a child learns a language, that child hears or reads many, many millions of words. If it takes so many for a human who is genetically programmed to learn language, how long should it take a computer algorithm? (Computers are more advanced than humans in many areas, but in the basic areas of human competency -- vision, language, etc. -- they are still shockingly primitive.)

However, the success of Wikipedia and other projects shows that another approach might be possible: take advantage of the willingness of Web users to collaborate in resource creation. AnaWiki is a recently started project that will develop tools to allow and encourage large numbers of volunteers over the Web to collaborate in the creation of semantically annotated corpora (in the first instance, a corpus annotated with information about anaphora).
This is, of course, what makes the Web so exciting. It took a number of years for it to become clear that the Web was not just a method of doing the same things we always did but faster and more cheaply, but actually a platform for doing things that had never even been considered before. It has had a deep impact in many areas of life -- cognitive science research being just one.

The science of flirting and teasing


Flirting appears to be a universal -- and, I would venture, innate -- human behavior. It is so universal that just how odd many aspects of it are often goes unnoticed.

One of the more bewildering aspects of flirting is the degree to which it involves -- on the surface, at least -- insulting one another. This is summed up rather unironically on a dating-tips website (check out this article as well):

"From the outside, teasing seems to be a twisted pleasure: affectionate and sort of insulting all at once. Teasing is a very articulate way of winning a person's attraction. It actually helps bring people closer."

Huh?

Something about this analysis seems right, but the "why" is very unconvincing. Teasing works because it draws attention and brings people closer together? Teasing also leads to bar fights and school shootings.

What gives?

Teasing to reduce social space
Part of an answer appears in Penelope Brown & Stephen Levinson's classic 1978 Politeness: Some universals in language use. On page 72 of the second edition, they note in passing that "a criticism, with the assertion of mutual friendship, may lose much of its sting -- indeed ... it often becomes a game and possibly even a compliment."

I'm not completely sure where they were going with that, but one possible interpretation is that there are things that can be said between friends but not between strangers (criticism, for instance). So when you criticize somebody, you are either being offensive or asserting friendship. If the criticism is done in the right tone under the right circumstances, it comes across as an assertion of friendship.

Of course, the balance can be hard to maintain and it's easy to foul up.

I don't think this is a complete explanation by any means, but there seems to be something right about it. I'm just beginning to read more in this area of pragmatics, so hopefully I'll have more to add in the future. If anybody is more familiar with this line of research and has something to add, comment away!

New research on understanding metaphors

Metaphors present a problem for anybody trying to explain language, or anybody trying to teach a computer to understand language. It is clear that nobody is supposed to take the statement, "Sarah Palin is a barracuda" literally.


However, we can imagine that such phrases are memorized like any other idiom or, for that matter, any word. Granted, we aren't sure how word-learning works, but at least metaphor doesn't present any new problems.

Clever Speech

At least, not as long as it's a well-known metaphor. The problem is that the most entertaining and inventive language often involves novel metaphors.

So suppose someone says "Sarah Palin is the new Harriet Miers." It's pretty clear what this means, but it seems to require some very complicated processing. Sarah Palin and Harriet Miers have many things in common. They are white. They are female. They are Republican. They are American. They were born in the 20th Century. What are the common characteristics that matter?

This is especially difficult, since in a typical metaphor, the common characteristics are often abstract and only metaphorically common.

Alzheimer's and Metaphor

Some clever new research just published in Brain and Language looked at comprehension of novel metaphors in Alzheimer's Disease patients.

It is already known that AD patients do reasonably well on comprehending well-known metaphors. But what about new metaphors?

Before I get to the data, a note about why anybody would bother troubling AD patients with novel metaphors: neurological patients can often help discriminate between theories that are otherwise difficult to distinguish. In this case, one theory is that something called executive function is important in interpreting new metaphors.

Executive function is hard to explain and much about it is poorly understood, but what is important here is that AD patients are impaired in terms of executive function. So they provide a natural test case for the theory that executive function is necessary to understand novel metaphors.

The results

In this study, AD patients were as good as controls at understanding popular metaphors. But while control participants were also very good at novel metaphors, AD patients had marked difficulty. This may suggest that executive function is important in understanding novel metaphors and gives some credence to theories built around that notion.

This still leaves us a long way from understanding how humans so easily draw abstract connections between largely unrelated objects to produce and understand metaphorical language. But it's another step in that direction.


-----
Amanzio, M., Geminiani, G., Leotta, D., & Cappa, S. (2008). Metaphor comprehension in Alzheimer’s disease: Novelty matters. Brain and Language, 107(1), 1-10. DOI: 10.1016/j.bandl.2007.08.003

Common knowledge

Language is based on common knowledge. This is true in a trivial sense: If I say

Cats are mammals.

Your ability to interpret that sentence relies on our common knowledge that the word cat refers to a furry domestic animal that meows. Likewise, I only believe that the sentence will be successful in communicating with you based on my belief that you know what a cat is. 


Common knowledge and inference

Language requires common knowledge in a much more subtle way as well. Suppose I say:

I am going to Paris tomorrow.

Your ability to interpret this sentence correctly depends on your being able to correctly assign meaning to tomorrow. Consider the fact that the sentence means different things spoken on different days. For us to successfully communicate about tomorrow, we must have interpreted it the same way and know that we have interpreted it the same way.

Notice that the word I and even the word Paris have the same problem.

It actually gets worse, since some communication requires the even more stringent concept of mutual knowledge. Suppose I ask my wife whether she has fed the cats today. Technically, she could respond "yes" as long as she has fed at least two cats today. But of course, I am asking whether she fed our cats, and I assume she will understand that's what I mean.

Now suppose she just answers "yes." For me to interpret this as meaning she fed our cats, I have to assume she knows that I was referring to our cats. Of course, for her to be confident that I will correctly interpret her response, she has to assume I assume that she assumes that I originally asked about her feeding our cats.

And so on.

Certain knowledge?

In their highly influential book Relevance, Sperber and Wilson argue that common knowledge cannot exist (actually, they talk about "mutual knowledge," which is something slightly different, but the differences aren't important here):

"Mutual knowledge must be certain, or else it does not exist; and since it can never be certain it can never exist." (p. 20)

Why do they think mutual knowledge can never be certain? Because, in a strict philosophical sense, it can't be. I can never be certain my wife knows I'm talking about our cats, and she can't be certain that I am referring to our cats. Probabilities get multiplied. So if confidence at each step is 90%, my confidence that she knows that I know that she knows that I know that she knows I'm referring to our cats is only 53%.
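The arithmetic here is just repeated multiplication. A two-line sketch (the 90%-per-step figure is, of course, just an assumption for illustration):

```python
def nested_confidence(p, depth):
    """Confidence in a belief nested `depth` levels deep
    ('I know that she knows that I know...'), if each added
    level holds independently with probability p."""
    return p ** depth

# Six levels of 90% confidence, as in the cats example:
print(round(nested_confidence(0.9, 6), 2))  # 0.53
```

Each added level of "X knows that Y knows..." shaves off another 10%, so certainty decays geometrically, which is exactly Sperber and Wilson's point.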

Sperber and Wilson use a much-expanded version of this argument to claim that mutual knowledge just doesn't exist and can't play a role in language, beyond perhaps giving the basic meaning of basic words like cat.

Are we certain philosophers?

A potential problem with their argument is that they assume people are only certain when certainty is justified. This is clearly not the case.



In recent talks, Steven Pinker has presented evidence that, at least in some circumstances, people really do act as if they believe in mutual knowledge. Pinker is interested in indirect speech, so his study involved innuendo. Suppose John says to Mary, "Would you like to come up to my apartment for a nightcap?"

How certain are you that John is proposing sex? Most people are fairly certain. 

How certain are you that Mary knows that John is proposing sex? Most people are a little less certain. 

How certain are you that John knows Mary knows John is proposing sex? Certainty drops again.

etc.

Now, change the scenario. What if John is particularly crass and says to Mary, "How would you like to go back to my apartment and have sex?"

How certain are you that John is proposing sex? That Mary knows John is proposing sex? That John knows that Mary knows that John is proposing sex? Most people remain certain no matter how far out the question is extended.

Notice that, at least in theory, Sperber & Wilson's argument should still apply. Nobody should be completely certain: Mary could have misheard; John might have a really odd idiolect. But people don't seem to be fazed.

Does mutual knowledge exist?

Well, at least sometimes. But I'm not completely sure how this affects Sperber & Wilson's argument. They weren't talking just about indirect speech, but about a much broader range of phenomena. They were arguing against theories that invoke mutual knowledge right and left, so it still remains to be seen whether mutual knowledge is such a pervasive phenomenon.

What is knowledge?

There are a number of good reasons to want a definition for knowledge. For instance, you might be a lexicographer. Or you might be a philosopher, wondering what knowledge is.

Either way, you're out of luck, because knowledge turns out to be a tricky beast.

Know vs. Believe

The easiest way to start is to compare know with believe. What is the difference between:

I believe it's Friday.

and

I know it's Friday.

The latter is more certain, but that's not all. It's possible to believe it's Friday on a Thursday. It's not possible to know that it's Friday on a Thursday. So we might be tempted to define

knowledge = true belief

That's not going to be enough, though. Suppose John just woke up from a coma. He knows he was in a coma, and he hasn't seen a calendar. Still, his intuition tells him it's Friday. Can he say he knows that it's Friday?

Well, he can say it. But even if it turns out that today really is Friday, we still would be uncomfortable saying John knows that it's Friday, unless we believe in ESP or some similar phenomenon.

Similarly, I might say that I know Barack Obama will be the next president of the United States. Even if it turns out that I am right and Obama does become the next president, it's a little weird to say that I knew it. It seems better to say I strongly believed it.

So we might try the following definition:

knowledge = justified true belief

The idea being that it only counts as knowledge if I have sufficient evidence.

Unfortunately, that won't work, either, though it took some fancy philosophizing to prove it. Consider the following example.

Suppose I am watching the Red Sox play the Yankees. Unbeknownst to me, there has actually been an electrical outage at Fenway, so the cameras aren't working. NESN quickly substitutes a tape of a previous game in which the Red Sox played the Yankees, but I don't realize it.

In this rebroadcast, the Red Sox beat the Yankees. At the same time as I am watching the taped game, the Red Sox are actually beating the Yankees. So if I then say, "Today, the Red Sox beat the Yankees," my statement is true (the Red Sox really did beat the Yankees) and justified (I have every reason to believe what I am saying), but still it seems very strange to say that I know that the Red Sox beat the Yankees.
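The progression of definitions, and where the baseball-game case breaks the second one, can be written compactly. In rough epistemic notation (my own sketch, not from any of the accounts discussed):

```latex
% First attempt: knowledge as true belief
K(p) \;\equiv\; B(p) \wedge p
% The coma example shows this is too weak, so add justification:
K(p) \;\equiv\; B(p) \wedge p \wedge J(p)
% Gettier-style cases (like the taped baseball game): a proposition
% can be believed, true, and justified, yet still not known
\exists p :\; B(p) \wedge p \wedge J(p) \wedge \neg K(p)
```

The last line is what the rebroadcast example establishes: all three conjuncts hold, but intuition refuses to grant knowledge.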

Where does this leave us?

You might try to save justified true belief by fiddling with justified, but most philosophical accounts I've seen just stop there and claim there is no definition. I am inclined to agree, and this is just one more reason to suspect that words just don't have definitions.

As I've pointed out before, Greg Murphy has a pretty good explanation of why it makes sense that words don't have definitions. The original post is here, but in short, words are used to distinguish objects, but it is always possible to come up with a new object (or idea) that is midway between two words -- that is, fits both and neither, just as the baseball game example above seems to fit both knowledge and belief and neither.

I find this pretty convincing, but if he is right, it raises the following question: why do we think words have crisp definitions? Even more, why do we really want words to have crisp definitions? It seems generations of philosophers would have saved a lot of time.

Illegal Philosophy

One of the most famous thought problems from the philosophy of language in the latter half of the 20th century turns out to have legal ramifications. To illustrate that what is meant is not always the same thing as what is said, H. Paul Grice created a hypothetical letter of recommendation for a would-be professor of philosophy. There are many variants of this letter around these days (it's a popular example). Here is one:

To whom it may concern:

Jones dresses well and writes grammatical English.

Sincerely,
Professor So-and-so

That is what is said. What is meant is clearly that Jones is no good at philosophy. Explaining in a rigorous fashion how we come to that conclusion has occupied a number of researchers for half a century and no doubt will continue to do so for some time. This is despite the fact that such letters appear to be illegal in California (the state in which Grice worked).

In a footnote to a recent book chapter, the linguist Laurence Horn cites a court case (Randi M. v. Livingston Union School District, 1995 Cal. App. LEXIS 1230 (Dec. 15, 1995)), in which it was found that "a statement that contains only favorable matters and omits all reference to unfavorable matters is as much a false representation as if all the facts stated were untrue."

The moral of this story may be that philosophy is great, but check with a lawyer before trying to apply it to the real world. 

Talk about the extraordinary

In a chapter from The Handbook of Social Psychology, 4th Edition, Gilbert notes that people have
an odd habit and a not so odd habit. The not so odd habit is that they describe behavior that is driven by extraordinary dispositions as having been driven by extraordinary dispositions. The odd habit is that they describe behavior that is driven by ordinary dispositions as having been caused by external agencies.
This may sound like a lot of unnecessary jargon, but he immediately breaks it down (Gilbert is an extremely clear and entertaining writer and definitely worth reading):
When one runs screaming from a baby rabbit, one usually owes the bystanders an explanation. Such explanations are acceptable when they are couched in terms of one's extraordinary dispositions--for example, "I have a morbid fear of fur" or "I sometimes mistake baby rabbits for Hitler." On the other hand, when one retreats from a hissing rattlesnake, one does not typically explain that behavior in terms of ordinary dispositions ("I dislike being injected with venom" or "I feel death is bad") but rather, in terms of the stimuli that invoked them ("It shook its thing at me").
This turns out to be part of a broader phenomenon in language, as Gilbert notes. People tend to avoid saying the obvious and focus on the unusual (Grice was probably the first to notice this). This might seem like a very reasonable thing to do, but there is nothing necessary about it. That is, it's easy to imagine people who are as likely to state the obvious as the non-obvious (and there in fact seem to be some people like that, at least in sitcoms). 

What I think is the most interesting part of this, though, is not that people tend to state the non-obvious, but that we as listeners expect speakers to do so. That suggests either some very sophisticated learning or evolution. (The fact that young children are terrible at distinguishing the obvious from the non-obvious in conversation doesn't mean that it is a learned skill; it could be a genetically programmed behavior that simply comes online later in development, just like puberty.)

Pauses in speech

While I was off guest-posting elsewhere, I talked Kristina Lundholm, a PhD student in linguistics and also a speech & language pathologist, into guest-posting here. She knows a great deal more about speech errors and disfluencies than I do. This is her follow-up to my post about errors in speech:

For a long time, spoken language was seen as inferior to written language, and hesitations and pauses were seen as flaws in the production of speech. This way of looking at language and communication (as championed by, e.g., Chomsky) assumes that there is an ideal way to deliver an utterance.

Today, we know that pauses, hesitations, and the like are a vital part of communication, not some sort of unnecessary interruption in the speech signal. Natural conversation is studied by conversation analysts, sociolinguists, sociologists, anthropologists, and more.

To understand why pauses are important, imagine trying to have a conversation with someone who never pauses. Would you get anything said? Pauses are necessary in communication; we need to breathe, think, and leave gaps where another person can take over. Pauses also make it easier for the listener to process and understand what we are saying.

Even when the people engaged in conversation understand the importance of pausing and make pauses in all the right places, that might not be enough. An agreement on pause length is key to a successful conversation. Pause length tolerance, i.e., how long a pause you tolerate before the silence seems unbearable and you feel you have to say something, varies between languages, sociolects, dialects, etc.

Because of this, you may experience what a friend of mine went through when he moved south to study: he had a high tolerance for pauses, which meant that if a pause in conversation was short, he didn't take his turn, since he felt he would be interrupting. As a result, he felt that his new friends, who never let him talk, were quite rude, while they thought he must be terribly shy, since he rarely spoke. The effect of different pause lengths has been verified by, for example, Scollon & Scollon and Deborah Tannen.

Pauses also influence how we perceive what is being said. If someone asks you for a ride home, your answer will be interpreted as more negative if you take longer to respond, even if the answer itself is positive; see e.g. Roberts et al.

The location of the pause is also meaningful: if the speaker does not want to be interrupted, it is wise to pause within a syntactic unit, for example before an important content word: "I want a (pause) green sweater". If you pause between syntactic units, chances are that your conversation partner will think you're finished and will start talking.

Now, about those uuuums and eeeers… There are a bunch of different names for those small units in communication: filler words, fillers, filled pauses, hesitation phenomena, disfluencies etc. I prefer the term "filled pause" since I classify them as a sort of pause. Filled pauses have a lot of functions in spoken conversation. One is to signal to other persons that "even though I'm not saying anything particular right now, I don't want anyone else to take over". It can also mean "difficult question, I have to think about that". Or a number of different things, depending on position, prosody, context etc. Quite a lot of research has focused on filled pauses in spoken dialogue, but I don't know if anyone has investigated filled pauses in written communication – well, if not, someone should!

So, in conclusion: pauses are not just important; they may make or break a conversation. And in linguistics today, the spherical cow is not so spherical anymore, but is seen as the irregularly formed creature it is.