Field of Science

Do monkeys have grammar?

The short answer is "no." But a new study in PLOS One suggests that some monkey calls may be morphologically complex. Here is the relevant passage:
Some calls were given to a broad, others to a narrow range of events. Crucially, “krak” calls were exclusively given after detecting a leopard, suggesting that it functioned as a leopard alarm call, whereas the “krak-oo” was given to almost any disturbance, suggesting it functioned as a general alert call. Similarly, “hok” calls were almost exclusively associated with the presence of a crowned eagle (either a real eagle attack or in response to another monkey's eagle alarm calls), while “hok-oo” calls were given to a range of disturbances within the canopy, including the presence of an eagle or a neighbouring group (whose presence could sometimes be inferred by the vocal behaviour of the females).
The authors take this as evidence that "oo" is a suffix of some sort that modifies the meaning of the preceding part of the call.

Maybe. Whether two words that contain similar sounds share a morpheme or not is an old problem in linguistics, and one that is actually hard to solve. I cut my teeth on such questions as whether the /t/ in swept is the same past-tense affix that we see in dropped. Notice that both words end in the sound "t" -- but, then, so does "hat," and probably nobody thinks the /t/ in "hat" is a suffix.

One crucial test the authors would need to do would be to show that this "oo" suffix can be used productively. If this was a study of humans, you might teach them a new word "dax," which refers to a chipmunk, and then see if "dax-oo" was interpreted as "warning, there's a chipmunk!"

All of which is not to say that this isn't an intriguing finding, but we're a ways from talking monkeys yet.

What does a professor do all day?

Readers of this blog will remember Dick Morris's strange claim that professors don't do anything except teach -- it's not even clear he thinks they have to prepare for class or grade papers. This raised a considerable backlash on the Web, in which many pointed out that teaching is, for many professors, only one pursuit (and often not the main one).

Around the same time, but apparently independently, a professor of psycholinguistics, Gerry Altmann, listed how he had been spending his time. In the space of 2.5 weeks, he sent out 18 manuscripts for review (he's a journal editor), wrote 51 action letters (telling authors what decisions had been made), reviewed 7 NIH grants (interesting, since he works in the UK), and visited collaborators in Philly to discuss a new project (presumably part -- but not all -- of the 3677 miles he reports having flown).

Using Google Wave

I admit I'm pretty excited about Google Wave. I am currently involved in a fairly large collaboration. It's large in

  • the scale of phenomena we're trying to understand (essentially, argument realization)
  • the number of experiments (literally, dozens)
  • the number of people involved (two faculty, three grad students, and a rotating cast of research assistants, all spread across three universities)
One problem is keeping everybody up-to-date and on the same page, but an even more serious problem is that it's difficult to keep track of everything we've discovered. In the last few weeks, we've moved much of our discussion into Wave, and I think I already have a better sense of some of the issues we've been dealing with.

Collaborative Editing?

If you are interested in Wave, the best thing is to simply check out their website or one of the many other websites describing how to use it. The main idea behind it is to enable collaborative document editing -- that is, a document that can be edited by a group of people simultaneously.

Anyone who has worked on a group project is familiar with the following problem: only one person can work on a document at a given time. For instance, if I send a paper to a co-author for editing, I can't work on the paper in the meantime or risk a real headache when trying to merge the separate edits later.

Google Docs and similar services have allowed real-time editing for a while, but although these services allow simultaneous edits, they weren't really designed for collaboration. For instance, it's difficult to record who made what changes. Similarly, they don't allow comments (sometimes you don't want to change the text, you just want to say you don't understand it). If one person makes a change and you want to undo it, good luck. Google Wave has these and other features.

Using the Wave

Currently, we're using Wave as a collective notebook, where we record everything we've learned in the course of our research. This keeps everyone up-to-date. It also allows us to discuss issues without requiring meetings (a good thing, since we're at different universities).

For instance, recently I read a claim that a certain grammatical structure that is impossible in English happens to be possible in Japanese. I noted this in a section of our document, and attached a comment: "Yasu, Miki: can you check this?" As it happens, two members of our project are native Japanese speakers. In a series of nested comments, they discussed the issue, came to a conclusion (that the paper I had read was wrong), and then we finally deleted the comments and replaced the whole section with a summary of the discussion and conclusions.

In other sections, we've included the methods for experiments we're designing, commenting on and ultimately editing the methods until everyone agrees.

Needed Improvements

At the moment, Wave is very much in beta testing and is underpowered. Although you can embed files and websites, there's no way to embed, say, a spreadsheet -- a major inconvenience for us, since much of our work involves making lists of verbs and their properties. Whenever I want the most updated list, I need to email whoever was working on it last, which isn't ideal.

Of course, we could use Google Docs, but it has the problems listed above (no way of commenting, no track-changes, no archive in case we decide to undo a change). Presumably these kinds of features will be added to Wave in the future.

What isn't said

"Last summer, I went to a conference in Spain."

Technically, all you learned from that sentence is that there was a conference in Spain and that I traveled to it from some other location that isn't Spain. That's what the sentence literally means.

If you know that I live in Boston, you probably assumed that I flew to Spain, rather than take a boat. You're probably confident that I didn't use a transporter or walk on water. You probably also assumed that the conference is now over. All these things are true, but they weren't actually in what I said.

The Communication Game

This presents a problem for understanding communication: a lot is communicated that is not said. A lot of my work focuses on figuring out not just what a sentence means, but what is communicated by it ... and that is the focus of the newest experiment on

In The Communication Game, you'll read one person's description of a situation (e.g., "Josh went to a conference in Spain"). Then, you'll be asked to decide whether, based on that description, you think another statement is true. Some will be obviously true ("Josh went to a conference"), some probably true ("Josh went to the conference in Spain by plane"), some clearly false ("Josh went to the conference in Spain by helicopter"), and some hard to tell ("Josh enjoyed the conference in Spain more than the conference in Boston").

Scientifically, what we're interested in is which questions are easier to get right than others. From that, we'll get a sense of what people's expectations are. Part of what makes this a game is the program keeps score, and you'll find out at the end how well you did.

First-borns don't trust you

New research in Animal Behavior suggests that first-borns are "less trustful and reciprocate less."

Participants in the experiment played a basic economics game: Let's say Mary and Sally are playing. Both are given $30 to start with. Mary is told she can "invest" as much as she wants with Sally by giving that money to Sally. However much she gives to Sally is automatically tripled, and then Sally can decide how much to give back to Mary as a dividend. So let's say Mary gives Sally $10. That amount is then tripled, so Sally ends up with $30 + $10*3 = $60, while poor Mary only has $20 left. This doesn't seem very fair, so Sally is allowed to give any amount of money back to Mary.

Mary, then, has a difficult choice. She earns the most money if she gives all $30 to Sally, and then Sally gives the entire investment ($90) back. However, she also risks Sally absconding with the money, leaving Sally with $120 and Mary with nothing. So Mary's safest choice is to give Sally nothing.

So what happens?

The results of the study were that if Mary was a first-born, she gave $3.7 less to Sally than if she was a latter-born (actually, the game used "monetary units," not dollars, so it's not clear how this translates). So first-borns trust less. Interestingly, if Sally was a first-born, she also returned less money to Mary, meaning first-borns reciprocate less as well.

It's not entirely clear that these are two separate effects. If Mary was predicting Sally's behavior by thinking about what she herself would do in Sally's shoes, you would expect first-borns to give less money (since they themselves would return less of it) and for latter-borns to give more (since they themselves would return more).

One nice aspect of this study is that they controlled for the fact that latter-borns tend to come from larger families (each family has only one first-born but potentially many latter-borns). This is unfortunately not something that birth-order researchers typically do, which is problematic since people from smaller and larger families differ in many ways (including parental income, education, etc.).

Why are first-borns so mean?

It's interesting that people differ based on birth-order, but I think what most people really want to know is why. Why were first-borns less trusting and why did they reciprocate less?

The authors didn't really address this question, but their data do suggest one possibility. They report that if Mary gave Sally $10, Sally gave, on average, $18 back, leaving Mary with $38 and Sally with $42.

Now, let's let Jenny and Susan play. Jenny decides to give Susan $30 -- all her money! On average, Susan would give $43 back. This is more money than Sally gave back, but keep in mind that Jenny also gave up more money to start with, so in the end Jenny walks home with $43 -- which is hardly better than the $38 Mary got. On the other hand, Susan cleans up, netting $77.

It's true that Jenny's generosity did get her an extra $5...but it also got Susan an extra $35, which she chose not to share. Since this unfairness tends to rankle people, Jenny may in fact be more unhappy at the end of the game than is Mary, despite the extra $5 (obviously, Susan would feel pretty good about the outcome).
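The payoff arithmetic above is easy to double-check with a quick sketch. The `trust_game` function below is my own illustration of the game as described, not the study's procedure (the study used abstract monetary units rather than dollars):

```python
# Payoffs in the investment ("trust") game described above.
# Illustrative sketch only; numbers follow the blog's examples.

def trust_game(endowment, invested, returned, multiplier=3):
    """Return (investor_total, trustee_total) after one round."""
    assert 0 <= invested <= endowment
    pot = invested * multiplier              # the investment is tripled
    assert 0 <= returned <= endowment + pot  # trustee may return any part of her holdings
    investor = endowment - invested + returned
    trustee = endowment + pot - returned
    return investor, trustee

# Mary invests $10 and Sally returns the average $18:
print(trust_game(30, 10, 18))  # (38, 42)
# Jenny invests everything and Susan returns the average $43:
print(trust_game(30, 30, 43))  # (43, 77)
```

Running both cases makes the point in the text concrete: full investment nets the investor only a few dollars more than cautious investment, while the trustee captures most of the gain.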

Getting back to birth-order, first-borns were less trusting -- that is, they invested less money. As we see from the analysis above, they were making the right decision; investing wasn't really going to pay off for them. They also returned less money from the investment, which may show that they are "bad reciprocators," or it might just show that after years of dealing with their younger siblings, they've come to the hard truth: people -- even younger siblings -- are greedy jerks. They returned about as much money as they would expect to get were the positions reversed.

Would the world be better off without first-borns?

For anyone wondering whether the game would turn out better if only latter-borns played: even though latter-borns do return more money from the investment, they don't return much more. Invest $30 with a latter-born, and in the end you'll have $47 and s/he will have $73 -- better than if you invest with a first-born ($36.7 and $83.3), but still unfair.

Illustration from the Creative Commons

Heroic participants

I just glanced through the data for the new experiments. Things are looking good. Two heroic participants rated over 300 puns each. One of them stuck with it and only tuckered out after 1199 puns. The enthusiasm is much appreciated!

Mini monitors

A few years ago, I read a study claiming that the larger the screen on your computer, the more productive you are. I believe it. I frequently edit one paper while having several others open, and it's convenient to have them all open simultaneously. I write my experiments in Flash, which is almost impossible to use without two screens, due to all the various floating toolbars.

This is fine if I'm in the Lab, where I have an iMac with an attached 20-inch screen, plus another 24-inch screen I attach to my MacBook Pro. Unfortunately, I'm not always in my office. I'm sometimes at Tufts, where I am collaborating on some ERP work, and sometimes I work from home. But I don't have a bunch of extra monitors at home, and even if I did, I don't have anywhere to put them.

Enter Mimo USB monitors. These 7-inch monitors weigh less than a pound and pack flat, so they're easy to get out of the way when you aren't using them. Seven inches is just about the right size to fit some -- though not all -- of my extraneous toolbars.

Which isn't to say it's a perfect solution. The screen is too small to host a second document window. In order to fit more, everything is annoyingly small -- and I like small font. The brightness and contrast can't be meaningfully adjusted. Those drawbacks make it annoying to use Flash or Dreamweaver ... annoying but possible. Prior to getting my Mimo, I pretty much refused to work on Flash or Dreamweaver if I couldn't do it in my office.

Of course, what I really want is a virtual monitor that could be projected onto any space (such as my glasses). In the meantime, Mimo works pretty well.

How do I feel about open-access journals? The president wants to know.

The White House is requesting comments as it formulates a policy on open-access publication, at least according to a recent email from AAAS:
The Obama Administration is seeking public input on policies concerning access to publicly-funded research results, such as those that appear in academic and scholarly journal articles. Currently, the National Institutes of Health require that research funded by its grants be made available to the public online at no charge within 12 months of publication. The Administration is seeking views as to whether this policy should be extended to other science agencies and, if so, how it should be implemented.
The comments are being collected in phases. Right now (Dec. 10 - 20) they are asking
Which Federal agencies are good candidates to adopt Public Access policies? What variables (field of science, proportion of research funded by public or private entities, etc.) should affect how public access is implemented at various agencies, including the maximum length of time between publication and public release?
Next up (Dec. 21 - 31) is
In what format should the data be submitted in order to make it easy to search and retrieve information, and to make it easy for others to link to it? Are there existing digital standards for archiving and interoperability to maximize public benefit? How are these anticipated to change?
Finally (Jan. 1 - 7) they are interested in
What are the best mechanisms to ensure compliance? What would be the best metrics of success? What are the best examples of usability in the private sector (both domestic and international)?
I'm glad they are thinking seriously about these things.

Peer Review: An Editor Responds

As previously noted, a loosely-translated Hitlerian tantrum about unreasonable reviewers has been making the rounds. Among recent viewers is Gerry Altmann, the editor of Cognition. He is clearly waiting for someone to make a similar video about unreasonable authors. He makes a good case, and it's worth reading (scroll down to the post titled "Peer Review" on Nov. 27th), but as there are more authors than editors, I suspect it wouldn't get as many views on YouTube.

Keeping your brain in a computer

A researcher at UC-San Diego is methodically slicing and preparing one of the world's most famous brains (where "world" is narrowly defined as the world of cognitive science -- I suspect HM isn't a household name, though he was a huge anonymous celebrity among psychologists and neuroscientists for the last half-century). The hope is to make the data electronically available.

An interesting fact tucked in at the end of the article: making it possible to zoom down to the cellular level on each electronic copy of each slide will require 1-10 terabytes per slide. That's terabytes with a t. There are 2,400 slices of the brain (I'm not clear on whether all will become slides, but presumably a good fraction will), and the researcher eventually wants to expand the project to 500 brains.

This creates a serious storage issue.

It also brings up the question of building synthetic brains. If we need something on the order of 5,000 terabytes just to render a digital image of a brain, how much would we need to perfectly model one?
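To put rough numbers on the storage problem, here is a back-of-envelope sketch using only the figures quoted above (1-10 TB per slide, 2,400 slices, an eventual 500 brains); the exabyte conversion is just standard units:

```python
# Back-of-envelope storage estimate for digitized brain slices.
# The input figures come from the article quoted above.
slices_per_brain = 2400
tb_low = slices_per_brain * 1    # 1 TB per slide, low end
tb_high = slices_per_brain * 10  # 10 TB per slide, high end

print(f"one brain: {tb_low:,}-{tb_high:,} TB")  # one brain: 2,400-24,000 TB
# 1 exabyte = 1,000,000 TB
print(f"500 brains: {tb_low * 500 / 1e6:.1f}-{tb_high * 500 / 1e6:.1f} EB")  # 1.2-12.0 EB
```

Even the low end for 500 brains is over an exabyte, which makes "serious storage issue" an understatement.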

Granted, there aren't 5,000 trillion neurons in a brain (there are about 100 billion). But one neuron doesn't equal one bit -- a neuron's behavior is complex, and its pieces do things. As we don't fully understand what neurons do, I take it as uncontroversial that we don't know how many 'bits' make up a neuron.

This is one of the complications with predicting the future of neuroscience. Our knowledge and technology are growing exponentially, but we don't know how far there is to go.

The science of puns

Puns are jokes involving ambiguous words. Some are hilarious. Some are only groan-worthy. (If anyone wants to argue the first one isn't a pun, comment away.)

Wait -- aren't all words ambiguous?
The thing about puns that pops out at me is that pretty much every word is ambiguous. My parade case for an unambiguous word -- really a phrase -- is George Washington. But, of course, more than one person has been named "George Washington," so even that one is ambiguous. In any case, the vast majority of words are ambiguous -- including the words word and pun.

Most of the time we don't even notice that a word is ambiguous, because our brains quickly select the contextually-appropriate meaning. For a pun, this selection has to fail in some way, since we remain aware of the multiple appropriate meanings.

Studying puns
Since I study how context is used to understand language, puns and other homophones are interesting to me. I'm also currently starting a new EEG experiment looking at the processing of ambiguous words in folks on the autism spectrum. But one question that I'm interested in is why some puns are funnier than others.

As a first step in that work, I need a bunch of puns rated for how funny they are. There are websites out there that have these types of rankings, but they don't use the sorts of controls that any peer-reviewed journal would demand. So I am running a rank-that-pun "experiment" at called Puntastic! It goes without saying that once I've got enough puns rated, I'll post what I learn here (including the funniest and the worst puns). This may take a few months, though, as I'm getting ratings for over 500 puns.

(picture above isn't just a picture -- it's a t-shirt)

Science in the classroom

Science Magazine recently profiled a new website that links up scientists and classroom teachers in order to improve science education. It looks like a great project, and definitely something needed.

Pronoun Sleuth

George Washington always refers to George Washington. The pronoun he, on the other hand, can refer to George Washington, Thomas Jefferson, John Adams, or anyone else who is male (not just presidents). So, unlike proper names which have the same meaning regardless of context, pronouns have almost no meaning without context.

Just saying that we figure out who he and she refer to based on context begs the question of how we do it. What aspects of context matter? The fact that today is Tuesday? Whether it is sunny or rainy? (This isn't a straw man -- both of these things can matter in the right circumstance; I leave it to the reader as an exercise to come up with examples.)

In one of our newest experiments, we took sentences with pronouns and systematically obscured aspects of the context to see if people could still figure out who the pronoun refers to. If they can, then that aspect of the context didn't matter. Play Pronoun Sleuth by clicking here.

Making science relevant

One thing about being a scientist interested in how people think -- and in how different groups of people think differently -- is that you constantly notice differences between how scientists and non-scientists think.

For instance, scientists like evidence. You think how you parent your children affects how they turn out? Maybe. It's a testable question. (For my position on the question and on the evidence, read here.) One mark of a great researcher is the ability to spot untested assumptions (one of my favorite examples being Marc Hauser's work on language evolution). Of course, some scientists are less constrained by evidence than others, and my avowedly non-scientist wife is as empirically minded as they come. But the generalization seems to hold (though I admit I don't know of any well-controlled studies).

Where am I going with this?

Does making science relevant help science education?

In the last issue of Science Magazine, Hulleman and Harackiewicz point out that

Many educators and funding agencies share the belief that making education relevant to students’ lives will increase engagement and learning. However, little empirical evidence supports the specific role of relevance in promoting optimal educational outcomes, and most evidence that does exist is anecdotal or correlational.
I smell an experiment. It was surprisingly simple: students in a high school system in the Midwest were randomly assigned to write essays (as many as 8 during the semester, with an average of 4.7 essays/student) that either just summarized what they were learning in science or tried to apply what they learned to their own lives. The students who wrote essays relating science to their lives got better science grades and reported more interest in science at the end.

Motivation or depth of processing?

The authors discuss this in terms of motivation: students who see the relevance of science are more interested in it. (They also seem to imply that it improves confidence.) I'm interested in understanding the mechanism better (a professor in my department complains that Science articles are necessarily too short to give necessary experimental detail and theoretical motivation): were these students more interested because of the personal relevance per se, or was it simply that thinking about relevance required engaging with and reinterpreting the material? After all, science isn't just a list of facts (or shouldn't be, anyway). Facts are boring; interpretations are what make science science.

But that is, as they say, academic. As the authors point out, this was a relatively simple method that improved grades and interest in science. Assuming, of course, that it replicates, this is a valuable contribution.

And it suggests that making science relevant improves education outcomes. If that seemed obvious from the get-go, it's worth remembering how many obvious truths have turned out to be wrong. Occasionally proving the obvious is an occupational hazard, but still worth the effort.

Games with Words: New Web lab launched

The new Lab is launched (finally). I was a long way from the first to start running experiments on the Web. Nonetheless, when I got started in late 2006, the Web had mostly been used for surveys, and there were only a few examples of really successful Web laboratories (like the Moral Sense Test, FaceResearch, and Project Implicit). There were many examples of failed attempts. So I wasn't really sure what a Web laboratory should look like, how it could best be utilized, or what would make it attractive and useful for participants.

I put together a website known as Visual Cognition Online for the lab I was working at. I was intrigued by the possibility of running one-trial experiments. Testing people involves a lot of noise, so we usually try to get many measurements (sometimes hundreds) from each participant, in order to get a good estimate of what we're trying to measure. Sometimes this isn't practical.
The best analogy that comes to mind is football. A lot of luck and random variation goes into each game, so ideally, we'd like each team to play each other several times (like happens in baseball). However, the physics of football makes this impractical (it'd kill the players).

Running a study on the Web makes it possible to test more participants, which means we don't need as many trials from each. A few studies worked well enough, and I got other good data along the way (like this project), so when the lab moved to MN and I moved to graduate school, I started the Cognition and Language Lab along the same model.
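The tradeoff between trials per participant and number of participants can be sketched with a toy simulation. All the numbers below are invented for illustration; the simplifying assumption is that each measurement is the true value plus independent noise, in which case what matters for precision is roughly the total number of measurements, however they are divided among participants:

```python
import random

def estimate(n_participants, trials_each, true_value=100.0, noise_sd=15.0, seed=0):
    """Average n_participants * trials_each noisy measurements of true_value."""
    rng = random.Random(seed)
    samples = [true_value + rng.gauss(0, noise_sd)
               for _ in range(n_participants * trials_each)]
    return sum(samples) / len(samples)

# 20 lab participants doing 100 trials each vs. 2,000 web participants
# doing a single trial each: the same total number of measurements,
# so the two estimates are comparably precise.
lab = estimate(20, 100, seed=1)
web = estimate(2000, 1, seed=2)
print(round(lab, 1), round(web, 1))
```

Real data are messier (people differ from one another, not just from trial to trial), but this is the basic logic behind trading many-trial designs for many-participant designs on the Web.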

Web Research blooms

In the last two years, Web research has really taken off, and we've all gotten a better sense of what it is useful for. The projects that make me most excited are those run by the likes of, Games with a Purpose, and Phrase Detectives. These sites harness the massive size of the Internet to do work that wasn't just impossible before -- it was frankly inconceivable.

As I understand it, the folks behind Games with a Purpose are mainly interested in machine learning. They train computer programs to do things, like tag photographs according to content. To train their computer programs, they need a whole bunch of photographs tagged for content; you can't test a computer -- or a person -- if you don't know what the correct answer is. Their games are focused around doing things like tagging photographs. Phrase Detectives does something similar, but with language.

The most exciting results from (full disclosure: the owner is a friend of mine, a classmate at Harvard, and also a collaborator) have focused on the development and aging of various skills. Normally, when we look at development, we test a few different age groups. An extraordinarily ambitious project would test some 5-year-olds, some 20-year-olds, some 50-year-olds, and some 80-year-olds. By testing on the Web, they have been able to look at development and aging from the early teenage years through retirement age (I'll blog about some of my own similar work in the near future).


This Fall, I started renovating the site in order to incorporate some of the things I liked about those other sites. The project quickly grew, and in the end I decided that the old name (Cognition and Language Lab) just didn't fit anymore. was born.

I've incorporated many aspects of the other sites that I like. One is simply to make the site more engaging (reflected, I hope, in the new name). It's always been my goal to make the Lab interesting and fun for participants (the primary goal of this blog is to explain the research and disseminate results), and I've tried to adopt the best ideas I've seen elsewhere.

Ultimately, of course, the purpose of any experiment is not just to produce data, but to produce good data that tests hypotheses and furthers theory. This sometimes limits what I can do with experiments (for instance, while I'd love to give individualized feedback to each participant for every experiment, sometimes the design just doesn't lend itself to feedback). Of the two experiments currently live, one offers feedback and one doesn't.

I'll be writing more about the new experiments over the upcoming days.

Obama & I

Geoff Nunberg has a fantastic Fresh Air commentary posted on his website about the political misuse of linguistic information. Pundits frequently use statistical information about language -- the frequency of the word I in a politician's speeches, for instance -- to editorialize about the politician's outlook or personality.

That is to say, pundits frequently misuse statistical information. Most of what they say on the topic is nonsense. Nunberg has the details, so I won't repeat them here. There is one segment worth quoting in full, though:

To Liberman, those misperceptions suggest that Will and Fish are suffering from what psychologists call confirmation bias. If you're convinced that Obama is uppity or arrogant, you're going to fix on every pronoun that seems to confirm that opinion.

Watch this Space, the successor to the Cognition and Language Lab, will (hopefully) be launched within the next week or so. The blog name itself has changed in advance. And, in fact, you will find that the URL already works, though it just takes you to the old site.

Why a new website? Among other reasons, I've been overhauling the website to make it more engaging and more fun, and the old name didn't really fit anymore. Plus, it was always hard to say, which is particularly egregious for a language-themed website.

More to come soon...

A Poorly-edited Editors' Handbook

Most psychology journals require that papers follow the American Psychological Association's style guide. This guidebook covers everything from the structure of the paper to the right way of formatting section headings, and it is updated every so often.

The sixth edition was released over the summer, and it seems it had to be recalled due to "errors and inconsistencies."

I haven't actually seen the 6th edition myself (I just bought the 5th edition a couple years ago and am not in a hurry to buy the new one). On the whole, it's a good manual and the rules make sense. However, reviewers will sometimes thank you for breaking the more frustrating rules, like the rule that charts and tables should be appended to the end of the manuscript -- not included in the body of the document. This probably made sense in the days of type-written manuscripts, but it makes modern electronic manuscripts very hard to read. Electronic documents are wonderful for many things, but the ease of flipping back and forth from one section to another is not one of them.

Hopefully the 6th edition fixed some of those outdated rules. I'll find out once the corrected version appears.

Changes in this blog

As I've mentioned in a previous post, I'm in the process of renovating the lab website. There will also be significant structural changes to this blog (probably a regular schedule for posting, for instance).

All this renovation is taking a considerable amount of time, and you may have noticed the lack of frequent posting. This will continue until the new site is launched, hopefully in the next month.

Magic babies

There's an interesting article today over at Slate (Why Babies Crave Magic) that features work from one of my favorite local labs.

Making Super-babies

Parenting advice is no doubt as old as time itself. There is good advice, and then there are myths.

The Walt Disney Company is, in a roundabout fashion, owning up to one myth, which is that their Baby Einstein videos make babies smarter. This has been a well-known myth in scientific circles -- the American Academy of Pediatrics recommends no videos of any type for children under 2. Controlled experiments are tough, since it's hard to assign children to either watch or not watch TV (this tends to correlate with parental factors), but a quick search found a conference paper showing that toddlers have difficulty learning words presented on television. That fits with what I hear from other language-development researchers: young children do not learn vocabulary from television (this isn't a literature I know well -- the youngest kids I study are 4 years old).

This brings up a myth about bilingualism. Many parents believe that raising a child bilingual makes them smarter. Some do this by having their children watch Spanish-language programming like Diego. This is likely a waste of time for two reasons: first, children typically do not learn a language if it makes up less than 20% of what they hear during a day. So a television program or two isn't going to do much good (again, citing other language researchers; I didn't see an obvious paper relating to this).

Second, though, the evidence that bilingualism makes a baby smarter is weak. The problem, again, is that controlled experiments are impossible. There is no way of randomly assigning toddlers to be bilingual or not. And bilingualism correlates with family (e.g., cultural and genetic) factors. As anyone who has spent time with a bilingual family knows, raising a child bilingual is a lot of work, and many parents don't bother. The parents who do are, by definition, not randomly distributed.

That said, there is a good reason to raise your children bilingual, even if it doesn't make them smarter: your children will be able to speak two languages! And that's pretty useful.

But if you want to make smarter babies, the best option I know of is to play with them more.

Vaccination and the Assault on Health

I had always thought that refusing to get a flu vaccination was relatively harmless masochism. Refusing to vaccinate one's own children, on the other hand, should probably be prosecuted as child abuse, but at least the negative consequences stay close to home.

Yesterday, however, I read two articles on vaccination. One in Slate looks at the risks the unvaccinated pose to people with immunity problems (she's unable to get childcare for her child, who is undergoing cancer treatment, because the risk of being around unvaccinated children is too high). If that seems like a parochial problem ("my kid doesn't have cancer; why should I worry about vaccination rates?"), the other article, appearing in Wired, is feature-length, and focuses on the anti-vaccine movement and the dangers it poses to the health of everyone.

Both note the rise in non-vaccination and the concomitant rise in outbreaks of the scourges of yesteryear. And they were scourges:
Just 60 years ago, polio paralyzed 16,000 Americans every year, while rubella caused birth defects and mental retardation in as many as 20,000 newborns. Measles infected 4 million children, killing 3,000 annually, and a bacterium called Haemophilus influenzae type b caused Hib meningitis in more than 15,000 children, leaving many with permanent brain damage...
But refusing to vaccinate is more than just a convenient way of decreasing the probability you'll have to pay for college (and that your neighbor's kid with leukemia will survive). This is because the un-vaccinated put the vaccinated at risk.

The Risk to Us All

As told in the Wired article, an unvaccinated 17-year-old Indiana girl picked up measles on a 2005 trip to Bucharest. When she returned, she went to a church gathering of 500 people. Of the 50 attendees who had not been vaccinated, 32 developed measles. Any adults who got measles had at least made the choice to take on that risk, but the children had not.

Even worse are the two people who had been vaccinated but nonetheless got sick. They had been responsible and protected themselves, but this reckless 17-year-old and her parents endangered their lives. First, though, three cheers for vaccines. Of the unvaccinated, 64% got sick. Of the vaccinated and those with natural immunity, only 0.8% got sick.
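To put those two attack rates side by side, here is a quick back-of-envelope comparison using the numbers quoted above (the 0.8% figure is taken as reported; the raw counts behind it aren't given):

```python
# Attack rates from the Indiana measles outbreak described above.
unvaccinated_sick, unvaccinated_total = 32, 50
attack_unvaccinated = unvaccinated_sick / unvaccinated_total  # 0.64, i.e., 64%
attack_vaccinated = 0.008                                     # 0.8%, as reported

relative_risk = attack_unvaccinated / attack_vaccinated
print(f"unvaccinated attendees were {relative_risk:.0f}x more likely to get sick")
# prints: unvaccinated attendees were 80x more likely to get sick
```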

But still, vaccines don't always work. Sometimes they don't take. Sometimes your immune response may have weakened (for instance, through aging). Or you might just have bad luck. A 2002 study in The Journal of Infectious Diseases determined that you were safer as an unvaccinated person in a well-vaccinated country than as a vaccinated person in a largely un-vaccinated country.

People who refuse vaccines aren't just risking themselves, and parents who refuse vaccines for their children aren't just risking their children, they are risking you and me.


What makes this even worse is that every baby is initially unvaccinated. Children have to reach a certain age in order to get vaccines. What protects babies is that everyone older is healthy (i.e., vaccinated). So adult vaccine-refuseniks made it through infancy partly thanks to everyone else getting vaccinated. But they aren't willing to give other babies the same chance.

Do people have the right to choose for themselves whether they want vaccines? Sure -- as long as they live on top of a mountain or on a deserted island away from contact with anyone else. Mandatory vaccination**, and now!

(**With medical exceptions, of course)

Why do so many homophones have two pronunciations?

An interest in puns has led me to start reading the literature on homophones. Interestingly, it appears that in the scientific literature "homophone" and "homograph" mean the same thing, which explains why there are so many papers about mispronouncing homophones. Here's a representative quote:

"...reports a failure to use context in reading, by people with autism, such that homophones are mispronounced (e.g., 'there was a tear in her eye' might be misread so as to sound like 'there was a tear in her dress')."

Sticklers will note that "tear in her eye" actually does involve a homophone (tier), but I don't think that's what the authors meant.

Readers of this blog know that I'm not a prescriptivist -- that is, I believe words mean whatever most speakers of a language think the words mean. So I'm not going to claim that these authors are misusing the word, since there seem to be so many of them. That said, it would be convenient to have one term for two words that share a pronunciation, and a different term for two words that are spelled the same way but pronounced differently.

Recruiting Laboratory Participants

I am in the process of revamping the Internet laboratory, as I'm trying to increase the number of participants. Some very successful websites recruit ~500/day. I have been averaging about 30/day -- still respectable, but it limits what I can do.

In this context, I read recent reports from the folks behind Phrase Detectives with interest. Phrase Detectives, it appears, gets slightly more traffic than I do. What I focused on was their method of advertising and how well it works. They noted that their traffic comes from the following sources:

direct: 46%
website link: 29%
search: 12%
Facebook advertisement: 13%

Then they looked at the bounce rate (the percentage of visitors who arrive at the home page and then scoot away) for each of these sources:

direct: 33%
link: 29%
search: 44%
Facebook advertisement: 90%

It appears that paid advertising -- the only one of these sources that actually costs money -- isn't worth much. In the end, only 4% of visitors who didn't bounce came through the paid advertisement.
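As a sanity check, one can combine the two sets of rounded percentages above to estimate each source's share of retained (non-bouncing) visitors. This is only a rough sketch from the rounded figures, since the raw visitor counts aren't given:

```python
# Share of traffic by source, and bounce rate per source, as quoted above.
traffic = {"direct": 0.46, "link": 0.29, "search": 0.12, "facebook_ad": 0.13}
bounce  = {"direct": 0.33, "link": 0.29, "search": 0.44, "facebook_ad": 0.90}

# Retained visitors per source = arrivals that did NOT bounce.
retained = {src: share * (1 - bounce[src]) for src, share in traffic.items()}
total = sum(retained.values())

for src in traffic:
    print(f"{src}: {retained[src] / total:.1%} of retained visitors")
```

On these rounded numbers the Facebook ad accounts for only a couple percent of retained visitors, in the same ballpark as the post's figure: almost nothing for the one source that costs money.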

Renovations at the Cognition and Language Lab

I am in the process of doing a complete overhaul of the Web-based laboratory. The site has been due for some editing for a while; the page about me still lists me as an "incoming graduate student," though I just started my third year.

More importantly, though, I want to make the website more interesting. Though I've collected some very good data, leading to two publications already with several more on their way, the experiments I'm currently interested in running require more participants. Right now I get about 30-40 participants a day. For the new experiments to work, I need closer to 100 per day.

Here is where you, the reader, come in. What do you think would make the site more interesting and the experiments more compelling? I am doing a few things already. First, you may have noticed there are lately more pictures on the website. The new experiments are all going to be game-like: participants will get back scores and, in some cases, see how they did compared to others. This has worked very well for sites like Games with a Purpose. I also admit that some of the experiments I've posted over the last few years have been pretty dry.

One last thing I'm considering doing is changing the name of the site to reflect the new brand. I had planned on, but someone just snagged that domain. I could still go with, but there is always the risk of confusion. What else might be a catchy name?

If you have any ideas about the domain name or any other aspect of the website, please leave a comment here or email me at

Measuring the Quality of a Scientific Paper

"Good" is a notoriously difficult word to define. A pretty common and reasonably uncontroversial definition of a good paper, though, is one that has significantly advanced human knowledge. The question is how do we measure that?

If the paper is in your field, you probably have a sense of the impact. But if it's outside of your field, it becomes trickier. A pretty good proxy is how many times the paper has been cited. Pretty much all the work in the study of language over the last 50 years has bounced off of Chomsky's ideas, and you can see this in the fact that his 1965 book Aspects of the Theory of Syntax has been cited 11,196 times...and that's only one of several very influential books.

It takes a few years for a paper or book to start getting citations, though, because it takes time for people to read it, not to mention the fact that a paper that is printed today was probably written anywhere from 1-3 years ago. So for a new paper, you can estimate how big an impact it will have by looking at the quality of the typical paper published in the journal in which the paper in question was published. This is usually measured by -- you guessed it -- how often papers in that journal are cited.
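The standard journal-level citation measure alluded to here is the "impact factor." A minimal sketch of how it is computed (the numbers below are made up for illustration):

```python
# Impact factor for year Y: citations received in year Y by items the
# journal published in years Y-1 and Y-2, divided by the number of
# citable items the journal published in Y-1 and Y-2.
def impact_factor(citations_in_year, citable_items_prev_two_years):
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 200 citable items over the previous two years,
# cited 600 times this year.
print(impact_factor(600, 200))  # prints: 3.0
```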

As a recent article in the Times points out, this gets tricky when you want to compare across fields. In some fields, authors meticulously cite everything that could possibly be relevant (linguistics is a good example). Other fields don't have as strong a citation tradition. Conversely, researchers in some fields traditionally work for years at a time on a project and sum up all their findings every few years in a single, high-impact publication (again, linguistics comes to mind). Other fields are more concerned with quickly disseminating every incremental advance. Then there is simply the size of the field: the more people in your field, the more people can cite you (developmental psychology journals tend to have low impact factors simply because developmental psychology is a very small world).

So by looking at citation rates, you might conclude that molecular biology is the most important field of modern thought, and mathematics and computer science are among the least. I'm a fan of molecular biology, but it's hard not to admit that molecular biology would be impossible without recent advances in mathematics and computer science; the reverse is not true.

Starting Assumptions

The idealized scientist might start by questioning everything and assuming nothing. However, one usually has to make starting assumptions to get things going. For instance, David Hume proved that the notion that science works at all is founded on the un-provable assumption that the future will conform to the past (i.e., if E = mc^2 held yesterday, it will hold again tomorrow).

Starting assumptions can get a bit less metaphysical though. Here is a very telling line in linguist David Pesetsky's influential Zero Syntax from 1995:

It follows from the hunch just described that hypotheses about language should put as small a burden as possible on the child's linguistic experience and as great a burden as possible on the biologically given system, which we call Universal Grammar (UG). Of course, the role of experience is not zero, or else every detail of language would be fixed along genetic lines. Nonetheless, given that linguistics tries to explain, the null hypothesis should place the role of experience as close to zero as possible.

In contrast, there has been a strong trend in psychology -- and folk science, for that matter -- to assume everything is learned and prove otherwise.

Ultimately, if science proceeds as it should, we'll all converge on the same theory somewhere in the middle. In the meantime, wildly divergent starting assumptions often unfortunately lead to folks simply talking in different languages.

A good example is a recent exchange in Trends in Cognitive Sciences. Waxman and Gelman had recently written an article about how children's assumptions about the world (they called these assumptions "theories") guide learning even in infancy. Sloutsky wrote a letter to complain that Waxman and Gelman had failed to explain how those assumptions were learned. Gelman and Waxman responded, in essence: "Who says they're learned?"

All three are intelligent, thoughtful researchers, so at the risk of simplifying the issue, Sloutsky's problem with the "innate theories" theory is that nobody has given a good characterization of how those theories are instantiated in the brain, much less how evolution could have endowed us with those innate theories. Sloutsky assumes learning unless proven otherwise.

However, Waxman and Gelman's problem with Sloutsky is that nobody has a good explanation -- even in theory -- of how you could learn anything without starting with some basic assumptions. At the very least, you need Hume's assumption (the future will conform to the past) to even get learning off the ground.

Both perspectives have their strengths, but both are also fatally flawed (which is not a criticism -- there aren't any perfect theories in cognitive science yet, and likely not in any science). Which flaws bother you the most depends on your starting assumptions.

Are college professors underworked?

According to Dick Morris, I've joined a cushy profession. Professors don't teach very much, which makes college expensive. He argues that by requiring faculty to work harder "approximating the work week the rest of us find normal" and holding down some administrative costs, the tuition can be cut in half!

Comments on The Choice sum up the reaction -- mainly, that strong opinions are easy to have if you have no clue what you are talking about. Most have focused on the ridiculous claim that faculty don't work very hard, presumably due to Morris's odd belief that the only time professors spend working is time spent in the classroom. Morris would presumably cringe at the claim that the only time he spends working is the time he is physically typing out an article.

Well, maybe not Morris. There's no evidence in this article, at least, that he spends any time doing research. But most faculty spend a lot of time doing research, preparing for class, grading, sitting on committees, meeting with students, etc. When I find one who works less than 50 hours a week, I'll ask her secret.

There are also some funny numbers. Morris argues faculty typically teach 5 courses per year, spending 18-20 hours in the classroom per week. If they were to teach 8 courses, they'd spend 24 hours in class per week. Increasing the number of courses by 60% seems to only increase hours by 20%-33%. Sounds like profitability through magical thinking.
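Just to make the mismatch in Morris's figures explicit, here is the arithmetic from the paragraph above:

```python
# Morris's figures: 5 courses/year at 18-20 classroom hours/week;
# 8 courses/year would supposedly mean 24 classroom hours/week.
course_increase = 8 / 5 - 1        # 0.60 -> 60% more courses
hours_increase_low = 24 / 20 - 1   # 0.20 -> only 20% more hours
hours_increase_high = 24 / 18 - 1  # ~0.33 -> at most 33% more hours

print(f"{course_increase:.0%} more courses, but only "
      f"{hours_increase_low:.0%}-{hours_increase_high:.0%} more classroom hours")
# prints: 60% more courses, but only 20%-33% more classroom hours
```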

There is one point that Morris could have made, though: some universities could be made cheaper by having faculty do no research and less preparation for class. This wouldn't necessarily be an ideal situation, but it would be cheaper. The question is whether it's worth the cost.

And now, on the radio

The radio show I discussed a couple weeks ago finally aired. I would have posted earlier, but I wasn't aware it had happened. My appearance is brief, but it was fun to do. The journalist (Michelle Elliot) does a very nice job of discussing birth order effects, so it is definitely worth listening to.

Coglanglab on Scientific American

I have an article on the Scientific American website (part of the Mind Matters blog) this week. It's on the relationship between language and thought. Check it out.

Speaking Chinese

People often talk about speaking 'Chinese,' as if there were a single language called 'Chinese.' There are a number of related Chinese languages, much as there are a number of related Romance languages.

It turns out the situation is worse than we thought, though. Linguists have been discovering new Chinese languages.

My First Radio Interview

In my brief career as a freelance travel & culture writer, I conducted a number of interviews. But I had never been interviewed for anything real myself until just now, when I finished a phone interview with a journalist who is considering writing about my birth order research.

Harvard being Harvard, many of my friends have been interviewed by multiple TV and radio shows, and there are periodically camera crews on my floor. But my lab's research is less media-friendly (no dancing parrots), so it's not something we normally deal with.

I admit the experience is somewhat disconcerting. I expect my birth order research to be controversial. And while there is really no point in publishing something that is then ignored, the one advantage of being ignored is nobody's likely to send angry emails, feel I misrepresented their findings, or criticize the methods or conclusions. So while I do seek out publicity for these findings (hence the blog, and also an upcoming article I'm writing for a mainstream science magazine), success in achieving that publicity is at least as worrisome as failure. So we'll see how this goes...

New in Developmental Research

Every year, the Harvard Laboratory for Developmental Studies (of which I am a part) sends out a newsletter to all the parents of the kids who have participated in our research studies. For every project conducted in the last year, the lead experimenter (usually a grad student or post-doc) writes up the results in layperson-friendly terms. This year's newsletter was just published. Check it out here.


Unrecognized Ambiguity

I'm preparing a speech for later today on unrecognized ambiguity. Many sentences are ambiguous, but we often fail to notice, because we know what we intend to say. This probably explains many of the (reportedly) real newspaper headlines I'm using in the talk, most of which are worth reading again even if you already know them:

Ten Commandments: Supreme Court says some OK, some not

Federal agents raid gun shop, find weapons

One-armed man applauds the kindness of strangers

Autos killing 110 a day; let's resolve to do better

Dr. Ruth to talk about sex with newspaper editors

Enraged cow injures farmer with ax

Eye drops off shelf

Iraqi head seeks arms

Juvenile court tries shooting defendant

Killer sentenced to die for second time in 10 years

Kicking baby considered to be healthy

Two soviet ships collide -- one dies

William Kelly was Fed Secretary

Kids make nutritious snacks

Milk drinkers are turning to powder

Does birth order affect who you are friends with? Results from a new study

In 1874, in preparing a demographic survey of English scientists, Francis Galton noticed a funny thing: nearly half of all English scientists were oldest or only sons. In the following 135 years, the notion that your birth order affects personality, intelligence, success in the world, and just about anything and everything else has become a mainstay in popular culture. A search on Amazon turned up over 10,000 books, including such titles as The Birth Order Book: Why You are the Way You Are and The Birth Order Connection: Finding and Keeping the Love of Your Life.

Are Birth Order Effects Real?

It may be surprising, then, that the scientific evidence for birth order effects is highly controversial. Perhaps as many studies have failed to find any effect of birth order as have found one (see Ernst & Angst's Birth Order: Its Influence on Personality), and many of the studies that do show such effects are hopelessly confounded. Take Galton's original study. In the 1800s, many scientists were men of independent wealth. Generally, the bulk of inherited wealth went to the oldest son. So there were just more oldest and only sons available to be scientists. There were probably also more younger sons in the military, but this shouldn't be taken as evidence that younger children are more violent.

Modern-day family size tends to be related to ethnicity and class. Poor families have more children than rich families, so being a middle child actually correlates with being poor. So if poor educational achievement correlates with being a middle child (which it probably does), that may just be because more middle children go to crumbling schools than do oldest or youngest children.

So if there are no birth order effects, what does that mean? Researchers like Steven Pinker (The Blank Slate) and Judith Rich Harris (The Nurture Assumption: Why Children Turn out the Way They Do) point out that if birth order effects are not real, this calls into question whether your home environment has any effect on how you turned out. If the existence or non-existence of siblings doesn't affect your personality, what chance is there that whether your parents read to you had an effect either? (They both go on to marshal additional evidence that the home environment has no lasting effects.)


Harris and Pinker's claims fly in the face of much of what we hear. Everybody knows that children who are read to do better in school, right? Children who were beaten by their parents beat their own children. Unfortunately, data like these run up against a very basic problem in science: correlation does not equal cause and effect.

For instance, it is almost certainly the case that the height of the youngest child in the family is correlated with the height of the oldest child of the family. I'm tall and my younger brothers are tall, too. But I can't take credit for making them tall. We happen to share the same growth-related genes.

Similarly, maybe the genes that lead a particular parent to be the sort of parent who reads to her child are also genes that make children do well in school. If this sounds implausible ("genes for reading to one's child?"), consider that there is a personality difference between people who like to read and those who don't. The former might read to their children more. And those children, inheriting the same book-worm genes, do well in school.

(If you don't believe behavior is partly heritable, The Blank Slate does a good job of summarizing the mountain of research showing that it is. You might also speculate on why cats act like cats, snakes like snakes, and humans like humans. It can't all be learned. My cats were raised by humans from birth, but they have yet to learn to talk, tie their shoes or clean their own litter box.)

Love, Marriage and Birth Order

One of the difficulties with the existing literature is that it tends to test specific claims about birth order (e.g., older children are more likely to be scientists). So if the experiment fails (the hypothesis is disproven), you only know that your specific idea about birth order is wrong. It doesn't mean that birth order doesn't have some other effect.

In trying to think of a very general test of birth order effects, I realized that a century of research has shown that like marries like. Spouses correlate on just about everything measurable, from physique to personality. Certainly, spouses aren't identical twins, but they tend to be more alike one another than you'd expect if people married random strangers off the street. So if birth order affects personality, and people choose spouses based on having similar personalities, spouses should (on average) have similar birth orders.
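To see why this logic works, here is a toy simulation (all parameters are invented for illustration; this is not the study's actual model). Birth order gives a small nudge to a "personality" score, people pair assortatively on personality, and the resulting pairs share a birth order slightly more often than chance:

```python
import random

random.seed(0)

BIRTH_ORDERS = [1, 2, 3]
EFFECT = 0.5  # hypothetical nudge of birth order on a personality score

def person():
    order = random.choice(BIRTH_ORDERS)
    # Personality is mostly other factors (the gaussian noise),
    # plus a small birth-order effect.
    personality = EFFECT * order + random.gauss(0, 1)
    return order, personality

# Crude assortative matching: sort everyone by personality and pair neighbors.
people = sorted((person() for _ in range(20000)), key=lambda p: p[1])
pairs = [(people[i][0], people[i + 1][0]) for i in range(0, len(people), 2)]

same_order = sum(a == b for a, b in pairs) / len(pairs)
print(f"partners share a birth order in {same_order:.0%} of pairs "
      f"(chance baseline ~33%)")
```

The effect on any one person is weak, but across many pairs the correlation is detectable: exactly the kind of small-but-real signal the study was looking for.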

In a paper to be published in the Journal of Individual Psychology, my co-authors and I found that not only do spouses have similar birth orders, but best friends do, too. The effect is not huge, but it probably shouldn't be. Nobody thinks birth order rigidly determines personality, and neither do people rigidly marry others with identical personalities. What we expected was a weak correlation.

Unlike much previous research, this effect can't be due to socio-economic factors. Although middle children are more common in poorer families or in families of certain ethnic groups, there is no ethnic group -- can be no ethnic group -- in which older children are more common than younger children. And yet oldest-oldest and youngest-youngest pairings were more common than oldest-youngest pairings.

Personality or Intelligence?

Aficionados will notice there is another possible explanation. There is now convincing evidence that birth order is related to intelligence. Since spouses' IQs tend to correlate, it could be that IQ drove this effect. Whether this is a counter-hypothesis or not depends on whether you think intelligence is completely unrelated to personality, which seems to be more of a definitional question than a scientific one. But it does raise the question of which aspects of behavior are affected by birth order.

Either way, I think these results maintain some hope for the idea that we are at least partly shaped by how we grew up. It's important to recognize that the effects we found in this study were not large, though of course it would be surprising if they were. Nobody thinks birth order rigidly determines personality, and neither do people always marry clones of themselves. So the correlation was never likely to be large.

Many Thanks

I want to conclude this post by thanking the participants in this study, many of whom may be among the readers. One of the surveys included in this paper was conducted through the Cognition and Language Laboratory on the Web. And of course, I invite anyone who is interested in helping out in future research to participate in the new experiments we have running on the site.

The New Cognition and Language Laboratory

The lab's website is in the process of being updated. Everything should be easier to find than it was previously, particularly the results from previous experiments (now labeled the 'findings' page).

Do Americans Value Science? New Numbers

A recent Pew survey finds that more Americans think scientists contribute a lot to society (70%) than do doctors (69%), engineers (64%), the clergy (40%), journalists (38%), artists (31%), lawyers (23%) or business executives (21%). The apparent statistical tie between scientists and doctors may be explained, however, by the fact that many people seem to conflate the two. When asked for an important scientific achievement, about half referred to a biomedical advance.

The survey contains a great deal of information and is worth reading in full. A few other things stand out to me: 49% of scientists, but only 17% of the public, think American science is the best in the world. By objective measures, American science is the best in the world. True, this has been rapidly changing, which may explain scientists' pessimism. But why is the public unaware of America's huge historical scientific advantage on the world stage? At the very least, this indicates poor PR on the part of US science.

Semantics Summer Reading List

This summer, I organized two book clubs involving people in the Laboratory for Developmental Studies. Many of us have difficulty finding time to read, and the hope was that forming book clubs would create peer pressure to read some foundational material. The project has been more or less successful for different people, but at the very least I have managed to do a lot of reading.

The book clubs were organized around language meaning (some combination of the fields of semantics and pragmatics). Based on mutual interest, we have read or are reading the following books:

Levinson (2000) Presumptive Meanings
Quine (1960) Word and Object
Vygotsky (1934) Thought and Language
Fodor (1975) The Language of Thought
Heim & Kratzer (1998) Semantics in Generative Grammar
Tomasello (2003) Constructing a Language

Keep in mind of course that some obvious books are not on this list because we've already read them (for instance, we read Pinker's Learnability & Cognition and Jackendoff's Semantic Structures last winter) and some are not on the list because we plan to read them in the near future (Pustejovsky's The Generative Lexicon is a popular choice for this coming Fall).

That said, for those of you in the field, if you were to read 6 books on semantics & pragmatics over the summer, what would you read?

All Italians Smoke

Although most behavior experiments are conducted in the lab, it's nice to be reminded occasionally that it's possible to conduct experiments in the human's natural environment...such as a nightclub. Italian scientists studied responses to requests for a cigarette at three nightclubs in central Italy.

That scientists would study Italians' smoking behaviors comes as no surprise to anyone who has been reading semantics recently. It seems that half the example sentences in the papers I read involve some variation on "all Italians smoke."

Calling all 12 year olds

I've been analyzing data from the Memory Test. The response to that experiment has been fantastic, so I'm able to look at performance based on age, from about 14 years old to about 84 years old. Interestingly, by 14 years old, people are performing at adult levels. I have a few kids in the 10-13 range, but not quite enough. It would be nice to know at what age people hit adult competency.

So...if you or someone you know is in that age range, I'd like a few more participants in the near future. I should actually be able to put up a description of the results relatively quickly in this case, should I get enough participants.

The Value of Experiments

I have been reading Heim & Kratzer's Semantics in Generative Grammar, which is an excellent introduction to formal semantics. On the whole, I've really liked the book, until I got to an example sentence in the 8th chapter:

(1) Every man placed a screen in front of him.

The authors claimed that this sentence was synonymous with

(2) Every man placed a screen in front of himself.

I thought this was absurd, because to me the first sentence must mean that there is some man (let's call him 'Jim'), and all the other men put a screen in front of Jim. It just can't have the meaning of (2). I have a great deal of respect for the authors, but my immediate reaction was that this must be one of those cases in which linguists unconsciously adapt their judgments to their theory (it was important for the theory Heim & Kratzer were developing that (1) mean the same as (2)).

Just to be sure, I walked into the office down the hall and took a poll of the seven people in it, none of whom study pronouns or are particularly familiar with the literature. Two of them agreed with me, but five agreed with Heim & Kratzer. So this may be a dialectal difference.

Now I feel bad about having doubted H&K, but in any case it is a good lesson about studying language: don't trust your own intuitions. Get a second opinion.

Ant Navigation SNL-style

If you appreciated Saturday Night Live's Mother Lover, then this ode to ant navigation should be right up your alley. It was produced by a student in Dave Barner's Developmental Psychology course at UCSD.

OK, the videos have nothing to do with each other, but both are worth watching.

Lean Times come to the World's Richest University

Academia is traditionally a good place to wait out recessions. Not so much this year. Harvard has posted a list of cost-cutting measures. Notice in particular that the number of PhD students being admitted has been reduced (no word about masters or professional school students...but then masters and professional school students pay tuition).

Copyright and Science

I imagine the academic publishing industry is either hurting from or worried about digital theft, just like all other publishers. But some of the pressure is coming from other quarters.

As I've discussed on this blog before, academic publishing is a strange industry. Researchers need to publicize their research. Publishers need research to publish. So researchers give their work for free to publishers on the understanding the publishers will publicize the work. The publishers print and distribute the work and retain all the money.

Fundamentally, publishers need researchers, since there is no other source of research. Researchers, on the other hand, don't need publishers; they need distribution. And with the advent of the Internet, it's no longer so clear that expensive printed journals are the best method.

I'm thinking about this as I listen to a talk by Kenneth Crews called "Protecting your scholarship: copyrights, publication agreements, and open access." He is currently suggesting that we negotiate our publication agreements with journals. For instance, he argued that academic authors should not transfer their copyrights to publishers, but rather license the copyright to them. This way, the authors retain ownership of the work, which would eliminate strange transactions in which authors have to get permission from the publisher to quote from their own work in a future book.

This would seem to suggest that we have some bargaining power. And, as open-access options become more prevalent, it seems that we should. Has anyone reading this actually negotiated a publication agreement?

The problem with studying pragmatics

(live-blogging Xprag)

In his introduction, Kai von Fintel tells an anecdote that I think sums up why it is sometimes difficult to explain what it is we do. Some time ago, Emmon Bach wrote a book for Cambridge University Press on if/then conditionals. The copy-editor sent it back, replacing every use of "if and only if" with a simple "if," saying the "and only if" was redundant.

As it turns out, although people often interpret "if" as meaning "if and only if," that's simply not what the word means, despite our intuitions (most people interpret "if you mow the lawn, I'll give you $5" as meaning "if and only if you mow the lawn...").

Part of the mystery, then, is explaining why our intuitions are off. In the meantime, though, explaining what I do sometimes comes across as trying to prove the sky is blue.
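The gap between the two connectives is easy to see in a truth table. Here is a minimal sketch in Python (my own illustration, not from the talk), comparing the material conditional with the biconditional:

```python
# Truth tables for "if" (material conditional, p -> q) versus
# "if and only if" (biconditional, p <-> q).
rows = []
for p in (True, False):
    for q in (True, False):
        if_then = (not p) or q   # "if p then q"
        iff = (p == q)           # "p if and only if q"
        rows.append((p, q, if_then, iff))

for p, q, if_then, iff in rows:
    print(f"p={p!s:5} q={q!s:5} if={if_then!s:5} iff={iff!s:5}")
```

The two connectives disagree in exactly one row: p false, q true. On a literal reading, "if you mow the lawn, I'll give you $5" remains true if I hand over the $5 even though you never mowed; only the "if and only if" reading rules that out.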

Summa smack-down

(live-blogging Xprag)

Dueling papers this morning on the word "summa" as compared with "some of." Judith Degen presented data suggesting "summa the cookies" is more likely to be strengthened to "some but not all" than is "some of the cookies." Huang followed with data demonstrating that there is no such difference. Everybody seems to agree this has something to do with how the two studies were designed, but not on which design is better.

I am more convinced by Huang's study, but as she is (1) a lab-mate, (2) a friend, and (3) sitting next to me as I write this, I'm probably not a neutral judge.

Speaker uncertainty

Arjen Zondervan just presented a fascinating paper with the acknowledged long title "Effects of contextual manipulation on hearers' assumptions about speaker expertise, exhaustivity & real-time processing of the scalar implicature of or." He presented thought-provoking experiments on exhaustivity & speaker expertise, but of primary interest to me was Experiment 3.

An important debate in the field has centered around whether scalar implicatures depend on context. A couple years ago, Richard Breheny & colleagues published a nice reading-time experiment whose results were consistent with scalar implicatures being computed in some contexts but not others. Roughly, they set up contexts along the following lines:

(1) Some of the consultants/ met the manager./ The rest/ did not manage/ to attend.
(2) The manager met/ some of the consultants./The rest/ did not manage/ to attend.

Participants read the sentences one segment at a time (the '/' marks the boundaries between segments), pressing a key when they were ready for the next segment. For reasons that may or may not be clear, it was thought that there would be an implicature in the first sentence but not in the second, making "the rest" fairly unnatural in the second sentence; and indeed, subjects read "the rest" more slowly in (2) than in (1).

This was a nice demonstration and was, I think, the first study of scalar implicature to use an implicit measure rather than simply asking participants what they think a sentence means, which has certain advantages. But there were a number of potential confounds in the stimuli in this and the two other experiments they ran. Zondervan fixed some of these confounds, re-ran the study, and got the same results.

I was interested because, in collaboration with Jesse Snedeker, I have also re-run the Breheny study and also got that basic result. However, Zondervan and Breheny both also found longer reading times for the scalar term (e.g., 'some') in the condition where there is a scalar implicature. Both take this as evidence that calculating an implicature is an effortful process. In a series of similar experiments using my own stimuli, I just don't get that part of the result. I am fairly convinced this is due to differences in our stimuli, but we're still trying to figure out why and what that might mean.

That said, the effect that all three of us get is, I think, the more important part of the data, and it's nice to see another replication.

Default computation in language

(Blogging Xprag)

This morning begins with a series of talks on scalar implicature. This refers to the fact that "John ate some of the cookies" is usually interpreted as meaning "some but not all of the cookies." Trying to get this post written during a 5-minute Q&A prevents me from proving that "some" does not simply mean "some but not all," but in fact it is very clear that "some" literally means "some and possibly all." The question, then, is why and how people interpret such sentences as meaning something other than what they literally mean.

The most interesting moment for me so far has been a question by Julie Sedivy during the first Q&A. A popular theory of scalar implicature argues that the computation of "some = some-but-not-all" is a default computation. A number of experiments showing that such computation is slow have been taken by some as evidence against a default model. Sedivy pointed out that saying a computation is done by default doesn't require that the computation be fast, so evidence about speed of computation can't be taken as evidence for or against a default-computation theory.

Liveblogging Experimental Pragmatics

This week I am in Lyon, France, for the 3rd Experimental Pragmatics meeting. I had plans to live-blog CUNY and SRCD, neither of which quite happened, but I'm giving it a go for Day 2 of Xprag, and we'll see how it goes.

Pragmatics, roughly defined, is the study of language use. In practice, this tends to mean anything that isn't semantics, syntax, or phonology, though the division between semantics and pragmatics tends to shift as we learn more about the system. Since pragmatics has mostly been studied within philosophy & linguistics, the name of the conference emphasizes that it focuses on experiments rather than just theory.

More to follow

Are Cyborgs Near?

Raymond Kurzweil, inventor and futurist, predicts that by the 2030s, it will be possible to upload your mind, experience virtual reality through brain implants, have experiences beamed into your mind, and communicate telepathically. Just to name a few predictions.

Kurzweil, as he himself recently noted on On The Media, has a track record of successful predictions over the past three decades. Past performance being the best predictor of future performance, this leads people to at least pay attention to his arguments. Nonetheless, as the mutual funds folk say, past performance is a predictor, not a guarantee.

Exponential Progress

I suspect that Kurzweil is right about many things, but I'm not sure about the telepathy. When I have heard him speak, his primary argument is that telepathy only seems like a distant achievement because we think technology moves at a linear rate, when in fact knowledge and capability increase exponentially. This has clearly been the case in terms of computing speed.

Fair enough. The problem is that we aren't sure exactly how hard the problems we are facing are. There is a famous anecdote about an early pioneer in Artificial Intelligence assigning "vision" as a summer project. This was many decades ago, and as anyone in the field knows, machine vision is improving rapidly but still not that great.

A more contemporary example: a colleague I work with closely built a computational model of a relatively simple process in human language and tried to simulate some data. However, it took too long to run. When he looked at it more carefully, he realized that his program required more cycles to complete than there are atoms in the known universe. That is, merely waiting for faster computers was not going to help; he needed to re-think his program.
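My colleague's actual model isn't described here, but the general phenomenon is easy to illustrate. Here is a sketch (the 2**n cost function and the 10**80 atom count are my illustrative assumptions, not his numbers):

```python
# An algorithm costing 2**n steps outruns any conceivable hardware fast.
# There are roughly 10**80 atoms in the observable universe; find the
# smallest input size n at which 2**n steps exceeds that count.
ATOMS = 10**80

n = 0
while 2**n <= ATOMS:
    n += 1

print(n)  # 2**266 is the first power of two above 10**80
```

Note what this means for Kurzweil-style optimism: doubling your computer's speed only buys you one more unit of input size on an exponential problem. You need a better algorithm, not a faster machine.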

The Distance

In short, even if we grant Kurzweil that computers improve exponentially, somebody still needs to program them. Our ability to program may also be improving exponentially, but I'm unconvinced that we know how far we have to go.

Suppose I wanted to walk to some destination 1,000 miles away. I walk 1 mile the first year. If I keep going at the same rate, it'll take 1,000 years. But if my speed doubles each year, it will take only about 10 years (2^10 - 1 = 1,023 miles). Which is a lot faster!

But we don't know -- or at least I don't know -- how far we have to walk. We may well be walking to the other side of the universe (>900,000,000,000,000,000,000,000 miles). In which case, even if my speed doubles every year, it'll still take about 80 years. Which granted is pretty quick, but nothing like the 10.

Of course, notice that in that final year I'd be traveling at such a velocity (roughly 100 billion times the speed of light) that I could cross a large fraction of the known universe in the one year, which so far as we know is impossible. The growth of our technology may similarly hit hard limits eventually.
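The walking arithmetic can be checked directly. A quick sketch (the 1-mile-per-year starting speed and the two distances are the post's own numbers):

```python
# Years needed to cover a distance when speed starts at 1 mile/year
# and doubles annually; after n years the total walked is 2**n - 1 miles.
def years_to_cover(distance_miles):
    years, total, speed = 0, 0, 1
    while total < distance_miles:
        total += speed
        speed *= 2
        years += 1
    return years

print(years_to_cover(1_000))       # 10 years at doubling speed
print(years_to_cover(9 * 10**23))  # 80 years to cross the universe
```

The striking part is how little the extra 21 orders of magnitude of distance costs: under exponential growth, almost all the ground is covered in the last few years.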

That said...

I wouldn't terribly mind being proved wrong. Telepathy sounds neat.

More things you don't have time to read

PLoS One has published over 5,000 papers. Is that a sign of success or failure?

I've worried before on this blog about the exploding number of science publications. Publications represent completed research, which is progress, and that is good. But the purpose of writing a paper is not for it to appear in print; the purpose is for people to read it. The more papers are published, it stands to reason, the fewer people read each one. Thus, there is some argument for publishing fewer, higher-quality papers. I have heard that the average publication gets fewer than one citation, meaning many papers are never cited and thus presumably were not found to be relevant to anybody's research program.

It is in this context that I read the following excited announcement from PLoS ONE, a relatively new open-access journal:
nearly 30,000 of your peers have published over 5,000 papers with us since our launch just over two years ago.
That's a lot of papers. Granted, I admit to being part of the problem. Though I do now have a citation.

Origin of Language Pinpointed

Scientists have long debated the evolution of language. Did it emerge along with the appearance of modern Homo sapiens, 130,000-200,000 years ago? Or did it happen as late as 50,000 years ago, explaining the cultural ferment at that time? What are we to make of the fact that Neanderthals may have had the ability to produce sounds similar to those of modern humans?

In a stunning announcement this morning, freelance writer Joshuah Bearman announced that he had pinpointed the exact location, if not the date, of the origin of modern language: Lake Turkana in Kenya.


Actually, what Bearman says is
This is where Homo sapiens emerged. It is the same sunset our ancestors saw, walking through this very valley. To the north is Lake Turkana, where the first words were spoken. To the south is Laetoli, where, in 1978, Mary Leakey's team was tossing around elephant turds when they stumbled across two sets of momentous footprints: bipedal, tandem, two early hominids together...
Since this is in an article about a wedding, I suspect that Bearman was not actually intending to floor the scientific establishment with an announcement; he assumed this was common knowledge. I can't begin to imagine where he got this idea, though. I wondered if perhaps this was some sort of urban legend (like all the Eskimo words for snow), but Googling has turned up nothing, though of course some important archaeological finds come from that region.


Probably he heard it from a tour guide (or thought he had heard something like that from a tour guide). Then neither he nor his editor bothered to think through the logic: how would we know where the first words were spoken, given that there can be no archaeological record? It's unlikely we'll ever even find the first human(s), given the low likelihood of fossilization.

I have some sympathy. Bearman was simply trying to provide a setting for his story. In one of my first published travel articles, I similarly mentioned in passing that Lake Baikal (the topic of my story) was one of the last strongholds of the Whites in the Russian Revolution. I have no idea where I got that idea, since it was completely untrue. (Though, in comparison with the Lake Turkana hypothesis, at least my unfounded claim was possible.)

So I'm sympathetic. I also had to write a correction for a subsequent issue. Bearman?