
Do monkeys have grammar?


The short answer is "no." But a new study in PLOS One suggests that some monkey calls may be morphologically complex. Here is the relevant passage:
Some calls were given to a broad, others to a narrow range of events. Crucially, “krak” calls were exclusively given after detecting a leopard, suggesting that it functioned as a leopard alarm call, whereas the “krak-oo” was given to almost any disturbance, suggesting it functioned as a general alert call. Similarly, “hok” calls were almost exclusively associated with the presence of a crowned eagle (either a real eagle attack or in response to another monkey's eagle alarm calls), while “hok-oo” calls were given to a range of disturbances within the canopy, including the presence of an eagle or a neighbouring group (whose presence could sometimes be inferred by the vocal behaviour of the females).
The authors take this as evidence that "oo" is a suffix of some sort that modifies the meaning of the preceding part of the call.

Maybe. Whether two words that contain similar sounds share a morpheme or not is an old problem in linguistics, and one that is actually hard to solve. I cut my teeth on such questions as whether the /t/ in swept is the same past-tense affix that we see in dropped. Notice that both words end in the sound "t" -- but, then, so does "hat," and probably nobody thinks the /t/ in "hat" is a suffix.

One crucial test would be to show that this "oo" suffix can be used productively. If this were a study of humans, you might teach participants a new word "dax," referring to a chipmunk, and then see whether "dax-oo" was interpreted as "warning, there's a chipmunk!"

All of which is not to say that this isn't an intriguing finding, but we're a ways from talking monkeys yet.

What does a professor do all day?

Readers of this blog will remember Dick Morris's strange claim that professors don't do anything except teach -- it's not even clear he thinks they have to prepare for class or grade papers. This prompted a considerable backlash on the Web, in which many pointed out that teaching is, for many professors, only one pursuit (and often not the main one).

Around the same time, but apparently independently, a professor of psycholinguistics, Gerry Altmann, listed how he had been spending his time. In the space of 2.5 weeks, he sent out 18 manuscripts for review (he's a journal editor), wrote 51 action letters (telling authors what decisions had been made), reviewed 7 NIH grants (interesting, since he works in the UK), and visited collaborators in Philly to discuss a new project (presumably part -- but not all -- of the 3,677 miles he reports having flown).

Using Google Wave

I admit I'm pretty excited about Google Wave. I am currently involved in a fairly large collaboration. It's large in

  • the scale of phenomena we're trying to understand (essentially, argument realization)
  • the number of experiments (literally, dozens)
  • the number of people involved (two faculty, three grad students, and a rotating cast of research assistants, all spread across three universities)
One problem is keeping everybody up-to-date and on the same page, but an even more serious problem is that it's difficult to keep track of everything we've discovered. In the last few weeks, we've moved much of our discussion into Wave, and I think I already have a better sense of some of the issues we've been dealing with.

Collaborative Editing?

If you are interested in Wave, the best thing is to simply check out their website or one of the many other websites describing how to use it. The main idea behind it is to enable collaborative document editing -- that is, a document that can be edited by a group of people simultaneously.

Anyone who has worked on a group project is familiar with the following problem: only one person can work on a document at a given time. For instance, if I send a paper to a co-author for editing, I can't work on the paper in the meantime without risking a real headache when I try to merge the separate edits later.

Google Docs and similar services have allowed collaborative editing for a while, but they weren't really designed for real-time collaboration. For instance, it's difficult to see who made which changes. There are no comments (sometimes you don't want to change the text, you just want to say you don't understand it). And if one person makes a change that you want to undo, good luck. Google Wave has these and other features.

Using the Wave

Currently, we're using Wave as a collective notebook, where we record everything we've learned in the course of our research. This keeps everyone up-to-date. It also allows us to discuss issues without requiring meetings (a good thing, since we're at different universities).

For instance, recently I read a claim that a certain grammatical structure that is impossible in English happens to be possible in Japanese. I noted this in a section of our document, and attached a comment: "Yasu, Miki: can you check this?" As it happens, two members of our project are native Japanese speakers. In a series of nested comments, they discussed the issue, came to a conclusion (that the paper I had read was wrong), and then we finally deleted the comments and replaced the whole section with a summary of the discussion and conclusions.

In other sections, we've included the methods for experiments we're designing, commenting on and editing them until everyone agrees.

Needed Improvements

At the moment, Wave is very much in beta testing and is underpowered. Although you can embed files and websites, there's no way to embed, say, a spreadsheet -- a major inconvenience for us, since much of our work involves making lists of verbs and their properties. Whenever I want the most updated list, I need to email whoever was working on it last, which isn't ideal.

Of course, we could use Google Docs, but it has the problems listed above (no commenting, no track-changes, no archive in case we decide to undo a change). Presumably, these kinds of features will be added in the future.

What isn't said

"Last summer, I went to a conference in Spain."

Technically, all you learned from that sentence is that there was a conference in Spain and that I traveled to it from some other location that isn't Spain. That's what the sentence literally means.

If you know that I live in Boston, you probably assumed that I flew to Spain, rather than take a boat. You're probably confident that I didn't use a transporter or walk on water. You probably also assumed that the conference is now over. All these things are true, but they weren't actually in what I said.

The Communication Game

This presents a problem for understanding communication: a lot is communicated that is not said. A lot of the work I do is focused on trying to figure out not just what a sentence means, but what is communicated by it ... and that is the focus of the newest experiment on GamesWithWords.org.

In The Communication Game, you'll read one person's description of a situation (e.g., "Josh went to a conference in Spain"). Then, you'll be asked to decide whether, based on that description, you think another statement is true. Some will be obviously true ("Josh went to a conference"), some probably true ("Josh went to the conference in Spain by plane"), some clearly false ("Josh went to the conference in Spain by helicopter"), and some hard to tell ("Josh enjoyed the conference in Spain more than the conference in Boston").

Scientifically, what we're interested in is which questions are easier to get right than others. From that, we'll get a sense of what people's expectations are. Part of what makes this a game is that the program keeps score, and you'll find out at the end how well you did.

First-borns don't trust you

New research in Animal Behaviour suggests that first-borns are "less trustful and reciprocate less."

Participants in the experiment played a basic economics game: Let's say Mary and Sally are playing. Both are given $30 to start with. Mary is told she can "invest" as much as she wants with Sally by giving that money to Sally. However much she gives to Sally is automatically tripled, and then Sally can decide how much to give back to Mary as a dividend. So let's say Mary gives Sally $10. That amount is then tripled, so Sally ends up with $30 + $10*3 = $60, while poor Mary only has $20 left. This doesn't seem very fair, so Sally is allowed to give any amount of money back to Mary.

Mary, then, has a difficult choice. She earns the most money if she gives all $30 to Sally, and then Sally gives the entire investment ($90) back. However, she also risks Sally absconding with the money, leaving Sally with $120 and Mary with nothing. So Mary's safest choice is to give Sally nothing.
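To make the payoff arithmetic concrete, here is a minimal sketch of the game (the function name and dollar framing are mine; the study itself used abstract monetary units):

```python
def trust_game(invested, returned, endowment=30, multiplier=3):
    """Final payoffs for the investor (Mary) and the trustee (Sally)."""
    investor = endowment - invested + returned
    trustee = endowment + multiplier * invested - returned
    return investor, trustee

print(trust_game(10, 0))   # (20, 60): Mary invests $10, Sally returns nothing
print(trust_game(30, 90))  # (90, 30): Mary invests everything, Sally returns the full $90
print(trust_game(30, 0))   # (0, 120): Mary invests everything, Sally absconds
```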

So what happens?

The results of the study were that if Mary was a first-born, she gave $3.70 less to Sally than if she was a latter-born (actually, the game used "monetary units," not dollars, so it's not clear how this translates). So first-borns trust less. Interestingly, if Sally was a first-born, she also returned less money to Mary, meaning first-borns reciprocate less as well.

It's not entirely clear that these are two separate effects. If Mary was predicting Sally's behavior by thinking about what she herself would do in Sally's shoes, you would expect first-borns to give less money (since they themselves would return less of it) and for latter-borns to give more (since they themselves would return more).

One nice aspect of this study is that they controlled for the fact that latter-borns tend to come from larger families (each family has only one first-born but potentially many latter-borns). This is unfortunately not something that birth-order researchers typically do, which is problematic since people from smaller and larger families differ in many ways (including parental income, education, etc.).

Why are first-borns so mean?

It's interesting that people differ based on birth-order, but I think what most people really want to know is why. Why were first-borns less trusting and why did they reciprocate less?

The authors didn't really address this question, but their data do suggest one possibility. They report that if Mary gave Sally $10, Sally gave, on average, $18 back, leaving Mary with $38 and Sally with $42.

Now, let's let Jenny and Susan play. Jenny decides to give Susan $30 -- all her money! On average, Susan would give $43 back. This is more money than Sally gave back, but keep in mind that Jenny also gave up more money to start with, so in the end Jenny walks home with $43 -- which is hardly better than the $38 Mary got. On the other hand, Susan cleans up, netting $77.

It's true that Jenny's generosity did get her an extra $5...but it also got Susan an extra $35, which she chose not to share. Since this unfairness tends to rankle people, Jenny may in fact be more unhappy at the end of the game than Mary is, despite the extra $5 (obviously, Susan would feel pretty good about the outcome).
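Plugging the reported averages into the same arithmetic shows where those numbers come from (again, a sketch; the amounts are really monetary units, not dollars):

```python
endowment, multiplier = 30, 3

# Mary invests $10; Sally returns $18 on average.
mary = endowment - 10 + 18                # 38
sally = endowment + multiplier * 10 - 18  # 42

# Jenny invests all $30; Susan returns $43 on average.
jenny = endowment - 30 + 43               # 43
susan = endowment + multiplier * 30 - 43  # 77

print(jenny - mary)   # 5: Jenny's generosity earns her only $5 more than Mary
print(susan - sally)  # 35: ...while Susan ends up $35 ahead of Sally
```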

Getting back to birth-order, first-borns were less trusting -- that is, they invested less money. As we see from the analysis above, they were making the right decision; investing wasn't really going to pay off for them. They also returned less money from the investment, which may show that they are "bad reciprocators," or it might just show that after years of dealing with their younger siblings, they've come to the hard truth: people -- even younger siblings -- are greedy jerks. They returned about as much money as they would expect to get were the positions reversed.

Would the world be better off without first-borns?

For anyone wondering whether the game would turn out better if only latter-borns played: even though latter-borns do return more money from the investment, they still don't return much. Invest $30 with a latter-born, and in the end you'll have $47 and s/he will have $73 -- better than if you invest with a first-born ($36.70 and $83.30), but still unfair.


Heroic participants

I just glanced through the data for the new experiments. Things are looking good. Two heroic participants rated over 300 puns each. One of them stuck with it and only tuckered out after 1199 puns. The enthusiasm is much appreciated!

Mini monitors

A few years ago, I read a study claiming that the larger the screen on your computer, the more productive you are. I believe it. I frequently edit one paper while having several others open, and it's convenient to have them all open simultaneously. I write my experiments in Flash, which is almost impossible to use without two screens, due to all the various floating toolbars.

This is fine if I'm in the Lab, where I have an iMac with an attached 20-inch screen, plus another 24-inch screen I attach to my MacBook Pro. Unfortunately, I'm not always in my office. I'm sometimes at Tufts, where I'm collaborating on some ERP work, and sometimes I work from home. But I don't have a bunch of extra monitors at home, and even if I did, I wouldn't have anywhere to put them.

Enter Mimo USB monitors. These 7-inch monitors weigh less than a pound and pack flat, so they're easy to get out of the way when you aren't using them. Seven inches is just about the right size to fit some -- though not all -- of my extraneous toolbars.

Which isn't to say it's a perfect solution. The screen is too small to host a second document window. To fit more, everything has to be annoyingly small -- and I like small fonts. The brightness and contrast can't be meaningfully adjusted. Those drawbacks make it annoying to use Flash or Dreamweaver ... annoying, but possible. Prior to getting my Mimo, I pretty much refused to work in Flash or Dreamweaver if I couldn't do it in my office.

Of course, what I really want is a virtual monitor that could be projected onto any space (such as my glasses). In the meantime, Mimo works pretty well.

How do I feel about open-access journals? The president wants to know.

The White House is requesting comments as it formulates a policy on open-access publication, at least according to a recent email from AAAS:
The Obama Administration is seeking public input on policies concerning access to publicly-funded research results, such as those that appear in academic and scholarly journal articles. Currently, the National Institutes of Health require that research funded by its grants be made available to the public online at no charge within 12 months of publication. The Administration is seeking views as to whether this policy should be extended to other science agencies and, if so, how it should be implemented.
The comments are being collected in phases. Right now (Dec. 10 - 20) they are asking
Which Federal agencies are good candidates to adopt Public Access policies? What variables (field of science, proportion of research funded by public or private entities, etc.) should affect how public access is implemented at various agencies, including the maximum length of time between publication and public release?
Next up (Dec. 21 - 31) is
In what format should the data be submitted in order to make it easy to search and retrieve information, and to make it easy for others to link to it? Are there existing digital standards for archiving and interoperability to maximize public benefit? How are these anticipated to change?
Finally (Jan. 1 - 7) they are interested in
What are the best mechanisms to ensure compliance? What would be the best metrics of success? What are the best examples of usability in the private sector (both domestic and international)?
I'm glad they are thinking seriously about these things.

Peer Review: An Editor Responds

As previously noted, a loosely-translated Hitlerian tantrum about unreasonable reviewers has been making the rounds. Among recent viewers is Gerry Altmann, the editor of Cognition. He is clearly waiting for someone to make a similar video about unreasonable authors. He makes a good case, and it's worth reading (scroll down to the post titled "Peer Review" on Nov. 27th), but as there are more authors than editors, I suspect it wouldn't get as many views on YouTube.

Keeping your brain in a computer

A researcher at UC-San Diego is methodically slicing and preparing one of the world's most famous brains (where "world" is narrowly defined as the world of cognitive science -- I suspect HM isn't a household name, though he was a huge anonymous celebrity among psychologists and neuroscientists for the last half-century). The hope is to make the data electronically available.

An interesting fact tucked in at the end of the article: making it possible to zoom down to the cellular level on each electronic copy of a slide will require 1-10 terabytes per slide. That's terabytes with a t. There are 2,400 slices of the brain (I'm not clear whether all will become slides, but presumably a good fraction will). And the researcher wants to eventually expand the project to 500 brains.

This creates a serious storage issue.
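A rough back-of-the-envelope calculation (assuming, as the article suggests, that most of the 2,400 slices end up as digitized slides at 1-10 terabytes each):

```python
tb_per_slide = (1, 10)     # range reported in the article
slices_per_brain = 2400
planned_brains = 500

per_brain = [tb * slices_per_brain for tb in tb_per_slide]
print(per_brain)                                   # [2400, 24000] TB per brain
print([tb * planned_brains for tb in per_brain])   # [1200000, 12000000] TB for 500 brains
```

So a single brain lands in the thousands of terabytes, and the full 500-brain project in the millions.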

It also brings up the question of building synthetic brains. If we need something on the order of 5,000 terabytes just to render a digital image of a brain, how much storage would we need to perfectly model one?

Granted, there aren't 5,000 trillion neurons in a brain (there are about 100 billion). But one neuron doesn't equal one bit -- a neuron's behavior is complex, and its pieces do things. As we don't fully understand what neurons do, I take it as uncontroversial that we don't know how many 'bits' make up a neuron.

This is one of the complications with predicting the future of neuroscience. Our knowledge and technology are growing exponentially, but we don't know how far there is to go.

The science of puns

Puns are jokes involving ambiguous words. Some are hilarious. Some are only groan-worthy. (If anyone wants to argue the first one isn't a pun, comment away.)

Wait -- aren't all words ambiguous?
The thing about puns that pops out at me is that pretty much every word is ambiguous. My parade case for an unambiguous word -- really a phrase -- is George Washington. But, of course, more than one person has been named "George Washington," so even that one is ambiguous. In any case, the vast majority of words are ambiguous -- including the words word and pun.

Most of the time we don't even notice that a word is ambiguous, because our brains quickly select the contextually appropriate meaning. For a pun, this selection has to fail in some way, since we remain aware of multiple appropriate meanings.

Studying puns
As someone who studies how context is used to understand language, I find puns and other ambiguities interesting. I'm also currently starting a new EEG experiment looking at the processing of ambiguous words in folks on the autism spectrum. But one question that I'm interested in is why some puns are funnier than others.

As a first step in that work, I need a bunch of puns ranked for how funny they are. There are websites out there that have these types of rankings, but they don't use the sorts of controls that any peer-reviewed journal would demand. So I am running a rank-that-pun "experiment" at GamesWithWords.org called Puntastic! It goes without saying that once I've got enough puns rated, I'll post what I learn here (including the funniest and the worst puns). This may take a few months, though, as I'm getting ratings for over 500 puns.


Science in the classroom

Science Magazine recently profiled a new website that links up scientists and classroom teachers in order to improve science education. It looks like a great project, and definitely something needed.

Pronoun Sleuth

George Washington always refers to George Washington. The pronoun he, on the other hand, can refer to George Washington, Thomas Jefferson, John Adams, or anyone else who is male (not just presidents). So, unlike proper names which have the same meaning regardless of context, pronouns have almost no meaning without context.

Just saying that we figure out who he and she refer to based on context raises the question of how we do it. What aspects of context matter? The fact that today is Tuesday? Whether it is sunny or rainy? (This isn't a straw man -- both of these things can matter in the right circumstances; I leave it as an exercise for the reader to come up with examples.)

In one of our newest experiments, we took sentences with pronouns and systematically obscured aspects of the context to see if people could still figure out who the pronoun refers to. If they can, then that aspect of the context didn't matter. Play Pronoun Sleuth by clicking here.

Making science relevant

One thing about being a scientist interested in how people think -- and in how different groups of people think differently -- is that you constantly notice differences in how scientists and non-scientists think.

For instance, scientists like evidence. You think how you parent your children affects how they turn out? Maybe. It's a testable question. (For my position on the question and on the evidence, read here.) One mark of a great researcher is the ability to spot untested assumptions (one of my favorite examples being Marc Hauser's work on language evolution). Of course, some scientists are less constrained by evidence than others, and my avowedly non-scientist wife is as empirically minded as they come. But the generalization seems to hold (though I admit I don't know of any well-controlled studies).

Where am I going with this?

Does making science relevant help science education?

In the last issue of Science Magazine, Hulleman and Harackiewicz point out that

Many educators and funding agencies share the belief that making education relevant to students’ lives will increase engagement and learning. However, little empirical evidence supports the specific role of relevance in promoting optimal educational outcomes, and most evidence that does exist is anecdotal or correlational.
I smell an experiment. It was surprisingly simple: students in a high school system in the Midwest were randomly assigned to write essays (as many as 8 during the semester, with an average of 4.7 essays/student) that either just summarized what they were learning in science or tried to apply what they learned to their own lives. The students who wrote essays relating science to their lives got better science grades and reported more interest in science at the end.

Motivation or depth of processing?

The authors discuss this in terms of motivation: students who see the relevance of science are more interested in it. (They also seem to imply that it improves confidence.) I'm interested in understanding the mechanism better (a professor in my department complains that Science articles are necessarily too short to give the necessary experimental detail and theoretical motivation): were these students more interested because of the personal relevance per se, or was it simply that thinking about relevance required engaging with and reinterpreting the material? After all, science isn't just a list of facts (or shouldn't be, anyway). Facts are boring; interpretations are what make science science.

But that is, as they say, academic. As the authors point out, this was a relatively simple method that improved grades and interest in science. Assuming, of course, that it replicates, this is a valuable contribution.

And it suggests that making science relevant improves educational outcomes. If that seemed obvious from the get-go, it's worth remembering how many obvious truths have turned out to be wrong. Occasionally proving the obvious is an occupational hazard, but it's still worth the effort.

Games with Words: New Web lab launched

The new Lab is launched (finally). I was a long ways from the first to start running experiments on the Web. Nonetheless, when I got started in late 2006, the Web had mostly been used for surveys, and there were only a few examples of really successful Web laboratories (like the Moral Sense Test, FaceResearch and Project Implicit). There were many examples of failed attempts. So I wasn't really sure what a Web laboratory should look like, how it could best be utilized, or what would make it attractive and useful for participants.

I put together a website known as Visual Cognition Online for the lab I was working at. I was intrigued by the possibility of running one-trial experiments. Testing people involves a lot of noise, so we usually try to get many measurements (sometimes hundreds) from each participant, in order to get a good estimate of what we're trying to measure. Sometimes this isn't practical.

The best analogy that comes to mind is football. A lot of luck and random variation goes into each game, so ideally, we'd like each team to play every other team several times (as happens in baseball). However, the physics of football makes this impractical (it'd kill the players).

Running a study on the Web makes it possible to test more participants, which means we don't need as many trials from each. A few studies worked well enough, and I got other good data along the way (like this project), so when the lab moved to MN and I moved to graduate school, I started the Cognition and Language Lab on the same model.
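A toy simulation illustrates the trade-off (the noise model here is entirely my own assumption -- independent participant-level and trial-level variation -- so treat it as a sketch, not a description of any particular study):

```python
import numpy as np

rng = np.random.default_rng(0)

def average_error(n_participants, n_trials, true_effect=1.0,
                  subject_sd=1.0, trial_sd=2.0, n_sims=2000):
    """Average absolute error of the estimated effect across simulated studies."""
    errors = []
    for _ in range(n_sims):
        # Each participant has their own true value, plus noise on every trial.
        subject_means = true_effect + rng.normal(0, subject_sd, n_participants)
        trials = subject_means[:, None] + rng.normal(
            0, trial_sd, (n_participants, n_trials))
        errors.append(abs(trials.mean() - true_effect))
    return float(np.mean(errors))

# Lab-style: a few participants, many trials each.
print(average_error(n_participants=20, n_trials=100))
# Web-style: many participants, a single trial each.
print(average_error(n_participants=2000, n_trials=1))
```

With participant-to-participant variation in the mix, the web-style design ends up with the smaller error, which is the intuition behind one-trial experiments.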

Web Research blooms

In the last two years, Web research has really taken off, and we've all gotten a better sense of what it's useful for. The projects that make me most excited are those run by the likes of TestMyBrain.org, Games with a Purpose, and Phrase Detectives. These sites harness the massive size of the Internet to do work that wasn't just impossible before -- it was frankly inconceivable.

As I understand it, the folks behind Games with a Purpose are mainly interested in machine learning. They train computer programs to do things, like tag photographs according to content. To train their computer programs, they need a whole bunch of photographs already tagged for content; you can't test a computer -- or a person -- if you don't know what the correct answer is. So their games are built around tasks like tagging photographs. Phrase Detectives does something similar, but with language.

The most exciting results from TestMyBrain.org (full disclosure: the owner is a friend of mine, a classmate at Harvard, and also a collaborator) have focused on the development and aging of various skills. Normally, when we look at development, we test a few different age groups. An extraordinarily ambitious project would test some 5-year-olds, some 20-year-olds, some 50-year-olds, and some 80-year-olds. By testing on the Web, they have been able to look at development and aging from the early teenage years through retirement age (I'll blog about some of my own similar work in the near future).

Enter: GamesWithWords.org

This Fall, I started renovating coglanglab.org in order to incorporate some of the things I liked about those other sites. The project quickly grew, and in the end I decided that the old name (Cognition and Language Lab) just didn't fit anymore. GamesWithWords.org was born.

I've incorporated many aspects of the other sites that I like. One is simply to make the site more engaging (reflected, I hope, in the new name). It's always been my goal to make the Lab interesting and fun for participants (the primary goal of this blog is to explain the research and disseminate results), and I've tried to adopt the best ideas I've seen elsewhere.

Ultimately, of course, the purpose of any experiment is not just to produce data, but to produce good data that tests hypotheses and furthers theory. This sometimes limits what I can do with experiments (for instance, while I'd love to give individualized feedback to each participant for every experiment, sometimes the design just doesn't lend itself to feedback). Of the two experiments that are currently live, one offers feedback and one doesn't.

I'll be writing more about the new experiments over the upcoming days.