Field of Science

Findings: How memory changes with age


It's well-known that both long-term and short-term memory decline with age. However, most of these data come from verbal memory tests. The evidence for visual short-term memory is less clear. A few studies (Adamowicz, 1976; 1978; Fahle & Daum, 1997) show age-related declines, but more recent studies do not (Faubert & Bellefeuille, 2002; McIntosh et al., 1999; Sara & Faubert, 2000; Thompson, Cengelci & Ozekes, 1999).

In one of our longest-running experiments -- now archived -- we looked at visual short-term memory across a wide age range: from 14 to 90 years old. This approach has some advantages over the typical method, which is to test one group of young people (usually college students) and one group of older people (often recruited at a club or function frequented by folks of a certain age). With our data, we can find out not only whether visual working memory declines with age, but when it begins to decline and how rapidly.

The Memory Test


The experiment is simple. Participants are shown four novel objects for one second. Then they have to remember those objects for one second. After that, they are shown one object and asked if that is one of the four they just saw. A diagram of this method is on the right.

By the time the experiment stopped running, 8,374 people had completed the task (plus some who did not make it into this analysis -- for instance, people doing the experiment for the second time). This allowed us to look closely at the results year-by-year for the entire range of ages.

Results

Not surprisingly, performance did decline with age. Our statistical analyses suggest that the decline begins in the 30s (the best estimate was 36.6 years of age, with a 95% confidence interval from 32.2 to 41.1 years). Starting at 37 years old, performance on this task dropped by one percentage point every 2 1/2 years.
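For the statistically curious, here's a minimal sketch of how a change-point like this can be estimated. It uses simulated data, not our actual dataset, and the specific numbers (baseline accuracy, noise level) are made up for illustration: fit a flat-then-declining "broken-stick" model and grid-search for the breakpoint that minimizes squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated (hypothetical) data: flat performance until ~37,
# then a drop of one percentage point per 2.5 years (0.4/year).
ages = rng.integers(14, 91, size=2000)
true_break = 37.0
perf = 85 - 0.4 * np.clip(ages - true_break, 0, None) + rng.normal(0, 5, ages.size)

def sse_for_break(b):
    """Fit a flat-then-linear model with breakpoint b; return the sum of squared errors."""
    x = np.clip(ages - b, 0, None)                 # 0 before the break, (age - b) after
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    coef, *_ = np.linalg.lstsq(X, perf, rcond=None)
    resid = perf - X @ coef
    return np.sum(resid ** 2)

candidates = np.arange(20, 60, 0.1)
best = candidates[np.argmin([sse_for_break(b) for b in candidates])]
print(f"Estimated onset of decline: {best:.1f} years")
```

A confidence interval like the one above can then be obtained by bootstrapping: resample the participants, re-estimate the breakpoint each time, and take the middle 95% of the estimates.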



Why didn't those other recent studies show an age-related decline in visual short-term memory? One possibility is that the tasks those researchers used were simply too easy for differences to show up (the well-known ceiling effect), whereas our task and the ones used by the older experiments are all much harder.

Future Work

The exciting and novel part of this experiment is not that we showed age-related changes in memory performance. It would have been surprising if we didn't find any. What is more exciting is our ability to estimate the age at which it starts. My collaborators and I are currently running similar studies looking at aging for other types of mental tasks to see whether changes appear at similar ages. If changes appear at approximately the same age for two different tasks, that may suggest a common origin to the changes. We've run several more studies which I'll talk about in the future, and we're nearly ready to write up this work for publication.

Many thanks to the nearly 9,000 people who participated. Of course, we're always running new experiments at GamesWithWords.org. Please stop by.

See Also

This is not the only way the data from The Memory Test have been used. You can read a compilation of previous posts on memory here.


Picture credit: http://www.flickr.com/photos/deepblue66/ / CC BY-NC-SA 2.0

Lab Notebook: Shopping Period

Harvard and Yale, in their infinite wisdom and love of tradition, do not register students for classes until the second week of the semester. The first week, called "Shopping Period," in theory gives students a chance to try out different classes. Undergraduates seem to like it.

Shopping Period has several consequences. One is that no schedules can be set in stone until the second -- or often third -- week of the semester. Since nobody knows how many students are going to take a class, much less which students, graduate student-led course sections can't be scheduled until the second week. That makes it difficult for undergraduate research assistants to set their lab schedules until the second or third week, and the same goes for any regular meetings that graduate students and/or undergraduates attend. Nor can one reserve rooms for those meetings until all the courses and sections have rooms assigned. I have a mandatory meeting for my undergraduate research assistants, and for the reasons above, whether a given undergraduate can attend it often isn't settled until the third week of the semester.

Again, the students seem to like Shopping Period, so it clearly has its benefits, but it makes the beginning of the semester very busy -- it's a little like packing your bags while you're already on the way to the airport. I am also giving a talk next Thursday and submitting a paper to the Cognitive Science Society annual meeting next Saturday.

Photo credit: http://www.flickr.com/photos/chavals/ / CC BY-NC-ND 2.0

Vote!



I know it's snowy out there, but all of you in Massachusetts: go out and vote!


Making research open-access

The US Office of Science and Technology Policy is continuing to receive comments on the future of its open-access policy. The comments have been overwhelmingly in favor of making research papers available for free electronically, with many suggesting they be available immediately on publication (or even upon acceptance, which is as much as a year or two before publication). The comment that was most on-point, from my perspective, came from one person who wondered what purpose publishers even serve beyond copy-editing, given that all the work is done by volunteers (and, come to think of it, I helped with the copy-editing of my last paper myself).


Some comments go even further, suggesting that all data should be made public immediately. I'm not sure about that idea. Preparing data so that they are easy for others to understand is by no means easy. When I go back to look at data I collected a few years ago, it often takes me hours to interpret them, even though I remember what the study was about. In fact, a basic purpose of a paper is to take data and make them easy to understand. Finally, while there are rare cases where I wish I had access to somebody's original data, for most studies I can't even imagine what value having the original data would have.


Certainly, there are cases in which having the raw data would be valuable. But is the value worth the cost of preparing all data from all studies for the public? Maybe in fields in which the data are easier to publish it is. In psychology, I'm not sure.

Birth order and personality

For those who haven't seen it yet, I have a new article in the latest issue of Scientific American Mind. You can of course buy a copy of the magazine off of a news stand, but the article has been posted in full online as well.

More monkey language

The New York Times is running an article trying to pull together recent work on primate communication, including the paper I blogged about last week.

Want to know as soon as results from an experiment are ready?

When I have results from an experiment that I can share, I post them on the blog ... which is great if you constantly check the blog. I also send out an email. If you would like to receive this email, you can join the mailing list here. I also use this mailing list to announce new experiments. On the whole, this means 5-10 emails per year.

I use Google Groups, which is a little bit of a pain if you don't have a Google account -- you'll need one (they're free). On the other hand, Google Groups makes it really easy for you to unsubscribe if, for some reason, you eventually decide you don't want to get emails anymore.

Can we make our body do what we want?

I read Fodor's Language of Thought over the summer. Towards the end of Chapter 1, he says psychological rules must have exceptions, since "even when the spirit is willing the flesh is often weak. There are always going to be behavioral lapses which are physiologically explicable but which are uninteresting from the point of view of psychological theory."

Like many cognitive scientists, I'm a big fan of Fodor. I love his theory of concepts put forth in Language of Thought. I even like his work when I disagree with it. This is a place I disagree: behavioral lapses are fascinating from the point of view of psychological theory.

There are two ways of interpreting this comment. One assumes dualism: there is an immaterial mind that tries to make the physical brain and body do what it wants. This theory is almost certainly wrong, but on this theory, cases in which the brain fails to do what the mind wants are interesting. Why would that happen? Are these errors random stochastic noise, or are there patterns in the failures?

If we assume that all behavior arises from the activity of the brain (probably the right theory), behavioral lapses are even more interesting. We have conscious parts of our mind/brain that make explicit decisions (I'm getting out of bed now; I won't eat that slice of chocolate cake; I'm going to smile when I open this present, no matter what it is), but it's clear to any owner of a consciousness that making a decision is one thing -- making it happen is another (there are more prosaic cases as well, in which we mean to say one word but a different one comes out).

Perhaps wires sometimes get crossed and our decision isn't transmitted to the relevant module of the brain. That seems like a pretty serious design flaw, so why hasn't evolution fixed it? Or perhaps there are non-conscious parts of our brain that sometimes override our conscious decisions ... in which case, what is consciousness for? (According to some, not for making decisions.)

Psychological theory aside, I suspect most people would like to understand why, despite a willing spirit, their flesh is so often weak.

Findings: The funniest pun

Puntastic has been running for a few weeks, and so far participants have contributed a total of 13,748 ratings of nearly 2,000 different puns. Currently, the most popular pun is:

College slogan: 'Draft beer, not students.'

53 different puns are tied for last place, supporting the hypothesis that there are a lot more bad puns than good puns.

When the study is done -- hopefully in a few months -- I'll post rankings of the puns used in the experiment. Before that time, though, we need many more people to rate puns.

If you've already participated, feel free to play again: with nearly 2,000 puns, it's unlikely you'll see many of the same ones again (with the exception of a few 'filler' puns that everyone sees). Just be sure that you answer "yes" to the question "Have you participated in Puntastic previously?"

Do monkeys have grammar?


The short answer is "no." But a new study in PLoS ONE suggests that some monkey calls may be morphologically complex. Here is the relevant passage:
Some calls were given to a broad, others to a narrow range of events. Crucially, “krak” calls were exclusively given after detecting a leopard, suggesting that it functioned as a leopard alarm call, whereas the “krak-oo” was given to almost any disturbance, suggesting it functioned as a general alert call. Similarly, “hok” calls were almost exclusively associated with the presence of a crowned eagle (either a real eagle attack or in response to another monkey's eagle alarm calls), while “hok-oo” calls were given to a range of disturbances within the canopy, including the presence of an eagle or a neighbouring group (whose presence could sometimes be inferred by the vocal behaviour of the females).
The authors take this as evidence that "oo" is a suffix of some sort that modifies the meaning of the preceding part of the call.

Maybe. Whether two words that contain similar sounds share a morpheme or not is an old problem in linguistics, and one that is actually hard to solve. I cut my teeth on such questions as whether the /t/ in swept is the same past-tense affix that we see in dropped. Notice that both words end in the sound "t" -- but, then, so does "hat," and probably nobody thinks the /t/ in "hat" is a suffix.

One crucial test the authors would need to do would be to show that this "oo" suffix can be used productively. If this was a study of humans, you might teach them a new word "dax," which refers to a chipmunk, and then see if "dax-oo" was interpreted as "warning, there's a chipmunk!"

None of which is to say that this isn't an intriguing finding, but we're still a ways from talking monkeys.

What does a professor do all day?

Readers of this blog will remember Dick Morris's strange claim that professors don't do anything except teach -- it's not even clear he thinks they have to prepare for class or grade papers. This provoked a considerable backlash on the Web, in which many pointed out that teaching is, for many professors, only one pursuit (and often not the main one).

Around the same time, but apparently independently, a professor of psycholinguistics, Gerry Altmann, listed how he had been spending his time. In the space of 2.5 weeks, he sent out 18 manuscripts for review (he's a journal editor), wrote 51 action letters (telling authors what decisions had been made), reviewed 7 NIH grants (interesting, since he works in the UK), and visited collaborators in Philly to discuss a new project (presumably part -- but not all -- of the 3,677 miles he reports having flown).

Using Google Wave

I admit I'm pretty excited about Google Wave. I am currently involved in a fairly large collaboration. It's large in

  • the scale of phenomena we're trying to understand (essentially, argument realization)
  • the number of experiments (literally, dozens)
  • the number of people involved (two faculty, three grad students, and a rotating cast of research assistants, all spread across three universities)
One problem is keeping everybody up-to-date and on the same page, but an even more serious problem is that it's difficult to keep track of everything we've discovered. In the last few weeks, we've moved much of our discussion into Wave, and I think I already have a better sense of some of the issues we've been dealing with.

Collaborative Editing?

If you are interested in Wave, the best thing is to simply check out their website or one of the many other websites describing how to use it. The main idea behind it is to enable collaborative document editing -- that is, a document that can be edited by a group of people simultaneously.

Anyone who has worked on a group project is familiar with the following problem: only one person can work on a document at a given time. For instance, if I send a paper to a co-author for editing, I can't work on the paper in the meantime or risk a real headache when trying to merge the separate edits later.

Google Docs and similar services have allowed real-time collaborative editing for a while, but they weren't really designed for real-time collaboration. For instance, it's difficult to record who made what changes. They also don't allow comments (sometimes you don't want to change the text, you just want to say you don't understand it). And if one person makes a change and you want to undo it, good luck. Google Wave has these and other features.

Using the Wave

Currently, we're using Wave as a collective notebook, where we record everything we've learned in the course of our research. This keeps everyone up-to-date. It also allows us to discuss issues without requiring meetings (a good thing, since we're at different universities).

For instance, recently I read a claim that a certain grammatical structure that is impossible in English happens to be possible in Japanese. I noted this in a section of our document, and attached a comment: "Yasu, Miki: can you check this?" As it happens, two members of our project are native Japanese speakers. In a series of nested comments, they discussed the issue, came to a conclusion (that the paper I had read was wrong), and then we finally deleted the comments and replaced the whole section with a summary of the discussion and conclusions.

In other sections, we've included the methods for experiments that we're designing, commenting on and ultimately editing the methods until everyone agrees.

Needed Improvements

At the moment, Wave is very much in beta testing and is underpowered. Although you can embed files and websites, there's no way to embed, say, a spreadsheet -- a major inconvenience for us, since much of our work involves making lists of verbs and their properties. Whenever I want the most updated list, I need to email whoever was working on it last, which isn't ideal.

Of course, we could use Google Docs, but it has the problems listed above (no way of commenting, no track-changes, no archive in case we decide to undo a change). Presumably these kinds of features will be added to Wave in the future.

What isn't said

"Last summer, I went to a conference in Spain."

Technically, all you learned from that sentence is that there was a conference in Spain and that I traveled to it from some other location that isn't Spain. That's what the sentence literally means.

If you know that I live in Boston, you probably assumed that I flew to Spain, rather than take a boat. You're probably confident that I didn't use a transporter or walk on water. You probably also assumed that the conference is now over. All these things are true, but they weren't actually in what I said.

The Communication Game

This presents a problem for understanding communication: a lot is communicated that is not said. A lot of the work I do is focused on trying to figure out not just what a sentence means, but what is communicated by it ... and that is the focus of the newest experiment on GamesWithWords.org.

In The Communication Game, you'll read one person's description of a situation (e.g., "Josh went to a conference in Spain"). Then, you'll be asked to decide whether, based on that description, you think another statement is true. Some will be obviously true ("Josh went to a conference"), some probably true ("Josh went to the conference in Spain by plane"), some clearly false ("Josh went to the conference in Spain by helicopter"), and some hard to tell ("Josh enjoyed the conference in Spain more than the conference in Boston").

Scientifically, what we're interested in is which questions are easier to get right than others. From that, we'll get a sense of what people's expectations are. Part of what makes this a game is that the program keeps score, and you'll find out at the end how well you did.
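For the curious, the core of that analysis is just a tally. Here's a toy sketch with made-up responses (the categories mirror the examples above, but the data are hypothetical, not from the real experiment): group answers by question type and compute the proportion correct in each.

```python
from collections import defaultdict

# Hypothetical response log: (question_category, answered_correctly).
responses = [
    ("obviously true", True), ("obviously true", True),
    ("probably true", True), ("probably true", False),
    ("clearly false", True), ("clearly false", True),
    ("hard to tell", False), ("hard to tell", True),
]

totals = defaultdict(lambda: [0, 0])  # category -> [num correct, num answered]
for category, correct in responses:
    totals[category][0] += int(correct)
    totals[category][1] += 1

accuracy = {c: n_correct / n_total for c, (n_correct, n_total) in totals.items()}
for category, acc in sorted(accuracy.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {acc:.0%}")
```

Question types where accuracy stays high across many participants are the ones whose implied content people reliably infer.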