Field of Science

Where else to read the Cognition and Language Lab blog

Dear coglanglab.blogspot.com readers.

As many of you know, I have what amounts to a mirror of this blog running over at scienceblog.com. I maintain this one because it has some great features, like good archiving, better formatting, and the ability to save drafts of posts.

However, almost all the traffic is over at the other site. This means that while those posts get a number of comments, the ones here do not. So if you like reading comments or enjoy the give-and-take of a conversation in comments, you might prefer to read the other site.

This has gotten even easier now that scienceblog.com has added my own RSS feed, which you can find here. The purveyor of scienceblog.com promises to add some of that extra functionality eventually, but until he does, I will continue posting here as well, so if you prefer this site, you should be able to continue reading it for some time.

Which do you answer: Mail or Email?

One of the most difficult problems in research on humans is response bias. If you want to study rabbits, it's relatively easy to get a random sample of rabbits to test. Rabbits have very little say in the matter.

Humans, on the other hand, can choose to participate or not. Certain types of people tend not to participate in experiments (low-income, rural people, for instance), while other groups provide the bulk of test subjects (college psychology majors, for instance). If you are studying something thought to be fairly independent of geography, SES or education (low-level vision, for instance), this may not matter so much. If you are studying social attitudes, it does.

Web-based Response Bias

One potential concern about Web-based research is that it changes the dynamics of the response bias. Surveying college undergraduates may involve considerable sample bias, but at least we know what the bias is. Some researchers worry that Web-based studies must suffer from some response bias of their own (suburban teenagers are probably over-represented on the Web), but it's one we don't understand as well and have less control over.

At least one study suggests that, at least in some circumstances, we may not need to worry about this. This is according to Sabina Gesell, Maxwell Drain and Michael Sullivan, writing in the latest issue of the International Journal of Internet Science. They ran an employee-satisfaction survey of hospital employees. Half were sent surveys via the Web, and half by regular mail. The response rate (the proportion of surveys completed) was equivalent for the two methods, and the reported job satisfaction of the two groups was identical.
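For readers curious how one checks whether two response rates differ, here is a minimal sketch of the standard pooled two-proportion z-test in Python. The counts below are hypothetical placeholders (the paper's actual Ns are not quoted here); only the test itself is standard.

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for equality of two proportions, using the
    pooled estimate of the common proportion under the null."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 120/300 Web responses vs. 118/300 mail responses
z, p = two_proportion_z(120, 300, 118, 300)
print(f"z = {z:.2f}, p = {p:.2f}")  # a large p-value means no detectable difference
```

With rates this close, the test (unsurprisingly) finds no significant difference, which is the pattern Gesell and colleagues report.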

Interestingly, the respondents were also asked whether they preferred a Web-based or paper-based survey. The most common response was that they did not care, but of those who expressed an opinion, the majority preferred the survey type they had actually received (sounds like cognitive dissonance).


--------

Gesell, S.B., Drain, M., & Sullivan, M.P. (2007). Test of a Web and paper employee satisfaction survey: Comparison of respondents and non-respondents. International Journal of Internet Science, 2(1), 45-58.

How to win at baseball (Do managers really matter?)

It's a standard observation that when a team does poorly, the coach -- or in the case of baseball, the manager -- is fired, even though it wasn't the manager dropping balls, throwing the wrong direction or striking out.

Of course, there are purported examples of team leaders that seem to produce teams better than the sum of the parts that make them up. Bill Belichick seems to be one, even modulo the cheating scandals. Cito Gaston is credited with transforming the Blue Jays from a sub-.500 team into a powerhouse not once but twice, his best claim to excellence being this season, in which he took over halfway through the year.

But what is it they do that matters?

Even if one accepts that managers matter, the question remains: how do they matter? They don't actually play the game. Perhaps some give very good pep talks, but one would hope that the world's best players would already be trying their hardest pep talk or no.

In baseball, one thing the manager controls is the lineup: who plays, and the order in which they bat. While managers have their own different strategies, most lineups follow a basic pattern, the core of which is to put one's best players first.

There are two reasons I can think of for doing this. First, players at the top of the lineup tend to bat more times during a game, so it makes sense to have your best players there. The other reason is to string hits together.

The downside of this strategy is that innings in which the bottom of the lineup bats tend to be very boring. Wouldn't it make sense to spread out the best hitters so that in any given inning, there was a decent chance of getting some hits?

How can we answer this question?

To answer this question, I put together a simple model. I created a team of four .300 hitters and five .250 hitters. At every at-bat, a player's chance of reaching base was exactly their batting average (a .300 hitter reached base 30% of the time). All hits were singles. Base-runners always moved up two bases on a hit.

I tested two lineups: one with the best players at the top, and one with them alternating between the poorer hitters.

This model ignores many issues, such as base-stealing, double-plays, walks, etc. It also ignores the obvious fact that you'd rather have your best power-hitting bat behind people who get on base, making those home-runs count for more. But I think if batting order has a strong effect on team performance, it would still show up in the model.
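For the curious, the model described above can be sketched in a few lines of Python. This is my reconstruction from the description in the post; details such as carrying the batting order over between innings and playing nine three-out innings per game are assumptions the post does not spell out.

```python
import random

def simulate_season(lineup, games=162, innings=9, seed=None):
    """Total runs in one season for a lineup of batting averages, under the
    toy model above: every hit is a single, runners advance exactly two
    bases on a hit, and there are no walks, steals, or double plays."""
    rng = random.Random(seed)
    total_runs = 0
    for _ in range(games):
        batter = 0  # batting order carries over from inning to inning
        for _ in range(innings):
            outs = 0
            bases = [False, False, False]  # first, second, third
            while outs < 3:
                if rng.random() < lineup[batter]:
                    # runners on second and third score; the runner on
                    # first moves to third; the batter takes first
                    total_runs += bases[1] + bases[2]
                    bases = [True, False, bases[0]]
                else:
                    outs += 1
                batter = (batter + 1) % len(lineup)
    return total_runs

good, weak = 0.300, 0.250
stacked = [good] * 4 + [weak] * 5        # best hitters at the top
alternating = [good, weak] * 4 + [weak]  # best hitters spread out

for name, lineup in [("stacked", stacked), ("alternating", alternating)]:
    seasons = [simulate_season(lineup, seed=s) for s in range(20)]
    print(name, sum(seasons) / len(seasons))
```

Run with different seeds, both lineups land in the neighborhood of 300 runs per season, consistent with the totals reported below.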

Question Answered

I ran the model on each of the line-ups for twenty full 162-game seasons. The results surprised me. The lineup with the best players interspersed scored nearly as many runs in the average season (302 1/4) as the lineup with the best players stacked at the top of the order (309 1/2). Some may note that the traditional lineup did score on average 7 more runs per season, but the difference was not actually statistically significant, meaning that the two lineups were in a statistical tie.

Thus, it doesn't appear that stringing hits together is any better than spacing them out.

One prediction did come true, however. Putting your best hitters at the front of the lineup is better than putting them at the end (291 1/2 runs per season), presumably because the front end of the lineup bats more times in a season. Although the difference was statistically significant, it still amounted to only 1 run every 9 games, which is less than I would have guessed.

Thus, the decisions a manager makes about the lineup do matter, but perhaps not very much.

Parting thoughts

This was a rather simple model. I'm considering putting together one that does incorporate walks, steals and extra-base hits in time for the World Series in order to pick the best lineup for the Red Sox (still not sure how to handle sacrifice flies or double-plays, though). This brings up an obvious question: do real managers rely on instinct, or do they hire consultants to program models like the one I used here?

In the pre-Billy Beane/Bill James world, I would have said "no chance." But these days management is getting much more sophisticated.

Science Funding and Presidential Politics

John McCain has just answered a questionnaire by Scientists and Engineers for America (Obama did so several weeks ago).

You should read the answers yourself, but as someone who expects to be involved in American science for the next several decades, I found McCain's disappointing.

Obama begins his answer to the first question about American innovation this way:

Ensuring that the U.S. continues to lead the world in science and technology will be a central priority for my administration. Our talent for innovation is still the envy of the world, but we face unprecedented challenges that demand new approaches. For example, the U.S. annually imports $53 billion more in advanced technology products than we export. China is now the world’s number one high technology exporter. This competitive situation may only worsen over time because the number of U.S. students pursuing technical careers is declining. The U.S. ranks 17th among developed nations in the proportion of college students receiving degrees in science or engineering; we were in third place thirty years ago.
This reassures me that he understands the problem. McCain, on the other hand, merely says "I have a broad and cohesive vision for the future of American innovation. My policies will provide broad pools of capital, low taxes and incentives for research in America, a commitment to a skilled and educated workforce, and a dedication to opening markets around the world."

Solutions to our Problems

OK. Maybe McCain isn't as good at setting up the problem as Obama is. What does his broad and cohesive vision look like? Most of his proposals sound nice, but it's hard to tell what exactly he means. For instance, he wants to "utilize the nation's science and technology infrastructure to develop a framework for economic growth both domestically and globally."

Sounds good. How? (One way might be to make the R&D tax credit permanent, something which Obama supports but which McCain strangely neglects to mention.)

Other parts of the boilerplate sound like he is merely suggesting we do what we are already doing. I am referring to points like "Fund basic and applied research in new and emerging fields such as nanotechnology and biotechnology..."

Hmmm. We already do that. Maybe he intends to increase funding for such projects, but he doesn't say.

McCain also says he has supported and will continue to support increasing federal funding for research. However, he doesn't say how much. Federal funding has increased over the last few years. It's just not keeping up with inflation. So hazy talk about "increasing funding" may well be meaningless.

One of the few concrete proposals McCain makes is to "eliminate wasteful earmarks in order to allocate funds for science and technology investments." Sounds good. There are $16 billion of earmarks in the 2008 federal budget. The budget for the National Institutes of Health alone is $28 billion. So even if all the "earmarks savings" were spent on science -- and he has other things he wants to do with that money -- we couldn't even get close to doubling science funding, as Obama has proposed.

Does Obama do any better?

If I find McCain's answers discouraging, Obama's are heartening. Although he uses fewer words than McCain, those words are packed with specific proposals, such as doubling federal funding for basic research over a period of 10 years and making the R&D tax credit permanent.

Within the general increase in science funding, Obama shows again that he, or at least his advisors, actually know something about the state of American science, in that he singles out the need to "increase research grants for early-career researchers to keep young scientists entering these fields."

The problem that he is referring to is that the average age at which a scientist receives their first NIH grant is now over 40. While the truism that scientists do their best work before the age of 40 is less true for the biomedical researchers NIH funds than it is for mathematicians and physicists, this funding trend is still worrisome.

While on that topic: of all 2007 NIH applications for a new R01 grant -- the bread-and-butter grant that funds many or most large labs -- only 19% were funded. While we certainly want money to go to the best projects, it's important to remember that when a scientist doesn't get a grant, she doesn't just go back to her day job. Science is her day job. So she has to apply again. With 81% of scientists applying to the NIH failing to get funding each year, that means many, many burdensome reapplications -- taking time and money away from doing actual science.

All that is just another reason that significantly increasing federal funding for research is crucial.

I won't go into detail about Obama's other policies, but I found them similarly encouraging and, frankly, a breath of fresh air.

The Candidate with Vision and Expertise

When Obama first began campaigning, some people wondered what the substance behind his vision was. As I read through his responses to this questionnaire, I was struck time and time again that (1) he seemed to really understand and appreciate what challenges I face in my daily attempts to do science, and (2) he had concrete plans to address those issues.

McCain's answers, on the other hand, rang hollow and out of touch.

How many memories fit in your brain? More than we thought

One of the most obvious facts about memory is that it is not nearly so good as we would like. This definitely seems true in day-to-day life, and one focus of my research during the last couple years has been why our ability to remember what we see over even very short time periods is so very limited.

So memory is crap, right?

It may be hard to remember a visual scene over a very short time period, but new evidence suggests that it is remarkably easy to remember a visual image over a longer period of time (several hours).

Researchers at MIT showed participants nearly 3,000 visual images (see some of them here) over the course of 5 1/2 hours. Afterwards, their memory was tested. They were able to discriminate the pictures they actually saw from slightly altered versions of the same picture nearly 90% of the time.

This is frankly incredible. When I show participants much, much simpler images and then ask them to recognize the same images just 1 second later, accuracy is closer to 80%!
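One standard way to compare accuracies across tasks like these is to convert percent correct into the sensitivity measure d'. Treating both tasks as two-alternative forced choice under equal-variance signal detection theory (my assumption for illustration, not something the paper commits to), the conversion is a one-liner:

```python
from math import sqrt
from statistics import NormalDist

def dprime_2afc(prop_correct):
    """Sensitivity (d') for a two-alternative forced-choice task, assuming
    the equal-variance Gaussian signal detection model: d' = sqrt(2) * z(pc)."""
    return sqrt(2) * NormalDist().inv_cdf(prop_correct)

print(f"90% correct -> d' = {dprime_2afc(0.90):.2f}")  # -> 1.81
print(f"80% correct -> d' = {dprime_2afc(0.80):.2f}")  # -> 1.19
```

On this view, memory for the 3,000 pictures after five hours was considerably more sensitive than memory for my simple images after one second, which makes the contrast even starker.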

These results are going to necessitate some re-thinking of the literature. They suggest that our brains are storing a lot more information than many of us thought just a little while ago. They also suggest a very strange interaction between time and memory strength that will need to be better understood.

So this is a surprise?

Yes, and no. The results are surprising, but their publication last week in the Proceedings of the National Academy of Sciences was not, at least not for me. I had the opportunity to first be stunned by these data nearly a year ago, when the third author gave a talk at Harvard. It came up again when he gave another talk during the winter. (Oh, and I've known the first author since we sat next to each other during a graduate school application event at MIT, and we still regularly talk about visual memory).

So I and many others have had the opportunity to think through the implications for a long time now, which means it is very possible that there are labs which have already completed follow-up studies.

While this has nothing to do with the big story itself -- the sheer massiveness of visual memory elicited in this study -- I bring it up as an example of my point from last week: the fact that America is (for now) the center of the scientific world gives us tremendous institutional advantages, not the least of which is that it is much easier to stay fully up-to-date. If that mantle passes to another country, we will be the ones reading about old news only when it finally appears in print.

Parting Thoughts

If you yourself have done research like this, the first thing you probably wondered was where they got 3,000 carefully controlled visual images, not to mention all the test images.

Google Images, baby, Google Images. It still took a great deal of time, but as I hear the story told, the ability to download huge numbers of web images via Google was immensely helpful. This is just one more example of Web search as a tool for science.

Politicians who blame universities for their own failings

There has been some flak in Congress lately about the cost of tuition at the famous private universities. The primary senator behind the movement is Charles Grassley, a Republican of Iowa.

Tuition, room, board and fees at a university like Harvard are now hovering around $50,000. In a recent press release, Grassley writes that:

"The Congressional Research Service reports that, on the basis of mean household income of a household in the bottom fifth of the population, the price of college in 2005 was over 70 percent of the household's income."
Is there an easy solution that won't cost me anything?

Luckily, according to Grassley, there is: make schools spend more of their endowments. He notes that endowments are an awful lot like the funds of private foundations:

"In the 1960s, Congressman Wright Patman issued a series of reports, one of which included recommendations to limit foundation life to 25 years..."

So, colleges should only last for 25 years?

I'll give Grassley the benefit of the doubt and assume that's not where he was going with that quote (though there is no evidence to the contrary in the original document). In any case, he has a bigger problem: most schools don't have large endowments. Just 75 universities control 71% of all endowment assets. So that means the other 29% must cover the remaining 2,543 accredited four-year institutions in the US.

It's not clear just how well that will work.

Grassley also has a big conceptual problem. Universities invest their endowments and use the interest for operating costs. Costs go up each year. Inflation is one reason. Increasing numbers of students is another. Libraries must expand to incorporate new books. New departments (microbiology, ecological studies, Arabic, etc.) must be founded, and rarely are old ones (history, physics, literature) abandoned. A stagnant endowment is death to such a university.

Again, if your university has an endowment. So what about the rest?

What about the University of Iowa?

The flagship public university in Grassley's state is the University of Iowa. As a relatively wealthy public university, it has an endowment of almost $1,000,000,000. That seems like a lot, but it also has over 30,000 students, which gives the university an endowment of just over $32,000 per student.

That only sounds like a lot if the university spends it all this year, after which there won't be any money for next year. Grassley wants universities to spend at least 5% of their endowment each year -- even in bad years in which the endowment earns less than 5% interest.

That comes out to $1,615 per student per year. The in-state tuition this year (not counting room and board) appears to be $6,554. So that kind of spending will make a dent.

Except that schools already rely heavily on their endowments. That's why they have them. I couldn't find numbers for the University of Iowa, but I have heard that many schools already spend over 4% of their endowment yearly. So let's assume the University of Iowa spends 4%. That means Grassley is calling for his home state university to spend an extra $323 per student per year from its endowment -- or, about the cost of 3-4 college textbooks.
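The arithmetic is easy to check in a few lines of Python. The enrollment figure is an assumption chosen to match the per-student numbers quoted above; everything else follows from the post.

```python
endowment = 1_000_000_000   # approximate University of Iowa endowment
students = 31_000           # assumed enrollment ("over 30,000 students")
tuition = 6_554             # in-state tuition quoted above

per_student = endowment / students
mandated = 0.05 * per_student   # Grassley's proposed 5% payout floor
current = 0.04 * per_student    # assumed current payout rate
extra = mandated - current      # the 1% difference the proposal buys

print(f"endowment per student:        ${per_student:,.0f}")
print(f"5% payout per student:        ${mandated:,.0f}")
print(f"extra spending per student:   ${extra:,.0f}")
print(f"as a share of tuition:        {extra / tuition:.1%}")
```

The punchline is the last line: the extra payout amounts to only a few percent of even in-state tuition.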

And remember, this is one of the country's richest universities. The vast majority of schools have far, far smaller endowments, if they have one at all.

Right on the fact. Wrong on the reason.


Governments have been steadily cutting funding for public universities. I'm fortunate to be at an endowment-rich university, but my father is at a public institution in Michigan. The budget crisis there has caused steep cutbacks in funding ... and thus steep increases in tuition.

This is a story I've heard repeated at many universities across the country.

It's nice to be able to go to Harvard. But only about 2,000 students each year are accepted. I'm sure they'd appreciate the help with tuition, but it's not going to affect most Americans. The university tuition crisis is in the public universities, which have small endowments, if any.

But I suppose Grassley doesn't want to talk about that.

What if someone is watching?

I have heard a number of concerns about Web-based research. One of the most reasonable -- at least, the one that I find most compelling -- is that participants may be more motivated when an experimenter is physically present. This makes some sense: people might take a test more seriously if it is handed to them by a living person than if they run across it on Google. The presence of someone watching you provides a certain amount of social pressure.

The implications of this question go beyond Web-based experimenting to any type of work conducted "virtually."

There are some empirical reasons to suspect this might be a real concern. In the famous Milgram experiments, participants' behavior in a simulated torture scenario varied considerably depending on the appearance and actions of the experimenter.

That said, my Web-based experiments do not involve torture. Some are actually quite fun. So it is not at all clear whether the lack of a physical experimenter actually affects how the participant behaves.

Luckily, Heike Ollesch, Edgar Heineken, and Frank Schulte of the Universität Duisburg-Essen have looked into just this question. They ran a couple of simple experiments. In one, participants compared visual images. In another, they memorized lists of words. Some participants were recruited as part of a standard Web-based experiment. In control trials, the same experiment was run, but either in the lab, with an experimenter present, or outside in campus public areas, again with an experimenter present.

Participants did no better in the lab-based condition than in the Web-based condition, though in the word-memorization experiment, participants performed more poorly in the public areas (as might be expected) than in either the Web-based or lab-based conditions.

In a final, third experiment, participants described short videos. They actually produced more comprehensive descriptions in the Web-based condition than in the lab-based or public-area conditions.


--------
Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371-378. DOI: 10.1037/h0040525

Ollesch, H., Heineken, E., & Schulte, F.P. (2006). Physical or virtual presence of the experimenter: Psychological online-experiments in different settings. International Journal of Internet Science, 1(1), 71-81.

Why it doesn't matter if America falls behind in Science

Earlier this year, an article in the New York Times argued that it doesn't matter that the US is losing its edge in science and research. In fact, the country can save considerable money by letting other countries do the hard work. The article went on to explain how this can be viewed as outsourcing: let other, cheaper countries do the basic research, and try to turn that research into products in the US.

Really?

The article quoted research published by the National Academy of Sciences, and while I fully recognize that they know more about this topic, have thought more about this topic, and are no doubt considerably smarter, I'm skeptical.

There are two problems I see. The first is that just because other countries are picking up the slack doesn't mean there isn't slack. The second is that I'm not convinced that, in the long term, allowing all the best, most cutting-edge research to take place in other countries is really economically sound.

Being a Good Citizen

The article in question seems to imply that there is some amount of research, X, that needs to be done. If other countries are willing to do it, then no more is needed.

To make this concrete, as long as one new disease is cured, say, every five years, there's simply no reason to invest any additional energy into curing diseases. That's enough. And for people who have some other disease that hasn't been cured, they can wait their turn.

The concept is most clear when it comes to disease, but I think the same argument applies everywhere else. Basic science is what gives us new technology, and technology has been humanity's method of improving our quality of life since at least a few million years ago. Perhaps some people think quality of life is improving fast enough -- or too fast, thank you -- but I, at least, would like my Internet connection to be a bit faster now rather than later.

The fact that China, Taiwan, Singapore & co. are stepping up to the plate is not a reason for us to go on vacation.

Can We Really be Competitive as a Backwater?

The article casts "outsourcing" science as good business by noting that America is still the best at turning science into products. So let other countries do the expensive investment into research -- we'll just do the lucrative part that comes later.

Do they think other countries won't catch on?

I have to imagine that Singapore and similar countries are investing in research because they want to make money. Which means they will want their share of the lucrative research-to-product business. So America's business plan, then, would have to be to try to keep our advantage on that front while losing our advantage on basic research.

This may well be possible. But it has some challenges. It's no accident that the neighborhood around MIT is packed with tech start-ups. I'm not a sociologist, but I can speculate on why that is. First, many of those tech start-ups are founded by MIT graduates. They aren't necessarily Boston natives, but having been drawn to one of the world's great research universities, they end up settling there.

Second, Flat World or not, there are advantages to being close to the action. Many non-scientists don't realize that by the time "cutting-edge" research is published, it is often a year or even several years old. The way to stay truly current is to chat with the researchers over coffee about what they are doing right now, not about what they are writing right now.

Third, science benefits from community. Harvard's biggest advantage, as far as I can tell, is the existence of MIT two miles down the road, and vice versa. Waxing poetic about the free exchange of ideas may sound a bit abstract, but it has a real impact. I have multiple opportunities each week to discuss my current projects with some of the best minds in the field, and I do better work for it.

In short, I think any country that maintains the world's premier scientific community is going to have impressive structural advantages when it comes to converting ideas into money.

That Said...

That said, I think there are two really useful ideas that come out of that article. The first is the challenge against the orthodoxy that strong science = strong economy. Without challenges like these, we can't home in on what exactly is important about funding basic research (not saying I've been successful here, but it is a start, at least). The second is that even if the US maintains its lead in science, that lead is going to shrink no matter what we do, so it's important to think about how to capitalize on discoveries coming in from overseas.

Political Note

Those who are concerned about basic research in the US should note that while John McCain does not list science funding as a priority on his website -- unless you count non-specific support of NASA -- and did not mention it in his convention speech, Barack Obama did both (he supports doubling basic science funding).

Folks in Eastern Washington may be interested to know that a clinical psychologist is running for Congress against an incumbent. Though Mark Mays has been professionally more involved in treatment than in research, research is among his top priorities.

Research by one of the Progenitors of Web-based Research

As science goes, Web-based research is still fairly new. However, it is now at least a decade old, as the publication record of Ulf-Dietrich Reips shows. Reips was one of the first to turn to the Web and has been at the forefront of the field since then.

It appears that Reips and Uwe Matzat run a yearly journal devoted to Web-based experiments: the International Journal of Internet Science. This journal is, fittingly, open access. Although the journal describes itself as "a peer reviewed open access journal for empirical findings, methodology, and theory of social and behavioral science concerning the Internet and its implications for individuals, social groups, organizations, and society," much of the research so far appears to concentrate on that middle goal: evaluation of Web-based experiments as a methodology (a project in which Reips has been involved in the past).

This is obviously important, since a number of people remain skeptical of Web-based research. A few of the papers that have come out already are worth mentioning and will appear in future posts.

Monkey See, Monkey Do

Despite the well-known phrase, monkeys are very poor imitators. They also don't point. Some researchers have built up theories about the differences between humans and animals around these facts.

Recent evidence, however, suggests that dolphins may point.


----
This abbreviated post may be the last for a few weeks. Coglanglab is going on vacation.

The verbal memory hegemony

One fact about the world is that the most famous memory researchers did most of their work on verbal memory. Alan Baddeley and George Miller both come to mind -- and I doubt anybody can think of more famous memory researchers in the last 50 years.

Another fact about the world is that many researchers -- not necessarily Baddeley or Miller -- have assumed that anything discovered using memory tests involving words should apply to other forms of memory as well. To pick unfairly on one person, Cowan notes in his masterful paper "The magical number 4 in short-term memory" that out of several related experiments, one has results that diverge from the others. Cowan attempts an explanation but basically throws up his hands. He doesn't notice that of all the experiments discussed in that section, the divergent one was the only one to use visual rather than verbal stimuli.

Similarly, a reviewer of my paper which just came out complained that the results reported in the paper only "told us things we already knew." As evidence, the reviewer cited a number of other papers, all of which had investigated verbal rather than visual short-term memory.

As it happens, the results in this case were very similar to what had been reported previously for verbal memory. But it could have come out differently. That was the point of doing the experiment.

Partly because of this bias in favor of verbal materials, not enough is known about visual memory, though this has been changing in recent years, thanks in part to folks like Steve Luck, George Alvarez, Yuhong Jiang, Edward Vogel and several others.

----------
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87-185.

Hartshorne, J.K. (2008). Visual working memory capacity and proactive interference. PLoS ONE.

Miller, G.A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63, 81-97.

A scientist at work: Street corner surveying

As much as I love Web-based experiments, they aren't ideal in all situations. Currently, I have a series of short surveys, each of which requires 20 or so participants. When I say "series of short surveys," that doesn't mean I have them all in advance. The results of each survey dictate what the next survey will be.

This is hard to run online, because it means I need time between surveys to analyze the results and think up a new experiment. On the Web, it's hard to take a timeout.

Instead, I took on a new research assistant, made a sign, and bought a bunch of candy. Then I set up shop outside the Harvard Science Center and began giving away candy in exchange for participation in the surveys.

At first, it was awful. We got lots of tight-lipped smiles, but nobody stopped or even really made eye-contact. I thought, "Why did I think this was a good idea? I hate this sort of thing." Within about 10 minutes, though, we got into a groove and have been collecting data at a pretty good clip every time we go out.

It turns out that, other than feeling like a canvasser, it's a fun way of collecting data. You get to be outdoors and away from the computer. You get to actually interact with people. And the pace of the research is, if anything, even faster than Web-based research. We typically average 30 or so participants per hour. The Moral Sense Test gets that kind of traffic; The Cognition and Language Lab, unfortunately, does not. This is probably not unrelated to the 392 appearances of "Marc Hauser" in the New York Times archives, compared with the single appearance for "Joshua Hartshorne." (Journalists: if you are reading this, call me!)

Street corner surveying is an old method. Many people seem to believe it is more reliable than Web-based surveying. Why that should be is beyond me. We are stopping busy people with other things on their minds. Many just want candy. We are in a busy, noisy area with tours passing by, camera bulbs flashing and the occasional demonstration. And on Tuesdays, there is a farmers' market in the same area.

Sometimes the responses are hard to explain. One control question reads something like: "John has two children. How likely do you think it is that he has two children? How likely do you think it is he has three children?" More than a few people agree that it is more likely that John has three children than that he has two. One person carefully corrected the grammar on one page, which was a neighborly thing to do, except that the grammar on that page was actually right, and the "corrections" made it wrong.

When collecting data from humans, there is always noise.

Should we trust experiments on the Web?

When I first started doing Web-based experiments, a number of people in my own lab were skeptical as to whether I would get anything valuable out of them. Part of this was due to worries about method (How do you know the participants are paying attention? How do you know they are telling the truth?), but I think part of it was also a suspicion of the Internet in general, which, as we all know, is full of an awful lot of crap.

For this reason, I expected some difficulties getting my Web-based studies published. However, the first of these studies was accepted without much drama, and what concerns the reviewers did raise had nothing to do with the Web (granted, only one of the experiments in that paper was run online). Similarly, while the second study (run in collaboration with Tal Makovski) has run into some significant hurdles in getting published, none of them involved the fact that the experiments were all run online.

Until now. After major revisions and some new experiments, we submitted the paper to a new journal where we thought it would be well-received. Unfortunately, it was not, and many of the concerns involved the Web. Two of the reviewers clearly articulated that they just don't trust Web-based experiments. One went so far as to say that Web-based experiments should never be run unless there is absolutely no way to do the experiment in the lab.

(I would use direct quotes, but the reviewers certainly did not expect their comments to show up on a blog, anonymously or not. So you will have to take my word for it.)

Obviously, I trust Web-based experiments. I have written enough posts about why I think concerns are misguided, so I won't rehash that here. I am more interested in why exactly people have trouble with Web-based experiments as opposed to other methodologies.

Is it because the Web-based method is relatively new? Is it because the Internet is full of porn? Or is it simply the case that for any given method, there are a certain number of people who just don't trust it?

I have been doing street-corner surveying lately (a well-established method), and I can tell you that although it ultimately gives decent results, some very odd things happen along the way. But I suppose if, as a reviewer, I tried to reject a paper because I "just don't trust surveys," the action editor would override me.

Results from an Experiment: The Time Course of Visual Short-Term Memory

The first experiment I ran on the Web has finally made it into print. Rather fittingly, it has been published in a Web-based journal: The Public Library of Science One.

Visual Memory is a Scrawny Creature

That experiment, The Time Course of Visual Short-Term Memory, was part of a larger study probing a fundamental question about memory: why is visual working (short-term) memory so lousy? In recent years, visual memory folk like Edward Vogel and George Alvarez have debated whether we can store as many as four items in visual memory, while researchers focused more on verbal memory, such as Nelson Cowan, have been arguing over whether verbal memory can store only four items. There are memory tricks that can allow you to keep a hundred words in short-term memory; nobody has reported any similar tricks for visual memory.

There are many other ways in which visual memory is piddly compared to verbal memory, and I go into them in depth in the paper. Interestingly, previous researchers have not made much out of this difference, possibly because people seem to work on either visual memory or verbal memory, but not both.

Does Verbal Memory Explain the Differences between Humans and Apes?

One possibility that occurred to me is that if verbal memory in fact is considerably more robust and more useful than visual memory, that would endow verbal animals (i.e., adult humans) with significant advantages over non-verbal animals (e.g., human infants and all other animals). Just as writing has allowed some human cultures to supplement our limited memory capacity -- try doing a complicated math problem in your head; the real limitation is memory -- language could allow us to supplement limited non-verbal memory systems.

In fact, I found that many of the differences between adult humans on the one side and young children and apes on the other are found in tasks with large working memory demands. More examples are given in the paper, but this includes theory of mind tasks.

Is Verbal Memory Really Better?

Of course, this is fruitless speculation unless visual working memory really is inferior. The problem is that visual and verbal memory capacities are tested in somewhat different ways. The easiest way to test verbal memory capacity is to give people a list of words to remember and then ask them to repeat that list back (this forms an important part of many IQ tests).

This is obviously impossible with visual memory tests.

In a visual memory test, the participant is usually shown several images to remember. Then, after a delay, they are shown another image and asked if that is the same as one of the original images. Notice that you can be right 50% of the time just by guessing. Thus, to get a good measure, you need to do this many times.
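The arithmetic here can be made concrete with a small simulation. The sketch below is mine, not from the paper: it models an idealized observer who perfectly stores some number of items and flips a coin on the rest, which is one common way of thinking about change-detection performance. The function name and parameters are my own invention.

```python
import random

def simulate_change_detection(capacity, set_size, n_trials, seed=0):
    """Simulate a change-detection task for an observer who stores
    `capacity` of the `set_size` items and guesses 50/50 on the rest."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        # The probed item is in memory with probability min(k, N) / N.
        in_memory = rng.random() < min(capacity, set_size) / set_size
        if in_memory:
            correct += 1                    # remembered: always correct
        else:
            correct += rng.random() < 0.5   # forgotten: pure guess
    return correct / n_trials

# Expected accuracy is k/N + (1 - k/N) * 0.5. With k = 3 and N = 6,
# that works out to 0.75 -- well above the 50% guessing floor, but any
# single run is noisy, which is why so many trials are needed to
# estimate capacity reliably.
print(simulate_change_detection(3, 6, 10000))
```

Because guessing contributes so much variance, accuracy from a handful of trials can swing wildly around the expected value; averaging over many trials (and many participants) is what makes the capacity estimate stable.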

Proactive Interference

This brings up the specter of proactive interference. I have written about proactive interference recently and won't belabor it here. The basic intuition is that if you do many trials of a memory test, it becomes hard to remember which stimuli were on which trial. So if you have been asked to remember circles of different colors, and then you are asked if the last trial contained a blue circle, you might remember that you have seen a blue circle recently but not remember if it was on the last trial or not.

So if visual working memory capacity tasks require many trials and verbal working memory tasks do not, one possible reason for the poor performance observed for visual working memory might be greater proactive interference.

Nope -- not proactive interference

The short version of the results of the published paper is that proactive interference does decrease measured capacity for visual working memory, but not by very much (about 15%). So it cannot account for the differences between visual and verbal working memory. The search must go on.


I hope to describe how the Web-based experiment contributed to this result in a future post. But interested readers can also read the paper itself. It is fairly short and reasonably non-technical.



-------
Hartshorne, J.K. (2008). Visual working memory capacity and proactive interference. Public Library of Science One

CogLangLab's first published paper!

The first paper to contain data collected at my website (technically, at the old website) has just been published. The experiment in question was The Time Course of Visual Short-Term Memory.

This is the first of hopefully two papers using that data. The second paper will look at individual differences and aging. That paper is still in preparation and will hopefully be submitted in August.

I will explain the results and import of the just-published paper in an upcoming post.


--------

Hartshorne, J.K. (2008). Visual working memory capacity and proactive interference. Public Library of Science One