As much as I love Web-based experiments, they aren't ideal in all situations. Currently, I have a series of short surveys, each of which requires 20 or so participants. When I say "series of short surveys," that doesn't mean I have them all in advance. The results of each survey dictate what the next survey will be.
This is hard to run online, because it means I need time between surveys to analyze the results and think up a new experiment. On the Web, it's hard to take a timeout.
Instead, I took on a new research assistant, made a sign, and bought a bunch of candy. Then I set up shop outside the Harvard Science Center and began giving away candy in exchange for participation in the surveys.
At first, it was awful. We got lots of tight-lipped smiles, but nobody stopped or even really made eye contact. I thought, "Why did I think this was a good idea? I hate this sort of thing." Within about 10 minutes, though, we got into a groove, and we've been collecting data at a pretty good clip every time we've gone out since.
It turns out that, other than feeling like a canvasser, it's a fun way of collecting data. You get to be outdoors and away from the computer. You get to actually interact with people. And the pace of the research is, if anything, even faster than Web-based research. We typically average 30 or so participants per hour. The Moral Sense Test gets that kind of traffic; The Cognition and Language Lab, unfortunately, does not. This is probably not unrelated to the 392 appearances of "Marc Hauser" in the New York Times archives, compared with the single appearance of "Joshua Hartshorne." (Journalists: if you are reading this, call me!)
Street-corner surveying is an old method. Many people seem to believe it is more reliable than Web-based surveying. Why that would be is beyond me. We are stopping busy people with other things on their minds. Many just want candy. We are in a busy, noisy area with tours passing by, camera bulbs flashing, and the occasional demonstration. And on Tuesdays, there is a farmers' market in the same area.
Sometimes the responses are hard to explain. One control question reads along these lines: "John has two children. How likely do you think it is that he has two children? How likely do you think it is that he has three children?" More than a few people agreed that it is more likely that John has three children than that he has two. One person carefully corrected the grammar on one page, which was a neighborly thing to do, except that the grammar on that page was actually right, and the "corrections" made it wrong.
When collecting data from humans, there is always noise.