Many of my experiments in the lab use eye-tracking. Eye-tracking is a fantastic method for studying language comprehension; the particular version we use (the "visual world" paradigm) was developed mainly by Michael Tanenhaus and his colleagues. People tend to look at whatever is being talked about, and we can use that fact to measure what people think is being talked about.
For instance, in the eyetracking experiments related to the Pronoun Sleuth experiment I have online, I have people look at a picture with a couple of characters on it while they listen to a story about those characters. My interest is in what happens when they hear a pronoun: who do they look at?
This has two advantages over just asking (which is what I do in Pronoun Sleuth): first, it's an implicit measure, so I don't have to worry about people just telling me what they think I want to hear. More importantly, the measure is sensitive to time: people look at the person they think is being talked about only once they've decided who that is. So I can get a measure of how long it takes to understand a pronoun in different conditions.
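(To make that concrete, here's a minimal sketch, in Python, of how frame-by-frame gaze codes could be turned into such a latency measure. The data and function name are invented for illustration; this isn't our actual analysis code.)

    # Hypothetical sketch: estimate how long after a pronoun a participant
    # first looks at the character it refers to, given one gaze code per
    # video frame. All data below are invented.

    FPS = 20  # frames per second of the video recording

    def first_look_latency_ms(gaze_codes, pronoun_onset_frame, target):
        """Milliseconds from pronoun onset to the first frame coded as the
        target character, or None if the participant never looks there."""
        for frame in range(pronoun_onset_frame, len(gaze_codes)):
            if gaze_codes[frame] == target:
                return (frame - pronoun_onset_frame) * 1000 / FPS
        return None

    # One invented trial: 'L'/'R' = left/right character, '-' = elsewhere.
    codes = list("----LLLL--RRRRRRRR")
    print(first_look_latency_ms(codes, pronoun_onset_frame=6, target="R"))
    # -> 200.0 (four frames at 20 fps = 200 ms)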
A Better Eyetracker
In the lab, we use an automated eyetracker (the Tobii T60), which can record in real time what people are looking at. This is a fantastic time-saver. Unfortunately, it's also really expensive and really heavy, so it's mostly good for use in the lab. I'll be going to Taiwan to run some experiments in March, and I won't be taking the Tobii with me.
A cheap eyetracker can be built by other means, though. Our lab traditionally used what we affectionately call the "poor man's eyetracker," which is just a video camera. In a typical experiment, participants see four physical objects and hear information about the objects. The four objects are arranged in a square, and right in the center of the square is the video camera, pointed back at the participant. From the recording, then, we can work out whether the participant is looking at the object in the top left, bottom left, top right or bottom right.
This is a slower method than using the Tobii, because you have to code where the participant is looking frame-by-frame in each video (the Tobii does all that automatically). And it has much less resolution than the Tobii, which can record where a participant is looking down to an inch or so, whereas with Poor Man's we can only get quadrants of a screen. But it's a lot cheaper.
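For what it's worth, summarizing the codes is the easy part once the frame-by-frame work is done. Here's a toy sketch in Python (the labels and data are invented for illustration):

    # Hypothetical sketch: turn per-frame quadrant codes from a "poor man's
    # eyetracker" video into proportions of looking time.
    from collections import Counter

    QUADRANTS = ["TL", "TR", "BL", "BR"]  # top-left, top-right, etc.

    def quadrant_proportions(gaze_codes):
        """gaze_codes: one label per video frame ('TL', 'TR', 'BL', 'BR',
        or '-' for looks away from all four objects). Returns the
        proportion of frames spent on each quadrant."""
        counts = Counter(gaze_codes)
        return {q: counts[q] / len(gaze_codes) for q in QUADRANTS}

    codes = ["TL"] * 10 + ["BR"] * 25 + ["-"] * 5  # one invented trial
    print(quadrant_proportions(codes))
    # -> {'TL': 0.25, 'TR': 0.0, 'BL': 0.0, 'BR': 0.625}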
Although members of the lab have traveled with such a setup before, it's less portable than it could be. These experiments involve many objects, so you end up having a huge box of supplies to cart around.
Enter Laptop
Many laptops -- and all Mac laptops -- come with cameras built into their screens. One of my research assistants, a grad student at MIT, and I have been trying to turn our MacBooks into portable eyetrackers. My experiments usually only involve stories about two characters, one on the left-hand side of the screen and one on the right-hand side.
We show the pictures on the screen and use the built-in camera plus iMovie to record the participant. Based on our testing, the camera records about 20 frames per second, which is slower than the Tobii T60 (60 frames per second) but is enough for the relatively slow effects I study. This setup is incredibly portable, since all that is required is the laptop.
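(To see why 20 frames per second is workable: each frame spans 50 ms, so an eye movement can be located to within about half a frame, which is coarse but fine for effects that unfold over hundreds of milliseconds. The snippet below just makes that arithmetic explicit.)

    # Back-of-the-envelope: temporal resolution at different frame rates.
    def ms_per_frame(fps):
        """Duration of a single video frame, in milliseconds."""
        return 1000 / fps

    for fps in (20, 60):  # built-in camera vs. Tobii T60
        print(f"{fps} fps: one frame = {ms_per_frame(fps):.1f} ms")
    # 20 fps: one frame = 50.0 ms
    # 60 fps: one frame = 16.7 ms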
Except
There has only been one problem so far. As I said, I'm interested in where people look when they hear a pronoun, so I need to know exactly when they heard the pronoun, down to about 1/20 or at worst 1/10 of a second. The software I use to code eye movements doesn't play the sound while you're coding, so there's no way of using it to determine the beginning of the pronoun.
What researchers have often done is run two cameras simultaneously: one recording the participant and one recording the screen. These video streams can be spliced together so that you can see both at once when coding. If you know, for instance, that the pronoun is spoken exactly 14,345 milliseconds after the picture appears on the screen, you just count forward 14,345 ms from the frame on which the picture appeared. This, however, requires additional equipment (the second camera, not to mention the apparatus that combines the two video signals).
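The arithmetic behind that trick is simple enough to sketch (Python; the specific frame numbers are invented for illustration):

    # Hypothetical sketch of the alignment arithmetic: if the video shows
    # which frame the picture appeared on, and the sound file tells you the
    # pronoun starts 14,345 ms later, the pronoun-onset frame follows.
    FPS = 20

    def pronoun_onset_frame(picture_onset_frame, pronoun_offset_ms):
        """Frame on which the pronoun begins, given the frame where the
        picture appeared and the pronoun's offset into the audio (ms)."""
        return picture_onset_frame + round(pronoun_offset_ms * FPS / 1000)

    print(pronoun_onset_frame(picture_onset_frame=12, pronoun_offset_ms=14345))
    # -> 299 (14,345 ms is about 287 frames at 20 fps)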
Another trick people use is to put a mirror behind the participant. You can then see what the participant is looking at reflected in the mirror. Our current plan is to adopt this method, except that since the whole thing needs to be portable, we're using a very small compact makeup mirror mounted on a 9-inch makeshift tripod. This can be placed in front of the laptop, slightly off to the side so it doesn't block the participant's view.
With any luck, this will work. We're currently running a few pilot participants.
Head-Mounted Eyetrackers
I should point out that there are also head-mounted eyetrackers, which the participant actually has to wear. These give you a video of the scene in front of the participant, with a little cross-hair over the exact spot the participant is fixating. They are the most flexible option, since participants can turn their heads and walk around (not possible in any of the setups above). But they still require frame-by-frame coding (the eyetracker can't recognize objects; it doesn't know whether the participant was looking at a chair or a cat -- you have to code that yourself), and they aren't great for working with kids, since they are usually too heavy for young children to wear.