Field of Science

Adoption and Language Development

I just heard that the Boston Globe recently ran an article on my advisor's work. If you haven't already read it, here's my discussion of the same work in Scientific American Mind.

Class Notes: A Developmental Paradox

One thing that keeps coming up in artificial grammar studies (in which people are taught bits of made-up languages) is that adults are much better at learning them than kids. Yet we know that if you wait 20 years, people who started learning a real language as a kid do much better than those who started as adults. This might mean the artificial grammar studies aren't ecologically valid, but I think it's also the case that in real-life second-language acquisition, adults start out ahead of kids. Whatever makes kids better language learners seems to be something that happens after the first few weeks/months of learning.

Web Experiment Tutorial: Chapter 3, Basic Flash Programming

Several years ago, I wrote a tutorial for my previous lab on how to create Web-based experiments in Flash. Over the next few months, I'll be posting that tutorial chapter by chapter.

This and the following two chapters will go over how to produce a basic change-detection experiment. Necessary materials and the finished example are included.

1. How is a Flash program structured?

If you have ever done any programming, you will find the layout of a Flash program rather odd. This is because Flash was designed for animation.

A Flash program is structured around frames. A frame is just what it sounds like: a frame in a movie. A frame may contain images. It also has attached code. That code is linked to that frame and that frame only, although subroutines written in one frame's code can be called from another frame.
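
For example, here is a minimal sketch (mine, not from the original tutorial) of a subroutine defined in frame 1's code and called from a later frame on the same timeline:

// Frame 1's code: define a reusable subroutine on the main timeline
function showGreeting(){
    trace("called from another frame's code");
}

// Frame 5's code: call the subroutine defined back in frame 1
showGreeting();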

Moreover, a frame can contain movie clips. "Movie clip" is a confusing name for these structures. What you need to know about them for the moment is that they can have their own timeline and their own attached code.


2. Exploring Flash

Note: This tutorial was written for Flash MX with Actionscript 2.0. I believe everything will work as it should in newer versions of Flash, but if you have any problems, please leave a comment or email me.

Open a new Flash document. The main window contains two parts. At the top, there is a timeline. This timeline shows all the frames in your program. Currently, the first frame should be highlighted and have a circle in it. The other frames should all be faded in appearance. They are just placeholders and will not appear in a movie. The circle in the first frame indicates that it is a keyframe, which will be discussed below.

At the bottom of the Timeline, you will see the text "12.0 fps". This is the speed of the movie: 12 frames per second, which is standard. You can go faster or slower as you wish. However, this is a ballpark figure – the actual speed depends on many factors. Supposedly there are ways to improve the accuracy of the timing, but I have never tried or tested any of them. There also seems to be a maximum speed: past a certain point, increasing the fps does not increase the actual speed when running. Also, at high speeds, the program will run faster when run locally (i.e., on your desktop) than when run through the Internet.
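
If you want to check the timing yourself, here is a minimal sketch (mine, not part of the original tutorial) using the built-in getTimer() function, which returns the number of milliseconds since the movie started:

// Attach to a frame's code: logs how long each frame actually takes
var lastTime = getTimer();
this.onEnterFrame = function(){
    var now = getTimer();
    trace("frame duration: " + (now - lastTime) + " ms");
    lastTime = now;
};

At a nominal 12 fps, each frame should take about 83 ms; the trace output shows what you actually get.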

The central, larger area is the stage. This shows you what is actually displayed in a particular frame (unless those images are dynamically created, which will be discussed below).

There are several popup windows you will want to have at your disposal. Make sure they are checked off under the “Window” menu item. These are:

Properties. The property inspector allows you to modify anything that is selected on the stage or in the timeline. Click on the stage itself, and you will see several options in the properties window. The most important one is the stage size. Default is usually 550 x 400. In general, it is better to keep your stage small, since you do not know how large the screen of your participants will be.

Select Window->Development Panels->Actions to bring up the Actions window. This window contains all the code that is linked to whatever you currently have selected, be it frame, movie clip, or component. Note that the search function only searches that code. Currently, there is no way to search through all the code in a given program.

Select Window->Library to open the library. The library contains any components used in your program. This will be important later.

3. Displaying stimuli.

First, change the fps to 4. This will make the math simpler: at 4 fps, each frame is displayed for 250 ms.

Now, we will create a background layer.

Choose Insert->Timeline->Layer to add a new layer.

In the Timeline, select Layer 1. Using the drawing tools, draw a rectangle of any size. In the properties window, click the "fill" icon. In the popup window, click on the color wheel. In the new "colors" window, select the "sliders" button. Below that, change "Gray Scale Slider" to "RGB Slider". This allows you to type in exact RGB values. Type in 145 for Red, Green, and Blue. Click "OK".

Now, in the properties window, change the width and height of the rectangle so that it matches the size of the stage, and make sure the X and Y coordinates are 0. You can now change the foreground layer without affecting the background, and vice versa. In this example the background is a neutral gray, but it could be much more complicated and even dynamic.

To lock the background so you don't accidentally edit it, do the following. In the timeline, just after the words "Layer 1", you should see a couple of diamonds: one under the "Eye" column and one under the "Lock" column. Click the diamond under the "Lock" column to lock the layer. (The "Eye" diamond toggles the visibility of the layer.)

Now we want to add some stimuli. Make sure you have Frame 1 of Layer 2 selected. Go to File->Import->Import To Stage and import Stim1.pic. If asked to import via QuickTime, click OK.

Click on the image. Go to the properties inspector and change the X coordinate to 75 and the Y coordinate to 50. This will center the 400 x 300 image on the 550 x 400 stage, since (550 - 400)/2 = 75 and (400 - 300)/2 = 50.

Now, click on the 2nd frame of Layer 2. Go to Insert->Timeline->Blank Keyframe. (Inserting a non-blank Keyframe would make a new keyframe that contains the same images as the current one.) The second frame should now be a keyframe.

Now, click on the 3rd frame of Layer 2. Go to Insert->Timeline->Frame. Now, there should be a box around frames 2 and 3. You’ve now made frame 3 active, but it is not a keyframe. This means it cannot be modified independently of the keyframe in frame 2. Now click Insert->Timeline->Blank Keyframe. Notice that this converted your frame to a keyframe. It did not actually “insert” anything.

Notice also that our background does not appear when we select frames 2 or 3. This is because Layer 1 only has a single frame. You will need to expand Layer 1: click on frame 3 of Layer 1 and click Insert->Timeline->Frame. Now the background should appear on all 3 frames.

Now, import Stim2 to the 3rd frame of Layer 2. Center it as before.

Now, test your “experiment” by clicking Control->Test Movie. The movie, you will notice, loops. As soon as it gets to the end, it goes back to the beginning. This won’t happen when it is run from a website, but it is still annoying. Let’s add some code to prevent this.

Close the test window and click on the 3rd frame of Layer 2. Now go to the “Actions” window and type:

stop();

This ActionScript pauses the timeline in the current frame. However, code that appears after the “stop” command will still be executed.
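
To see this for yourself, try the following trivial sketch (not part of the experiment) in a frame's Actions window:

stop();
trace("this line still runs"); // the playhead halts, but the rest of the frame's script executes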

Now test the experiment. You probably will have trouble seeing the first stimulus. This is because Flash starts playing even before the program is fully loaded. In the future, we will add a “loader” to fix this problem. For now, however, just add several blank frames to the beginning of the experiment.

Test the experiment. It is now a basic change-detection experiment. It has several problems, however, the first of which is that there is no way for the participant to respond.

4. Getting responses.

There are two ways to input responses: with the mouse and with the keyboard. Because the experiment is on the Web, the more natural method seems to be using the mouse. However, to get accurate reaction times, keyboard entry is probably preferred. That will be discussed below, under “Getting Reaction Time”.

Select the final frame of Layer 2. In the same floating window as the one that shows the library, expand the "components" panel. Drag the "button" component onto the final frame of Layer 2, placing it somewhere it will not interfere with the image itself. In the property inspector, change the label to "Same". The "button" component should now be in the program library. Drag a new instance of the button component onto the stage and change its label to "Different."

Select the “same” button. In the Actions window, add the following ActionScript:

on (release){
    _root.correct = 0;
    _root.gotoAndPlay('feedback');
}


This subroutine will execute when the user clicks on the button and then releases the mouse button. It then sets a variable called "correct" to 0, since the stimuli in fact are not the same. This variable is attached to the main program, which is why it is prefixed by "_root.". Similarly, the command "gotoAndPlay('feedback')" is attached to the main program, since the frame "feedback" is part of the main program. "gotoAndPlay" does what it sounds like: it tells Flash to go to the frame called "feedback" and start playing from that point. The "AndPlay" part is important, because currently we've told Flash to pause playing.

To the “different” button, add:

on (release){
    _root.correct = 1;
    _root.gotoAndPlay('feedback');
}

Of course, there isn’t any frame called “feedback.” Add a blank keyframe to the end of the experiment. In the property inspector, name it “feedback”. In the center of that frame, draw a text box. In the property inspector, make sure it is dynamic text, not static text. Under “var”, type “feedback_text”.

In the Actions window for the “feedback” frame, add the following code:

stop();
if (correct==1){
    feedback_text='Correct';
}else{
    feedback_text='Incorrect';
}

Notice that we don’t need to use _root. This code is already attached to the “root”. Also notice the use of the double equals sign. If you accidentally type:

if (correct=1){

The variable "correct" will actually be set to 1, and the test will always succeed (the assignment expression itself evaluates to 1, which counts as true).
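
To keep the two straight, here is a quick illustrative snippet (not part of the experiment):

correct = 0; // single equals: assignment, sets the variable
if (correct == 1){ // double equals: comparison, tests the variable
    trace("never printed, because correct is 0");
}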

Play this experiment. If you respond "Same," you should be told that the response was incorrect. If you respond "Different," you should be told that it was correct. You may want to compare your work to the demo.

Note: Here and elsewhere, if you'd like the original .fla file to look at, please email me at gameswithwords@gmail.com

Our experiment has only one trial. You could add trials manually by adding new frames, but this isn't very efficient, and it doesn't allow for any randomization. In the next part, we'll fix both problems.

5. Creating a dynamic experiment.

To dynamically choose which stimuli appear, we will need to convert them to symbols.

Go to the frame with Stim1. Right-click on the copy on the stage and select “Convert to Symbol”. In the dialogue, name the symbol “Stim1”. Make it a Movie Clip. Under “Linkage”, first select “Export for ActionScript”, then change the Identifier to “Stim1”. This is the name you will use to refer to this Movie Clip in the code.

Now double-click on Stim1. The Timeline will now show Stim1's timeline (remember that movie clips have their own timeline). All our movie clips will consist of a single frame, but remember that you have the option to make a movie clip more dynamic. In the property inspector, change the X coordinate of Stim1 to "-200" and the Y coordinate to "-150". This centers the 400 x 300 image (in the future, you can see what happens if you skip this step). Now go to Edit->Edit Document.

Repeat this process for Stim2. Drag Stim3 onto the stage somewhere and repeat the process. Now, delete the images of Stim1, Stim2 and Stim3 from whatever frames they appear in.

Name the frame where Stim1 was originally presented “stim”. Name the frame where Stim2 was originally presented “probe”.

In the frame “stim”, the ActionScript should read as follows:

stimulus=random(3)+1;
_root.attachMovie("Stim"+stimulus,"Stimulus",1);
setProperty(eval("Stimulus"), _x, 275);
setProperty(eval("Stimulus"), _y, 200);

First, a random number from 0 to 2 is chosen, to which we add 1. _root.attachMovie attaches a movie clip to the frame. The first argument ("Stim"+stimulus) is the linkage identifier of the movie clip to attach. Notice that its possible values are "Stim1", "Stim2" and "Stim3" – our three stimuli! The second argument is a name for this particular instance. The final argument determines which depth level the movie clip is assigned to. If there is already a movie clip on level 1, this will overwrite it. Bugs in Flash very often result from overwriting a movie clip in this fashion, as the sketch below illustrates.
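
To illustrate the role of the depth level, here is a sketch (not part of this experiment; the instance names "LeftStim" and "RightStim" are hypothetical):

_root.attachMovie("Stim1", "LeftStim", 1); // occupies depth 1
_root.attachMovie("Stim2", "RightStim", 2); // depth 2: both clips now coexist
// _root.attachMovie("Stim3", "Replacement", 1); // depth 1 again: would silently replace LeftStim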

Next, we position the movie clip in the center of the screen (275, 200) by setting the X and Y parameters. Note that if the movie clip image itself is not centered (as we did above), the outcome will be different. For instance, if the origin of the movie clip is its upper left corner, then you would need to place this instance at (75, 50) in order to center it on the stage.
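
Incidentally, the same positioning can be written in AS2 dot syntax instead of setProperty/eval; as far as I know, the two forms behave identically here:

Stimulus._x = 275; // equivalent to setProperty(eval("Stimulus"), _x, 275)
Stimulus._y = 200;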

In the next frame, add the following code:

removeMovieClip("Stimulus");

This removes the movie clip we just added. Note that it removes it using the name we assigned it in the previous frame.

In the “Probe” frame, change the code to the following:

stop();
probe = random(3)+1;
if (stimulus==probe){
    match=1;
}else{
    match=0;
}
_root.attachMovie("Stim"+probe,"Probe",1);
setProperty(eval("Probe"), _x, 275);
setProperty(eval("Probe"), _y, 200);

Now, the probe is chosen randomly from the 3 stimuli. When the stimulus and the probe match, this is noted. Notice that the stimulus and probe will match 1/3 of the time and differ 2/3 of the time. That ratio is used here to simplify the programming; any ratio can be implemented, as sketched below.
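
For instance, if you wanted matches on half of the trials, one approach (a sketch, not from the original tutorial) is to decide whether this trial is a match first, and only then choose the probe; the rest of the frame's code stays the same:

match = random(2); // 0 or 1, each with probability 1/2
if (match == 1){
    probe = stimulus; // force a match
}else{
    probe = random(3) + 1; // pick any stimulus...
    while (probe == stimulus){ // ...but re-draw until it differs from the stimulus
        probe = random(3) + 1;
    }
}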

We also have to change the code for our response buttons. It is no longer the case that “different” is always correct and “same” is always wrong. Modify the code for the “Same” button as follows:

on (release){
    if (_root.match==1){
        _root.correct = 1;
    }else{
        _root.correct = 0;
    }
    _root.gotoAndPlay('feedback');
}

Modify the code for the “Different” button similarly.

Finally, add this code to the top of the Feedback frame:

removeMovieClip("Probe");

Test the experiment. There is still only one trial, but the stimuli should now be chosen randomly.

Writing a loop so that there are multiple trials is fairly straightforward. First, add a blank keyframe to the beginning of Layer 2. Add the following code:

total_trials = 5;
current_trial = 1;

This initializes two variables. total_trials sets the total number of trials. current_trial tracks how many trials have been completed.

Now, name the second keyframe of Layer 2 “Next_Trial”.

Remove the stop() command from “feedback” and add 2 frames. Convert the final one to a keyframe (don’t change it to a blank keyframe – we want the feedback_text to continue to display on this final frame). Add this code to that new keyframe:

if (current_trial==total_trials){
    gotoAndPlay('finish');
}else{
    current_trial += 1;
    trace(current_trial);
    gotoAndPlay('Next_Trial');
}

This code does pretty much what it says. Now we need to add a final frame to Layer 2 and call it “finish”. Draw a textbox on the stage and write, “Thank you for participating.”

Your program should now look like this.

We are still not recording any data. That will be discussed in a future chapter.

Web Experiment Tutorial: Chapter 1, Overview

Several years ago, I wrote a tutorial for my previous lab on how to create Web-based experiments in Flash. Over the next few months, I'll be posting that tutorial chapter by chapter.

1. What does this manual cover?

- Methodological considerations
- Creating experiments in Flash
- Implementing those experiments on the Internet
- Collecting and retrieving data
- Recruiting participants online

2. What does this manual not cover?

- Creating experiments in Java or other technologies
- Basic programming skills
  (You should be familiar with variables, arrays, subroutines, functions, etc. Familiarity with object-oriented programming is useful but hopefully not necessary.)

3. Who should use this manual?
- Researchers who want to create and use Web-based experiments.
- Those who are simply interested in the methodology should start with (or confine themselves to) "Methodological Considerations" and "Recruiting participants online".

Web Experiment Tutorial: Chapter 2, Methodological Considerations

Several years ago, I wrote a tutorial for my previous lab on how to create Web-based experiments in Flash. Over the next few months, I'll be posting that tutorial chapter by chapter.

1. Why do an experiment online?

Whenever designing an experiment, it is important to consider your methods. Explicit measures or implicit? Is self-report sufficient? Etc. Ideally, there is a reason for every aspect of your methods.

So why put an experiment on the Web? This chapter discusses the pros and cons of Web-based research, as well as when to use it and when to skip it.

Note: I've added an additional section to the end of this chapter to discuss paid systems like Amazon Mechanical Turk.

2. Cost.

Experiments cost both time and money – yours as well as your participants’. Here, Web-based experiments have an obvious edge. Participants are not paid, and you don’t have to spend time scheduling or testing them (however, recruiting them may be extremely time-consuming, especially in the beginning).

3. Reliability of the subjects

A common complaint about Web-based experiments is: "Who knows what your participants are actually doing?"

It is true that people lie on the Internet, and there is a great deal of unreliable information there. The worry probably stems from this.

However, subjects who come to the lab also lie. Some don’t pay attention. Others don’t follow the directions. Some just generally give bad data. The question becomes whether this phenomenon is more likely on the Web or in the lab.

There are several reasons to favor the Web. Lab-based subjects are generally induced by either cash or course credit, while Web-based participants are typically volunteers (although some labs do use lotteries to attract participants). If a lab-based subject gets bored, s/he is nonetheless stuck with finishing the experiment; subjects have the right to quit, but doing so is quite rare. Web-based subjects can and do quit at any time, thus removing themselves and their boredom-influenced data from the analyses.

In addition, good experimental design contains built-in checks to ensure that the subjects are not just randomly pressing buttons. That’s out of the scope of this manual.

Finally, you might be concerned that the same subject is participating multiple times. If you have large numbers of subjects, this is probably just noise. However, there are several ways to check. You can record IP addresses. This is imperfect: for participants with DHCP (dynamically-assigned IP addresses), the last several digits of the IP address can change every few minutes, and different people may use the same computer. Nonetheless, IP address inspection can give you an idea as to whether repeat participation is a pervasive problem.

You can also require subjects to get usernames and passwords, though this adds a great deal of complexity to your programming and will likely turn away many people. Also, some people (like me) frequently forget their passwords and just create a new account every time they visit a website.

Another option is to collect initials and birthdates; two people are unlikely to share the same initials and birthday. Here, though, there is a particularly high risk that subjects will lie. You can decrease this risk by asking only for the day of the month, for instance.

Another worry is bots: programs that scour the Web, some of which are designed to fill out surveys. If you are using HTML forms to put together a survey, this is a definite risk. You should include some way of authenticating that the participant is in fact a human. The most typical approach is generating an image of letters and numbers that the participant must type back in (a CAPTCHA) – the same test that is always required when signing up for a free email address.

To my knowledge, bots do not interface well with the types of Flash applications described in this tutorial. I have not run across any evidence that bots are doing my experiments. But the day may come, so this is something to consider. The only real worry is that a single bot will give you large amounts of spurious data while masquerading as many different participants. Many of the safeguards described above for uncooperative subjects will also help with this potential problem.

Several studies have actually compared Web-based and lab-based data; Web-based data are typically of equivalent or even better quality.

4. Reliability of the data

Here, lab-based experiments have a clear edge. Online, you cannot control the size of the stimulus display. You cannot control the display timing. You cannot control the distance the subject sits from the screen. You cannot control the ambient sound or lighting. If you need these things to be controlled, you should do the experiment in the lab.

Similarly, reaction times are not likely to be very precise. If your effect is large enough, you can probably get good results. But a 5 millisecond effect may be challenging.

That said, it may be worth trying anyway. Unless you think that participants’ environments or computer displays will systematically bias their data, it’s just additional noise. The question is whether your effect will be completely masked by this noise or not.

The one way in which the Web has an advantage here is that you may be able to avoid fatigue (by making your experiment very short) or order effects. By "order effect," I mean that processing one stimulus may affect the processing of subsequent stimuli. Online, you can give some subjects one stimulus and other subjects the other stimulus, and simply compensate by recruiting larger numbers of subjects. Another example is experiments that require surprise trials (e.g., inattentional blindness studies). You can only surprise the same subject so many times.

5. Numbers of subjects

Here, Web-based experiments are the run-away winner. If you wanted to test 25,000 people in the lab, that would be essentially impossible. Some Web-based labs have successfully tested that number online.

6. Length of the experiment

If you have a long experiment, do it in the lab. The good Samaritans online are simply not going to spend 2 hours on your experiment out of the goodness of their hearts. Unless you make it really interesting to do!

If you can make your Web-experiment less than 2 minutes long, do it. 5 minutes is a good number to shoot for. At 15 minutes, it becomes very difficult (though still possible) to recruit subjects. The longer the experiment, the fewer participants you will get. This does interact with how interesting the experiment is: TestMyBrain.org frequently has experiments that run 15 minutes or more, and they still get many, many participants.

That said, if you have a long experiment, consider why it is so long. Probably, you could shorten it. In the lab, you may be used to having the same subject respond to the same stimulus 50 times. But what about having 50 subjects respond only once?

Suppose you have 200 stimuli that you need rated by subjects. Consider giving each subject only 20 to rate, and get 10x as many subjects.
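
If you take this approach, selecting each subject's subset takes only a few lines of ActionScript. Here is a minimal sketch (mine, not from the original tutorial), assuming stimuli numbered 1 through 200:

// Build the list 1..200, shuffle it crudely, and keep the first 20
var indices = [];
for (var i = 1; i <= 200; i++){
    indices.push(i);
}
indices.sort(function(a, b){ return random(2) ? 1 : -1; }); // random comparator: adequate for this purpose
var myStims = indices.slice(0, 20); // this subject rates only these 20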

The only time you may run into difficulty is if you need to insert a long pause into the middle of the experiment – for instance, if you are doing a long-term memory experiment and need to retest subjects after a one-hour delay. I have run several experiments with short delays (2-10 minutes), filling the delay with some of the Web's most popular videos in the hopes that this will keep the subjects around. Even so, the experiment with the 10-minute delay attracts very few subjects. So this is something to consider.

7. Individual differences

Generally, your pool of subjects online will be far more diverse than those you can bring into the lab. This in and of itself may be a reason for favoring the method, but it also brings certain advantages.

Laughably large numbers of subjects allow you to test exploratory questions. Are you curious how age affects performance on the task? Ask subjects their ages. It adds only a few seconds to the experiment but may give you fascinating data. I added this question to a study of visual short-term memory and was able to generate a striking plot of VSTM capacity against age.

Similarly, Hauser and colleagues compared moral intuitions of people from different countries and religious backgrounds. This may not have been the original purpose of the experiment, but it was by far the most interesting result, and all it required was adding a single question to the experiment.

You could also compare native and non-native English speakers, for instance, just to see if it matters.

Online, you may have an easier time recruiting difficult-to-find subjects. One researcher I know wanted to survey parents of children with a very rare disorder. There simply aren’t many in his community, but he was able to find many via the Internet. Maybe you want to study people with minimal exposure to English. You are unlikely to find them on your own campus, but there are literally millions online.

8. I have an experiment. Which should I pick?

This is of course up to you. Here are the guidelines I use:

Pick lab-based if:
- The experiment must be long
- The experiment requires careful control of the stimuli and the environment

Pick Web-based if:
- The experiment requires very large numbers of subjects
- You don't have many stimuli and don't want to repeat them
- The experiment is very short
- You want to avoid order effects
- You want to look at individual differences
- You want to study a rare population
- You want to save money


Note that for most experiments, you could go either way.

9. Amazon Mechanical Turk

Since I originally wrote this tutorial, a number of people have started using Amazon Mechanical Turk. Turk was designed for all kinds of outsourcing. You have a project? Post it on the Turk site, say how much you are willing to pay, and someone will do it. It didn't take long for researchers to realize this was a good way of getting data (I don't know who thought of this first, but I heard about it from the Gibson lab at MIT, who seem to be the pioneers in Cambridge, at least).

Turk has a few advantages over lab-based experiments: it's a lot cheaper (people typically pay around $2/hour) and it's fast (you can run a whole experiment in an afternoon).

Comparing it with running your own website like I do is tricky. First, since participants are paid, it obviously costs more than running a website. So if you do experiments involving very, very large numbers of participants, it may still be too expensive. On the flip side, recruitment is much easier. I have spent hundreds of hours over the last few years recruiting people to my website.

Second, your subject pool is more restricted, as participants must have an Amazon Payments account. There may be some restrictions on who can get an account, and in any case, people who might otherwise do your experiment may not feel like making an account.

One interesting feature of Turk is that you can refuse to pay people who give you bad data (for instance, get all the catch trials wrong). Participants know this and thus may be motivated to give you better data.

Currently, I am using Turk for norming studies. Norming studies typically require a small, set number of participants, making them ideal for Turk. Also, they are boring, making it harder to recruit volunteers. I do not expect to switch over to using Turk exclusively, as I think the method described in this tutorial still has some advantages.

Amazon has a pretty decent tutorial on their site for how to do surveys, so I won't cover that here. More complex experiments involving animations, contingent responses, etc., should in theory be possible, but I don't know anybody doing such work at this time.

CHARGE

I recently received an email from the CHARGE Syndrome Foundation, which is trying to provide information to people who might need it. Based on demographics, there should be at least a few readers of this blog who know somebody with CHARGE syndrome, so as a public service, I'm linking to the website and including some additional information below.

CHARGE syndrome is a relatively rare (1 per 9-10,000 births, according to the Foundation website) pattern of congenital birth defects. It usually appears in families without any history of the syndrome or similar syndromes. There are a number of physical problems (often heart defects, breathing problems, and swallowing problems) as well as nervous system problems such as malfunction of cranial nerves, blindness and deafness (the exact constellation of impairments differs from person to person).

Given the blindness and deafness, along with Autistic-like behaviors, it should not be surprising that there are consequences for language and communication. The forthcoming CHARGE Syndrome book (full disclosure: I am a co-author on one of the chapters in said book, and my father is the lead editor of the book) has a chapter (not by me) reviewing some recent work on communicative abilities in people with CHARGE. For very good reason, that work is focused on communication, rather than structural properties of language. Of course, I am interested in how or whether particular components of language are impacted by the syndrome (similar to how the linguistic consequences of Autism have been studied in some depth, telling us both more about Autism and about language), but I don't know of any relevant work having been done.

For those who want to know more about CHARGE, I suggest the CHARGE Foundation website. One place to find some of the recent research on CHARGE is the publications page of the CHARGE Lab at Central Michigan University.

New features

Careful observers may have noticed some strange tags showing up in the titles of posts, such as lab notebook and findings. The idea -- borrowed from the excellent but now-defunct Cognitive Daily and their "casual Fridays" -- is to have some regular features. Here's an introduction to a few of them:

Class Notes
My advisor is finally teaching her much-awaited graduate seminar on language acquisition (it has been postponed several times for a variety of reasons). The class involves about 2 full days of reading each week, followed by a three-hour discussion of the material, and covers many of the core debates over how children learn language. I'll be recording what I learn under the heading 'class notes'.

Lab Notebook
In this feature, I describe issues that have arisen in the course of day-to-day lab life. Hopefully, these will be interesting in their own right. These posts also contribute to my goal of making science more transparent. I doubt many people know what goes on in a lab (in my case, it seems to consist mostly of answering email). That's problematic: it's hard for people to evaluate the administration's science policy without knowing what exactly it is scientists do.

Findings
This feature has been around for a while: posting results from experiments, particularly the experiments that are run on the website.

Web Experiment Tutorial
Several years ago I put together a tutorial for my previous lab on how to program experiments in Flash and post them on the Web. I keep meaning to post the whole tutorial on the website, but it requires some updating and reformatting, and there's never enough time to do it all at once. So instead, I'll be posting it chapter by chapter. This may not be of a great deal of general interest, but I do keep getting requests for this tutorial.


Others may appear (I did have one post titled "Briefings," but I'm not sure that will be continued).

Lab Notebook: Building a Better Eyetracker

Many of my experiments in the lab use eye-tracking. Eye-tracking is a fantastic method for studying language comprehension, developed mainly by Michael Tanenhaus and his colleagues. People tend to look at whatever is being talked about, and we can use that fact to measure what people think is being talked about.

For instance, in the eyetracking experiments related to the Pronoun Sleuth experiment I have online, I have people look at a picture with a couple of characters on it while they listen to a story about the characters. My interest is in what happens when they hear a pronoun: who do they look at?

This has two advantages over just asking (which is what I do in Pronoun Sleuth). First, it's an implicit measure, so I don't have to worry about people just telling me what they think I want to hear. More importantly, the measure is sensitive to time: people look at whoever they think is being talked about only once they have decided who that is. So I can get a measure of how long it takes to understand a pronoun in different conditions.

A Better Eyetracker

In the lab, we use an automated eyetracker (the Tobii T60), which can record in real time what people are looking at. This is a fantastic time-saver. Unfortunately, it's also really expensive and really heavy, so it's mostly good for use in the lab. I'll be going to Taiwan to run some experiments in March, and I won't be taking the Tobii with me.

A cheap eyetracker can be built by other means, though. Our lab traditionally used what we affectionately call the "poor man's eyetracker," which is just a video camera. In a typical experiment, participants see four physical objects and hear information about the objects. The four objects are arranged in a square, and right in the center of the square is the video camera, pointed back at the participant. From the recording, then, we can work out whether the participant is looking at the object in the top left, bottom left, top right or bottom right.

This is a slower method than using the Tobii, because you have to code where the participant is looking frame-by-frame in each video (the Tobii does all that automatically). And it has much less resolution than the Tobii, which can record where a participant is looking down to an inch or so, whereas with the Poor Man's setup we can only get quadrants of a screen. But it's a lot cheaper.

Although members of the lab have traveled with such a setup before, it's less portable than it could be. These experiments involve many objects, so you end up having a huge box of supplies to cart around.

Enter Laptop

Many laptops -- and all Mac laptops -- come with cameras built into their screens. Together with one of my research assistants and a grad student at MIT, I have been trying to turn our MacBooks into portable eyetrackers. My experiments usually involve stories about just two characters, one on the left-hand side of the screen and one on the right-hand side.

We show the pictures on the screen and use the built-in camera plus iMovie to record the participant. Based on testing, the camera records 20 frames per second, which is slower than the Tobii T60 (60 frames per second) but is enough for the relatively slow effects I study. This setup is incredibly portable, since all that is required is the laptop.

Except

There has only been one problem so far. As I said, I'm interested in where people look when they hear a pronoun. So I need to know exactly when they heard the pronoun, down to about 1/20 or at the worst 1/10 of a second. The software that I use to code eye-movements doesn't play the sound while you're coding, so there's no way of using it to determine the beginning of the pronoun.

What researchers have often done is have two cameras working simultaneously: one recording the participant and one recording the screen. These video streams can be spliced together so that you can see both simultaneously when coding. If you know, for instance, that the pronoun is spoken exactly 14,345 milliseconds after the picture appears on the screen, you just count forward 14,345 ms in the video from the moment the picture appeared. This, however, requires additional equipment (the second camera, not to mention the apparatus that combines the video signals).

Another trick people use is to put a mirror behind the participant; you can then see what the participant is looking at in the mirror. Our current plan is to adopt this method, except that since the whole thing needs to be portable, we're using a very small (compact makeup) mirror mounted on a 9-inch makeshift tripod. This can be placed in front of the laptop, but slightly off to the side so it doesn't block the participant's view.

With any luck, this will work. We're currently running a few pilot participants.

Head-Mounted Eyetrackers

I should point out that there are also head-mounted eyetrackers, which the participant actually has to wear. These give you a video of the scene in front of the participant, with a little cross-hair over the exact spot the participant is focusing on. They are the most flexible option, since participants can turn their heads and walk around (not possible in any of the set-ups above). But they still require frame-by-frame coding (the eyetracker can't recognize objects; it doesn't know whether the participant was looking at a chair or a cat -- you have to code this yourself), and they aren't great for working with kids, since they are usually too heavy for young children to wear.

Recent Findings Don't Prove there's a Ghost in the Machine (Sorry Saletan)

When I took intro to psychology (way too long ago), the graduate instructor posed the following question to the class: Does your brain control your mind, or does your mind control your brain? At first I thought this was a trick question -- coming from most neuroscientists or cognitive scientists it would be -- but she meant it seriously.

On Tuesday, William Saletan at Slate posed the same question. Bouncing off recent evidence that some supposedly vegetative patients are in fact still able to think, Saletan writes, "Human minds stripped of every other power can still control one last organ--the brain."

Huh?

Every neuroscientist I've talked to would read this as a tautology: "the brain controls the brain." With the gazillions of feedback circuits in the brain, that's a given. Reading further, though, Saletan clearly has something else in mind:

"We think of the brain as its own master, controlling or fabricating the mind ... If the brain controls the mind this way, then brain scanning seems like mind reading ... It's fun to spin out these neuro-determinist theories and mind-reading fantasies. But the reality of the European scans is much more interesting. They don't show the brain controlling the mind ... The scans show the opposite: the mind operating the brain."

Evidence Mind is Master

As I've already mentioned above, the paragraph quoted above is nonsensical in modern scientific theory, and I'll get back to why. But before that, what evidence is Saletan looking at?

In the study he's talking about, neuroscientists examined 54 patients who show limited or no awareness and no ability to communicate. The patients' brains were scanned while they were asked to think about motor activities (swinging a tennis racket) or navigation activities (moving around one's home town). 5 of the 54 were able to do this. The researchers also tried asking the patients yes-no questions: if the answer was 'yes', the patient was to think about swinging a tennis racket; if 'no', about moving around one's home town. One patient was able to do this successfully.

Note that the brain scans couldn't see the patient deciding 'yes' or 'no' -- actually, they couldn't see the patient deciding at all. This seems to be why Saletan thinks this is evidence of an invisible will controlling the physical brain: "On the tablet of your brain, you can write whatever answer you want."

The Mistake

The biggest problem with this reasoning is a misunderstanding of the method the scientists used. fMRI detects very, very small signals in the brain. The technology tracks changes in blood oxygenation levels, which correlate with local brain activity (though not perfectly). A very large change is on the order of 1%; for more complicated thoughts, effect sizes of 0.5% or even 0.1% are typical. Meanwhile, blood oxygen levels fluctuate a good deal for reasons of their own. This low signal-to-noise ratio means that you usually need dozens of trials: have the person think the same thoughts over and over again and average across all the trials. In the fMRI lab I worked in previously, the typical experiment took 2 hours. Some labs take even longer.

To use fMRI for meaningful communication between a paralyzed person and their doctors, you need to be able to detect the response to an individual question. Even if we knew where to look in the brain for 'yes' or 'no' answers -- and last I heard we didn't, but things change quickly -- it's unlikely we could hear this whispering over the general tumult in the brain. The patients needed to shout at the top of their lungs. It happens that motor imagery produces very nice signals (I know less about navigation imagery, but presumably it does, too, or the researchers wouldn't have used it).

Thus, the focus on motor and navigation imagery, rather than more direct "mind-reading," was simply an issue of technology.

Dualism

The more subtle issue is that Saletan takes dualism as a starting point: the mind and brain are separate entities. Thus, it makes sense to ask which controls the other. He seems to understand modern science as saying the brain controls the mind.

This is not the way scientists currently approach the problem -- or, at least, not any I know. The starting assumption is that the mind and brain are two ways of describing the same thing. Asking whether the mind can control the brain makes as much sense as asking whether the Senate controls the senators or senators control the Senate. Talking about the Senate doing something is just another way of talking about some set of senators doing something.

Of course, modern science could be wrong about the mind. Maybe there is a non-material mind separate from the brain. However, the theory that the mind is the brain has been enormously productive. Without it, it is extremely difficult to explain just about anything in neuroscience. Why does brain trauma lead to amnesia, if memories aren't part of the brain? Why can strokes leave people able to see but unaware that they can see?

Descartes' Error

A major problem with talking about the mind and brain is that we clearly conceptualize them differently. One of the most exciting areas of cognitive science in the last couple of decades has looked at mind perception. It appears humans are so constructed that we are good at detecting minds. We actually over-detect minds; otherwise puppet shows wouldn't work (we at least half believe the puppets are actually thinking and acting). Part of our concept of mind is that it is non-physical but controls physical bodies. While our concept of mind appears to develop during early childhood, the fact that almost all humans end up with a similar concept suggests that either the concept or the propensity to develop it is innate.

Descartes, who produced probably the most famous defense of dualism, thought the fact that he had the concept of God proved that God exists (his reasoning: how can an imperfect being have the thought of a perfect being, unless the perfect being put that thought there?). Most people would agree, however, that just because you have a concept doesn't mean the thing the concept refers to exists. I, for instance, have the concept of Cylons, but I don't expect to run into any.

Thus, even as science becomes better and better at explaining how a physical entity like the brain gives rise to our perceptions and our experience of existing and thinking, the unity of mind and brain won't necessarily make any more intuitive sense. This is similar to the problem with quantum physics: we have plenty of scientific evidence that something can be both a wave and a particle simultaneously, and many scientists work with these theories with great dexterity. But I doubt anyone really has a clear conception of a wave/particle. I certainly don't, despite a semester of quantum mechanics in college. We just weren't set up to think that way.

For this reason, I expect we'll continue to read articles like Saletan's long into the future. This is unfortunate, as neuroscience is becoming an increasingly important part of our lives and society, in a way quantum physics has yet to do. Consider, for instance, insanity pleas in the criminal justice system, lie detectors, and so on.

Briefings: New Science Budget

Details on Obama's 2011 science budget are now available; the latest issue of Nature has a run-down. The news is better than it could have been -- and certainly better than during the disastrous Bush years.

Cancellation of the Constellation program (the replacement for the Shuttle) and of the moon mission made the headlines, but despite that, NASA's budget will increase slightly. Ending Constellation could have seriously increased the amount available for science, but in fact much of the money budgeted for it will be spent stimulating the development of commercial rockets.

NIH is getting a $1 billion increase -- which amounts to only 3.2%, because NIH is the biggest of the US science programs. And because the NIH's $10.4 billion in stimulus funds is running out, the number of grants it will be able to give out in 2011 will fall considerably. One nice piece of news is that stipends for many NIH-supported doctoral students and post-doctoral fellows will rise, showing the administration's continued focus on supporting young scientists.

The DoE's budget is getting a significant boost of 7%, to $28.4 billion, with money going to energy research and development, nuclear weapons, and the physical sciences.

The NSF -- the smallest of the non-defense research programs but the one that funds me and most psycholinguists -- is getting a small hike, from $6.9 billion to $7.4 billion.

Most of what I've seen in the science press has been relative contentment with the budget, given that many other programs are being cut. That said, it's worth keeping in mind that over the last decade the US steadily lost ground to the rest of the world in science and technology; whether small increases will help remains to be seen.