Music and language depend on some of the same neural substrates, according to researchers at Georgetown University.
The quick summary is that the authors, Robbin Miranda and Michael Ullman of Georgetown University, found that memory for musical melodies uses the same part of the brain as memory for words, and that "rules" for music use the same part of the brain as rules (grammar) for language. In the case of this particular experiment, musical "rules" means following the key structure of a song.
Why is this interesting? To the extent that anything about the mind and brain is well understood, music is not. I suspect that's partly because it took a long time for anybody to figure out how to study it empirically. Language, on the other hand, is fairly well understood. That is, it's maybe where physics was in the 1600s, whereas the study of music isn't even that far along.
If researchers are able to tie aspects of music processing to parts of the brain that we already know something about, suddenly we know a whole lot more about music. That's one exciting outcome.
The other exciting outcome is that, as I said, language has been studied scientifically for some time. This means that psychologists and neuroscientists have a whole battery of empirical methods for probing different aspects of language. To the extent that music and language overlap, that same arsenal can be set loose on music.
This shouldn't be taken as implying that nobody else has ever studied the connection between language and music before. That's been going on for a long time. What's important here is that these aspects of music were tied to one of the most complete and best-specified models of how the brain understands and produces language -- the Declarative/Procedural model.
Unfortunately, the paper isn't yet available on the Ullman website, but you can read a press release here.
Full disclosure: I was working in the Ullman lab when Robbin joined as a graduate student. You can read about some of my research with Ullman here.
Your brain knows when you should be afraid, even if you don't
I just got back to my desk after an excellent talk by Paul Whalen of Dartmouth College. Whalen studies the amygdala, an almond-shaped region buried deep in the brain. Scientists have long known that the amygdala is involved in emotional processing. For instance, when you look at a person whose facial expression is fearful, your amygdala gets activated. People with damage to their amygdalas have difficulty telling if a given facial expression is "fear" as opposed to just "neutral."
It was an action-packed talk, and I recommend that anybody interested in the topic visit his website and read his latest work. What I'm going to write about here are some of his recent results -- some of which I don't think have been published yet -- investigating whether you have to be consciously aware of seeing a fearful face in order for your amygdala to become activated.
The short answer is "no." What Whalen and his colleagues did was use an old trick called "masking." If you present one stimulus (say, a fearful face) very quickly (say, 1/20 of a second) and then immediately present another stimulus (say, a neutral face), the viewer typically reports having seen only the second stimulus. Whalen used fMRI to scan the brains of people while they viewed emotional faces (fearful or happy) that were masked by neutral faces. The participants said they saw only neutral faces, but the brain scans showed that their amygdalas knew otherwise.
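(For readers who like to see things laid out concretely, here is a minimal sketch of what a single masked trial looks like. The 50 ms target duration comes from the 1/20-of-a-second figure above; the mask duration, the file names and the show() helper are placeholders of my own, not details from Whalen's study.)

```python
import time

def show(stimulus, duration_s):
    """Toy stand-in for real stimulus-presentation software:
    just name the stimulus and wait out its duration."""
    print(f"showing {stimulus} for {duration_s * 1000:.0f} ms")
    time.sleep(duration_s)

def masked_trial(emotional_face, neutral_mask):
    # The emotional face flashes for about 1/20 of a second...
    show(emotional_face, 0.050)
    # ...and is immediately replaced by a neutral face (the mask),
    # shown long enough that it is all the viewer reports seeing.
    # (The mask duration here is an arbitrary placeholder.)
    show(neutral_mask, 0.450)

masked_trial("fearful_face.png", "neutral_face.png")
```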
One question that has been on researchers' minds for a while is what information the amygdala actually cares about. Is it the whole face? The color of the face? The eyes? Whalen ran a second experiment that was almost exactly the same, but he erased everything from the emotional faces except the eyes. The amygdala could still tell the fearful faces from the happy faces.
You could be wondering, "Does it even matter if the amygdala can recognize happy and fearful eyes or faces that the person doesn't remember seeing? If the person didn't see the face, what effect can it have?"
Quite possibly plenty. In one experiment, the participants were told about the masking and asked to guess whether they were seeing fearful or happy eyes. Note that the participants still claimed to be unable to see the emotional eyes. Still, they were able to guess correctly -- not often, but more often than if they were guessing randomly. So the information must be available on some level.
There are several ways this might be possible. In ongoing research in Whalen's lab, he has found that people who view fearful faces are more alert and better able to remember what they see than people who view happy faces. Experiments in animals show that when you stimulate the amygdala, various things happen to the body, such as the pupils dilating. Whalen interprets this in the following way: when you see somebody looking fearful, it's probably a clue that there is something dangerous in the area, so you had better pay attention and look around. It's possible that subjects who guessed correctly [this is my hypothesis, not his] were tapping into these physiological changes in their own bodies in order to make their guesses. "I feel a little fearful. Maybe I just saw a fearful face."
For previous posts about the dissociation between what you are consciously aware of from what your brain is aware of, click here, here and here.
Monkeys know their plurals
Anybody who reads this blog knows that I am deeply skeptical of claims about animal language. Some of the best work on animal language has come from Marc Hauser's lab at Harvard. Recently they reported that rhesus monkeys have the cognitive machinery to understand the singular/plural distinction.
First, a little background. Many if not most scientists who study language are essentially reverse-engineers. They/we are in the business of figuring out what all the parts are and how they work. This turns out to be difficult, because there are many parts and we don't really have the option of taking apart the brains of random people since they usually object. So the task is something like reverse-engineering a Boeing 747 while it's in flight.
There are many different ways you could approach the task. Hauser tries to get at language by looking at evolution. Obviously, rhesus monkeys can't speak English. Just as obviously, they can do some of the tasks that are necessary to speak English (like recognizing objects -- you have to recognize something before you can learn its name). Any necessary components of language that non-human animals can successfully perform must not be abilities that evolved for the purpose of language. If you can figure out what they did evolve for, you can better understand their structure and function. So the next step is perhaps to figure out why those particular abilities evolved and what non-human animals use them for. This ultimately leads to a better understanding of these components of language.
That is one reason to study language evolution in this manner, but there are many others (including the fact that it's just damn cool). If you are interested, I suggest you read this manifesto on the subject.
Back to the result. Nouns in many languages, such as English, can be either singular or plural. You couldn't learn to use "apple" and "apples" correctly if you couldn't distinguish between "one apple" and "more than one apple". This may seem trivial to you, but no non-human animal can distinguish between 7 apples and 8 apples -- seriously, they can't. In fact, some human groups seemingly cannot distinguish between 7 apples and 8 apples, either (more on that in a future post).
So can rhesus monkeys? Hauser and his colleagues tested wild rhesus monkeys on the beautiful monkey haven of Cayo Santiago in Puerto Rico. The monkeys were shown two boxes. The experimenters then put some number of apples into each box. The monkeys were then allowed to approach one box to eat the contents. Rhesus monkeys like apples, so presumably they would go to the box that they think has more apples.
If one box had 1 apple and the other had 2 apples, the monkeys went with the two apples. If one box had 1 apple and the other had 5, the monkeys picked the 5 apple box. But they chose at random between 2 and 4 apples or 2 and 5 apples. (For those who are familiar with this type of literature, there are some nuances. The 2, 4 or 5 apples had to be presented to the monkeys in a way that encouraged the monkeys to view them as a set of 2, 4 or 5 apples. Presenting them in a way that encourages the monkeys to think of each apple as an individual leads to different results.)
This suggests that when the monkeys saw one box with "apple" and one with "apples," they knew which box to choose. But when both boxes had "apples," they were at a loss. Unlike humans, they couldn't count the apples and use that as a basis to make their decision.
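(If it helps, the pattern of choices boils down to a toy decision rule: singular versus plural is detectable, plural versus plural is not. The code below is just my paraphrase of the reported result, not the authors' model.)

```python
import random

def monkey_choice(apples_in_a, apples_in_b):
    """Toy decision rule summarizing the reported pattern:
    'apple' vs. 'apples' is distinguished; 'apples' vs. 'apples' is not."""
    a_plural = apples_in_a > 1
    b_plural = apples_in_b > 1
    if a_plural and not b_plural:
        return "A"
    if b_plural and not a_plural:
        return "B"
    return random.choice(["A", "B"])  # 2 vs. 4, 2 vs. 5, etc.: chance

print(monkey_choice(1, 5))  # reliably picks box B
print(monkey_choice(2, 4))  # a coin flip
```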
Full disclosure: I considered applying to his lab as a graduate student. I am currently a student in a different lab at the same school.
Caveat: These results have not been formally published. The paper I link to above is a theory paper that mentions these results, saying that the paper is under review.
How do universities choose professors? The survey is in.
Several studies have looked at university hiring practices. Search committees (committees in charge of filling vacant positions) around the country were surveyed, and the results are in.
The studies, published in Teaching of Psychology, looked specifically at psychology departments, so the results may not generalize well to other fields. But since I'm a PhD student in psychology, it's the department I care most about.
The older of the two, written in 1998 by Eugene Sheehan, Teresa McDevitt & Heather Ross, all then at the University of Northern Colorado, had several interesting results. One was that teaching was valued more highly than research, which surprised me. I would like to know how this broke down by type of institution. Some schools are said to highly value teaching (e.g., Oberlin, Swarthmore), while others are said to highly value research (e.g., Harvard, Georgetown). Since they only got back 90 complete surveys, they probably couldn't do an analysis breaking the results down by "teaching schools" and "research schools."
Luckily, R. Eric Landrum & Michael A. Clump of Boise State University wondered the same thing and published an answer in 2004. They compared public and private universities, as well as undergraduate-only departments against departments that also have graduate programs. Private schools were significantly more likely to care about teaching experience, whereas public institutions were significantly more likely to care about research-related issues and the ability to get grants. Similarly, undergraduate-only departments were much more concerned with teaching-related issues, whereas programs with graduate students cared more about research- and grant-related issues.
Another interesting result was that the Sheehan study found that the job interview was the most important factor in deciding between interviewed candidates. This is not surprising in the sense that we all know that the interview is very important. It is surprising because it's well-known that job interviews are very poor indicators of future performance, and you would think a university psychology department would know that. The later study did not consider interviews vs. CVs vs. letters of recommendation.
Now for the data.
Sheehan et al. 1998:
The factors that the search committees considered when deciding who to interview are listed below, in order of most important to least:
Letters of recommendation
Fit between applicant's research interest and department needs
Experience teaching courses related to the position description
General teaching experience
Quality of course evaluations
Quality of journals in which the applicant has published
Number of publications
Potential for future research
Quality of applicant's doctoral granting institution
Awards for teaching
The factors considered when deciding among interviewed candidates were, in order of most important to least:
Performance at interview with search committee
Performance during colloquium (i.e., the "job talk")
Fit between applicant's research interests and department needs
Experience teaching courses related to the position description
Performance during undergraduate lecture
Candidate's ability to get along with other faculty
General teaching experience
Letters of recommendation
Candidate's personality
Performance at interview with chair
Landrum & Clump, 2004:
Factors more important to private schools vs. public:
It is important that applicant publications be from APA journals only.
Teaching experience at the undergraduate level is important for applicants.
Research experience utilizing undergraduates is important for our applicants.
Experience in academic advising is important for the successful job applicant.
It hurts an applicant if he or she does not address specific courses listed in the job advertisement.
In our department, teaching is more important than research.
Teaching experience.
Previous work with undergraduates.
Factors more important to public schools vs. private:
Our department has an expectation of grant productivity.
Faculty in our department need to receive grants in order to be successful.
In our department, research is more important than teaching.
Quality of publications.
Potential for successful grant activity.
The comparison between undergraduate-only and undergrad/grad programs revealed many more significant differences, so I refer you to the original paper.
Another non-human first
First ever Economist obituary for a non-human:
http://www.economist.com/obituary/displaystory.cfm?story_id=9828615
Not your granddaddy's subconscious mind
To the average person, the paired associate for "psychology," for better or worse, is "Sigmund Freud." Freud is probably best known for his study of the "unconscious" or "subconscious". Although Freudian defense mechanisms have long since retired to the history books and Hollywood movies, along with the ego, superego and id, Freud was largely right in his claim that much of human behavior has its roots outside of conscious thought and perception. Scientists are continually discovering new roles for nonconscious activities. In this post, I'll try to go through a few major aspects of the nonconscious mind.
A lab recently reported that they were able to alter people's opinions through a cup of coffee. This was not an effect of caffeine, since the cup of coffee was not actually drunk. Instead, study participants were asked to hold a cup of coffee momentarily. The cup was either hot or cold. Those who held the hot cup judged other people to be warmer and more sociable than those who held the cold cup.
This is one in a series of similar experiments. People are more competitive if a briefcase (a symbol of success) is in sight. They do better in a trivia contest immediately after thinking about their mothers (someone who wants you to succeed). These are all examples of what is called "social priming" -- where a socially-relevant cue affects your behavior.
Social priming is an example of a broader phenomenon (priming) that is a classic example of nonconscious processing. One simple experiment is to have somebody read a list of words presented one at a time on a computer. The participant is faster if the words are all related (dog, cat, bear, mouse) than if they are relatively unrelated (dog, table, mountain, car). The idea is that thinking about dogs also makes other concepts related to dogs (e.g., other animals) more accessible to your conscious thought. In fact, if you briefly present the word "dog" on the screen so fast that the participant isn't even aware of having seen it, they will still be faster at reading "cat" immediately afterwards than if "mountain" had flashed on the screen.
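(If you're curious how crude such an experiment can be and still make the point, here is a bare-bones word-reading timer: it shows words one at a time and records how long you take to press Enter after reading each one. The word lists and the self-timed method are rough stand-ins of my own for a real lab setup, which would use carefully controlled words and millisecond-accurate displays.)

```python
import time

related = ["dog", "cat", "bear", "mouse"]
unrelated = ["dog", "table", "mountain", "car"]

def read_list(words):
    """Show each word and time how long the reader takes
    to press Enter after reading it. Returns the mean time."""
    times = []
    for word in words:
        start = time.perf_counter()
        input(f"{word}  (press Enter once you've read it) ")
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# The prediction: the related list is read faster on average.
print("related:  ", read_list(related))
print("unrelated:", read_list(unrelated))
```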
Mahzarin Banaji has made a career around the Implicit Association Test. In this test, you press a key (say "g") when you see a white face or a positive word (like "good" or "special" or "happy") and a different key (say "b") when you see a black face or a negative word (like "bad" or "dangerous"). You do this as fast as you can. Then the groupings switch -- good words with black faces and bad words with white faces. The latter condition is typically harder for white Americans, even those who self-report being free of racial prejudice. Similar versions of the test have been used in other cultures (e.g., Japan) and have generally found that people are better able to associate good words with their own in-group than with a non-favored out-group. I didn't describe the methodology in detail here, but trust me when I say it is rock-solid. The interpretation that this is a measure of implicit, nonconscious prejudice is up for debate. For the purposes of this post, though, what matters is that the effect is clearly nonconscious. (Try it for yourself here.)
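(For the curious, the two key mappings described above reduce to a couple of lookup tables. This only shows the block structure; a real IAT counterbalances block order, runs many trials and scores the response-time difference in a specific way. The stimulus labels here are placeholders of mine.)

```python
# Block 1: the pairing most white American participants find easier.
block_1 = {
    "g": {"white face", "good", "special", "happy"},
    "b": {"black face", "bad", "dangerous"},
}

# Block 2: the groupings switch.
block_2 = {
    "g": {"black face", "good", "special", "happy"},
    "b": {"white face", "bad", "dangerous"},
}

def correct_key(stimulus, block):
    """Which key should be pressed for this stimulus in this block?"""
    for key, category in block.items():
        if stimulus in category:
            return key
    raise ValueError(f"unknown stimulus: {stimulus}")

# The measure is how much slower (on average) responses are in
# whichever block pairs the in-group with the negative words.
print(correct_key("white face", block_1))  # g
print(correct_key("white face", block_2))  # b
```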
Vision turns out to be divided into conscious vision and nonconscious vision. Yes, you read that correctly: nonconscious vision. The easiest way to see this for yourself is to cover one eye. You probably know that you need two eyes for depth perception, but with one eye covered, the world doesn't suddenly look flat. (At least, it doesn't for me.) You may notice some small differences, but to get a real sense of what you have lost, try playing tennis. The ball becomes nearly impossible to find. This is because the part of your vision that you use to orient in space is largely inaccessible to your conscious mind.
An even more interesting case study of this -- though not one you can try at home -- is blindsight. People with blindsight report being blind. As far as they can tell, they can't see a thing. However, if you show them a picture and ask them to guess what the picture is of, they can "guess" correctly. They can also reach out and take the picture. They are unaware of being able to see, but clearly on some level they are able to do so.
It is also possible to learn something without being aware of learning it. My old mentor studies contextual cueing. The experiment works like this: You see a bunch of letters on the screen. You are looking for the letter T. Once you find it, you press an arrow key to report which direction the "T" faces. This repeats many hundreds of times. Some of the displays repeat over and over (the letters are all in the same places). Although you aren't aware of the repetition -- if asked, you would be unable to tell a repeated display from a new display -- you are faster at finding the T on repeated displays than new displays.
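(Here's a rough sketch of how such displays might be generated: a handful of layouts that repeat across the session, interleaved with brand-new random ones. The grid size and number of letters are arbitrary choices of mine, not the actual design.)

```python
import random

def random_display(n_letters=12, grid=8):
    """Random positions for one target T among distractor letters."""
    cells = random.sample(
        [(x, y) for x in range(grid) for y in range(grid)], n_letters)
    return {"target": cells[0], "distractors": cells[1:]}

# A few displays that will repeat, unannounced, throughout the session.
repeated = [random_display() for _ in range(6)]

def make_block():
    """Half repeated displays, half new ones, shuffled together.
    The finding: search is faster on the repeated displays, even
    though observers can't tell them apart from new ones."""
    trials = [("repeated", d) for d in repeated]
    trials += [("new", random_display()) for _ in range(len(repeated))]
    random.shuffle(trials)
    return trials

for label, display in make_block()[:3]:
    print(label, "target at", display["target"])
```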
In similar experiments on language learning, you listen to nonsense sentences made of nonsense words. Unknown to you, the sentences all conform to a grammar. If asked to explain the grammar, you would probably just say "huh?" But if asked to pick between two sentences, one of which is grammatical and one of which is not, you can do so successfully.
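(To make that concrete, here is a toy artificial grammar of my own invention, not the one from any particular study: three nonsense nouns, two nonsense verbs, and a single NOUN VERB NOUN rule.)

```python
import random

nouns = ["blicket", "dax", "wug"]
verbs = ["gorps", "fems"]

def grammatical_sentence():
    """Generate a sentence that follows the toy NOUN VERB NOUN rule."""
    return f"{random.choice(nouns)} {random.choice(verbs)} {random.choice(nouns)}"

def is_grammatical(sentence):
    words = sentence.split()
    return (len(words) == 3 and words[0] in nouns
            and words[1] in verbs and words[2] in nouns)

# Listeners exposed only to sentences like this one...
print(grammatical_sentence())
# ...later prefer new grammatical strings over ungrammatical ones,
# even though they can't state the rule.
print(is_grammatical("dax gorps blicket"))  # True
print(is_grammatical("gorps dax blicket"))  # False
```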
Actually, an experiment isn't needed to prove this last point. Most native speakers are completely ignorant of the grammar rules governing their language. Nobody knows all the grammar rules of their language. Yet we are perfectly capable of following those grammar rules. When presented with an ungrammatical sentence, you may not be able to explain why it's ungrammatical (compare "Human being is important" with "The empathy is important"), yet you still instinctively know there is a problem.
And the list goes on. If people can think of other broad areas of subconscious processing, please comment away. These are simply the aspects of the unconscious I have studied.
You'll notice I haven't talked about defense mechanisms or repressed memories. These Freudian ideas have fallen out of the mainstream. But the fact remains that conscious thought and perception are just one corner of our minds.
Can computers talk? (The Chinese Room)
Can computers talk? Right now, no. Natural Language Processing -- the field of Artificial Intelligence & Linguistics that deals with computer language (computers using language, not C++ or BASIC) -- has made strides in the last decade, but the best programs still frankly suck.
Will computers ever be able to talk? And I don't mean Alex the Parrot talk. I mean speak, listen and understand just as well as humans. Ideally, we'd like something like a formal proof one way or another, such as the proof that it is impossible to write a computer program that will definitively determine whether another computer program has a certain kind of bug in it (specifically, an infinite loop). How about a program to emulate human language?
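(For the curious, the proof in question, usually called the halting problem, can be sketched in a few lines of code. This is my own compressed rendition of the standard argument, nothing specific to language.)

```python
# Suppose someone handed us a perfect loop-detector:
# halts(program, argument) always answers correctly.

def halts(program, argument):
    """Hypothetical perfect infinite-loop detector. No such
    function can actually exist, as the argument below shows."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever halts() predicts about running
    # the program on itself.
    if halts(program, program):
        while True:   # loop forever if halts() says it would halt
            pass
    else:
        return        # halt if halts() says it would loop

# Now ask: does paradox(paradox) halt? If halts(paradox, paradox)
# returns True, paradox(paradox) loops forever; if it returns False,
# paradox(paradox) halts. Either way the detector is wrong, so no
# such detector can exist.
```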
One of the most famous thought experiments addressing this question is the Chinese Room, created by John Searle back in 1980. The thought experiment is meant to be a refutation of the idea that a computer program, even in theory, could be intelligent. It goes like this:
Suppose you have a computer in a room. The computer is fed a question in Chinese, and it matches the question against a database in order to find a response. The computer program is very good, and its responses are indistinguishable from that of a human Chinese speaker. Can you say that this computer understands Chinese?
Searle says, "No." To make it even more clear, suppose the computer was replaced by you and a look-up table. Occasionally, sentences in Chinese come in through a slot in the wall. You can't read Chinese, but you were given a rule book for manipulating the Chinese symbols into an output that you push out the "out" slot in the wall. You are so good at using these rules that your responses are as good as those of a native Chinese speaker. Is it reasonable to say that you know Chinese?
The answer is, of course, that you don't know Chinese. Searle believes that this demonstrates that computers cannot understand language and, scaling the argument up, cannot be conscious, have beliefs or do anything else interesting and mentalistic.
One common rebuttal to this argument is that the system which is the room (input, human, look-up table) knows Chinese, even though the parts do not. This is attractive, since in some sense that is true of our brains -- the only systems we know do in fact understand language. The individual parts (neurons, neuron clusters, etc.) do not understand language, but the brain as a whole does.
It's an attractive rebuttal, but I think there is a bigger problem with Searle's argument. The thought experiment rests on the presupposition that the Chinese Room would produce good Chinese. Is that plausible?
If the human in the room only had a dictionary, it's clearly not plausible. Trying to translate based on dictionaries produces terrible language. Of course, Searle's Chinese Room does not use a dictionary. The computer version of it uses a database. If this is a simple database with two columns, one for inputs and one for outputs, it would have to be infinitely large to perform as well as a human Chinese speaker. As Chomsky famously demonstrated long ago, the number of sentences in any language is infinite. (The computer program could be more complicated, it is true. At an AI conference I attended several years ago, template-based language systems were all the rage. These systems try to fit all input into one of many template sentences. Responses, similarly, are created out of templates. They work much better than earlier computerized efforts, but they are still very restricted.)
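(To see how quickly the naive two-column version breaks down, here is a toy Chinese Room as a literal look-up table. English placeholders of my own stand in for the Chinese; the point is only that anything not already in the table draws a blank, and the set of possible sentences never runs out.)

```python
# A toy Chinese Room: a literal two-column look-up table.
lookup = {
    "How are you?": "Fine, thanks. And you?",
    "What is your name?": "My name is Room.",
}

def chinese_room(question):
    # Anything not already in the table gets no sensible response,
    # and no finite table can cover an unbounded set of sentences.
    return lookup.get(question, "???")

print(chinese_room("How are you?"))                      # canned answer
print(chinese_room("What did you think of the movie?"))  # ???
```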
The human version of the Chinese Room that Searle gives us is a little bit different. In that one, the human has a set of rules to apply to the input in order to produce an output. In Minds, Brains and Science, which contains the version of this argument I'm working from, he isn't very explicit about how this would work, but I'm assuming it is something like a grammar for Chinese. Even supposing that applying grammar rules without any knowledge of the meanings of the words could work, the fact is that after decades of research, linguists still haven't produced a complete grammatical description of any living language.
The Chinese Room would require a much, much more sophisticated system than Searle grants. In fact, it would require something so complicated that nobody even knows what it would look like. The only machine currently capable of processing human language as well as a human is the human brain. Searle's conceit was that we could have a "dumb" algorithm -- essentially a look-up table -- that processed language. We don't have one. Maybe we never will. Maybe, in order to process human language at the same level of sophistication as a human, the "system" must be intelligent, must actually understand what it's talking about.
This brings us to the flip side of Searle's thought experiment: Turing's. Turing proposed to test the intelligence of computers this way: once a computer can compete effectively in parlor games, it's reasonable to assume it's as intelligent as a human. The particular parlor game isn't important; what's important is the flexibility it requires. Modern versions of the Turing Test focus on the computer being able to carry on a normal human conversation -- essentially, to do what the Chinese Room would be required to do. The Turing assumption is that the simplest possible method of producing human-like language requires cognitive machinery on par with a human's.
If anybody wants to watch a dramatization of these arguments, I suggest the current re-imagining of Battlestar Galactica. The story follows a war between humans and intelligent robots. The robots clearly demonstrate emotions, intelligence, pain and suffering, but the humans are largely unwilling to believe any of it is real. "You have software, not feelings," is the usual refrain. Some of the humans begin to realize that the robots are just as "real" to them as the other humans. The truth is that our only evidence that other humans really have feelings, emotions, consciousness, etc., is through their behavior.
Since we don't yet have a mathematical proof one way or another, I'll have to leave it at that. In the meantime, having spent a lot of time struggling with languages myself, the Turing view seems much more plausible than Searle's.
Visual illusions -- a compendium
Academic conferences involve a lot of long speeches. To liven things up, the Vision Sciences Society always has a visual illusions night. If you can't make it to their conference, I want to direct you to an incredible Web compendium of illusions.
Many of the illusions illustrate the point that I've made before, which is that what you see is usually, but not always, a reasonably accurate depiction of reality. One such is an illustration of motion-induced blindness. As you stare at an image with both stationary and moving parts, the stationary parts seem to flicker in and out of existence.
Another is the "stepping feet" illusion, which causes you to perceive time and space out of whack. Two small rectangles move across the screen. Sometimes they seem to be going the same speed. Other times they seem out of step with each other. In fact, they always move in lockstep; the "stepping" is your mind playing tricks on you.
One of my favorites is the "Rotating Snake" illusion. The image appears to be in constant motion, but in fact it is completely stationary.
If you want to be enlightened as well as stunned and amazed, the website provides detailed explanations of each illusion, along with references to scientific papers investigating these phenomena. The main page of the website is here. One thing it does not explain, however, is why, although vision scientists spend their lives studying the visual world, their websites are always so ugly.
The neuroscience of political orientation
The new issue of Nature Neuroscience carries work from David Amodio's group at New York University titled "Neurocognitive Correlates of Liberalism & Conservatism." It's a solid piece of basic cognitive neuroscience. The study's claim to fame is that it "is the first study connecting individual differences in political ideology to a basic neuro-cognitive mechanism for self-regulation." The paper is on very solid empirical grounds and makes very limited claims.
This did not stop William Saletan of Slate from publishing a sarcastic rebuke ("Liberal Interpretation: Rigging a study to make conservatives look stupid") that is misleading, irresponsible and simply a bad piece of reporting. To find out what the research really says -- and to understand what makes Saletan's article such an embarrassment to him, to Slate and to its readers -- read on.
(Caveat: For a long time, Saletan was my favorite writer on Slate. I read his column religiously. As fantastic as John Dickerson is, I mourned when he replaced Saletan on Slate's politics beat. Saletan's recent science writing is another story.)
I won't blame Saletan for his provocative title ("Rigging a study to make conservatives look stupid" -- fighting words if there ever were any). I have my own penchant for catchy, over-the-top titles. However, Saletan wastes no time in setting up a straw man: "Are liberals smarter than conservatives? It looks that way, according to a study published this week in Nature Neuroscience." While he proceeds to tar, feather, tear down, set fire to and otherwise abuse the straw man, Saletan never bothers to mention that these are his words, not Amodio's. The study never discusses intelligence nor even uses the word.
The heart of Saletan's article is meant to be a refutation of what he claims are the study's central tenets: that conservatives get stuck in habitual ways of thinking, are less responsive to information and are less able to deal with complexity and ambiguity. He then proceeds to argue that the experiment outlined in the paper tests none of these things. His star witness: the paper itself:
"[E]ither the letter "M" or "W" was presented in the center of a computer monitor screen. … Half of the participants were instructed to make a "Go" response when they saw "M" but to make no response when they saw "W"; the remaining participants completed a version in which "W" was the Go stimulus and "M" was the No–Go stimulus. … Responses were registered on a computer keyboard placed in the participants' laps. … Participants received a two-minute break halfway through the task, which took approximately 15 minutes to complete."
Fifteen minutes is a habit? Tapping a keyboard is a way of thinking? Come on.
Well, when you put it that way, it does sound a little silly. But only if you put it that way. Saletan quoted directly from the paper's methods section, and it comes across as faintly ridiculous when reproduced within a chatty Slate-style article. It would come across very differently if he had written:
In this experiment, participants watched letters flash on a computer screen. When they saw the target letter appear, they pressed a button. Otherwise, they did nothing. This is called a 'go/no-go' task and is heavily used in psychology and neuroscience as a test of attention and cognitive control. A quick PsycINFO database search for 'go' & 'no go' turns up 674 published papers, covering everything from children with ADHD to baseball players.
It is a very simple task, far removed from what most of us would consider 'thinking,' but scientists have found it to be very useful precisely because it is so simple and easy to use. Compare: a major component of an IQ exam is a test in which you repeat back a list of numbers. It may not sound like much, but performance on that task turns out to be highly predictive of general intelligence, and it is simple and easy to administer -- very much unlike, say, deriving quantum mechanics from scratch, which might strike most people as a more common-sense test of intelligence.
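(A concrete, if crude, illustration: below is a toy Python sketch of the logic of a go/no-go task. The stimuli and responses are simulated, and the parameters are my own invention, not anything from Amodio's paper; real experiments present letters on a screen and record keypresses with millisecond precision. The measure of response inhibition is the false-alarm rate -- how often you respond when you should have withheld.)

    # Toy sketch of go/no-go logic with simulated data (not the authors' code).
    import random

    GO, NOGO = "M", "W"   # counterbalanced across participants in the real study

    def make_trials(n_go=80, n_nogo=20):
        """Build a random sequence that is mostly 'go' trials."""
        trials = [GO] * n_go + [NOGO] * n_nogo
        random.shuffle(trials)
        return trials

    def simulate_responses(trials, p_false_alarm=0.25):
        """Pretend participant: always responds on 'go' trials, and sometimes
        fails to withhold a response on 'no-go' trials."""
        return [True if s == GO else (random.random() < p_false_alarm) for s in trials]

    def score(trials, responses):
        hits = sum(1 for s, r in zip(trials, responses) if s == GO and r)
        false_alarms = sum(1 for s, r in zip(trials, responses) if s == NOGO and r)
        return hits / trials.count(GO), false_alarms / trials.count(NOGO)

    trials = make_trials()
    responses = simulate_responses(trials)
    hit_rate, fa_rate = score(trials, responses)
    print(f"Hit rate: {hit_rate:.2f}, false-alarm rate: {fa_rate:.2f}")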
Leaving out any such context is Saletan's crime of omission. Here's the crime of commission: Saletan is arguing that the experiment fails to support the claim that conservatives get stuck in habitual ways of thinking, are less responsive to information and less able to deal with complexity and ambiguity.
What the paper actually says is that
Political scientists and psychologists have long noted differences in the cognitive and motivational profiles of liberals and conservatives in the USA and elsewhere. Across dozens of behavioral studies, conservatives have been found to be more structured and persistent in their judgments and approaches to decision-making, as indicated by higher average scores on psychological measures of personal needs for order, structure and closure. Liberals, by contrast, report higher tolerance of ambiguity and complexity, and greater openness to new experiences on psychological measures. Given that these associations between political orientation and cognitive styles have been shown to be heritable, evident in early childhood, and relatively stable across the lifespan, we hypothesized that political orientation may be associated with individual differences in a basic neurocognitive mechanism involved broadly in self-regulation.
In other words, previous studies have suggested that conservatives are less responsive to information, etc. The purpose of this study is to see whether conservatives and liberals differ on a "basic neurocognitive mechanism." The paper goes on to show that the brain waves recorded when conservatives and liberals do the go/no-go task (a basic test of this neurocognitive mechanism) do in fact differ. (Strangely, although the paper's focus is brain waves, this gets mostly lost in Saletan's description.)
There is so much more fault to find with Saletan's piece, but I'll finish with just one more point. He writes, "The conservative case against this study is easy to make. Sure, we're fonder of old ways than you are. That's in our definition... If you studied us in real life, you'd find that while we second-guess the status quo less than you do, we second-guess putative reforms more than you do, so in terms of complexity, ambiguity, and critical thinking, it's probably a wash."
The Amodio study, as already pointed out, never claimed that conservatives are dumb or that their behavior is per se maladaptive. However, it does say this:
Although a liberal orientation was associated with better performance on the response-inhibition task examined here, conservatives would presumably perform better on tasks in which a more fixed response style is optimal.
It appears that Saletan's conservative case was already made for him, by Amodio & co. It would be tempting to say that Saletan never read the paper, except that he quotes from it so much.
Slate has made its mark by putting news into context. Legal experts parse the latest Supreme Court rulings, putting them into historical and legal context so that we know what they really mean. Ex-corporate hot-shots interpret business news. Politicos give us a feel for what it's really like on a campaign.
Saletan is the closest Slate comes to a full-time science writer, but he consistently fails to put science news into perspective. Often, he seems to completely misunderstand it. In this case, he should have explained exactly what the study was and what it wasn't. He might have explained why the scientists used the methods that they did. He could have discussed how the study has been received in the media and pointed out mistakes and misreadings.
Basically, he should have written this post.
Visual memory -- does it even exist?
Researchers at Rochester recently reported that short-term memory for sign language words is more limited than for spoken words. In some sense, this is surprising. We've known for a long time now that sign languages recruit the same brain areas as spoken languages, so it stands to reason that many of the properties of sign languages would be similar to those of spoken languages, despite the obvious differences.
On the other hand, short-term visual memory is severely limited. If you give somebody a list of 7 spoken words, they can typically remember all of them. If you show somebody 7 objects, they typically cannot remember them all. The textbooks say that you can only remember about 4 visual objects, but that turns out to be true only for very simple objects. In a series of experiments I ran (some of them online), the average person could remember only about 2 objects.
Even more striking is that visual short-term memory cannot be trained like verbal memory can be. A few people have learned to extend their verbal memory so that they could remember dozens of words at a time. However, nobody has been able to significantly improve their visual short-term memory (see a research report here).
Visual short-term memory is so incredibly limited that some vision scientists have wondered whether, in some sense, it really exists. That is, they think it may just be a byproduct of some other system (like our ability to imagine visual scenes), rather than a memory system in its own right. There is some sense to this. After all, what do you need short-term visual memory for? With verbal memory, it's obvious: you need to be able to remember the first part of a sentence while reading or hearing the rest of it. But why would you need to remember what you see over very short intervals?
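If you want a feel for what a span task is like, here is a bare-bones console sketch in Python. It is my own toy, not a lab protocol, and it only covers the verbal version; a real visual-memory experiment would display pictures with carefully controlled exposure times.

    # Bare-bones console span task (illustrative toy, not an actual protocol).
    import random
    import time

    WORD_POOL = ["dog", "cat", "elephant", "mouse", "chair", "apple",
                 "river", "candle", "guitar", "window", "pencil", "cloud"]

    def run_trial(list_length=7, study_seconds=3):
        items = random.sample(WORD_POOL, list_length)
        print("Remember these:", " ".join(items))
        time.sleep(study_seconds)
        print("\n" * 40)   # crude way to push the list off the screen
        recalled = input("Type back as many as you can, separated by spaces: ").split()
        correct = len(set(recalled) & set(items))
        print(f"You recalled {correct} of {list_length} items.")
        return correct

    if __name__ == "__main__":
        run_trial()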
Those who do not want to read a plug for my ongoing research should stop reading here.
I've been really fascinated by the limitations of short-term visual memory. I have run several experiments, one of which is still continuing. You can participate in it here.
Can a parrot really talk? (So long, Alex)
Alex the Parrot, research subject and beloved friend of Irene Pepperberg of Brandeis and Harvard Universities, died last week. He may be the first parrot to merit an obituary in the New York Times. Alex was famous for being able not just to name objects but to count them -- something not all human cultures do. He could also name the colors of objects.
A colleague of mine has actually seen this in action and confirms that it is really true. Alex was no doubt a very remarkable bird, but that doesn't mean that the parrot could really talk. Clever Hans, an early 20th-century phenomenon, was a horse that was able to stomp out the answers to simple math problems with his hooves. It turned out that Hans didn't actually know the answer to 3 + 3, but he did know to watch the behavior of his human observers. Whenever he noticed his owner getting excited, he knew that if he stopped stomping right then, he'd get a horsey treat.
The question, then, is whether Alex really used words as symbolic labels for colors, or if he was just very good at getting crackers. Dr. Pepperberg has been met with a lot of skepticism, partly because she is actually trained as a chemist, not as a linguist, psychologist or even biologist. Even my colleague, who admires Dr. Pepperberg greatly, isn't convinced that the professor uses the proper controls in her experiments to rule out simpler explanations for Alex's behavior.
Another reason to be skeptical is that there is a long history of animal language hoaxes. By "hoax," I don't mean that the researcher is pulling a fast one. Clever Hans's owner really believed his horse could complete math problems. The most famous example, though, is Koko the gorilla.
Koko, trained by Dr. Francine Patterson, is said to be able to use more than 1,000 words from American Sign Language and to understand approximately 2,000 words of spoken English. She was the inspiration for the talking gorilla in Michael Crichton's novel Congo. The problem with Koko is that her conversations are always interpreted by her trainers, who can be rather fanciful in ascribing meaning to the ape's gestures.
The best way to see this is to read Koko's Internet chat from 1998. Here are some excerpts:
HaloMyBaby: Is Koko aware that she's chatting with thousands of people now?
LiveKOKO: Good here.
DrPPatrsn: Koko is aware.
Patterson's translation seems reasonable, if a stretch, but look at the next question:
HaloMyBaby: Koko are you going to have a baby in the future?
LiveKOKO: Pink
Huh?
It continues:
DrPPatrsn: What about a baby? She's thinking...
LiveKOKO: Unattention
DrPPatrsn: She covered her face with her hands...which means it's not happening, bascially, or it hasn't happened yet.
Or maybe she's playing peekaboo.
HaloMyBaby: What are the names of your kittens? (and dogs?)
LiveKOKO: foot
DrPPatrsn: Foot isn't th e name of your kitty
HaloMyBaby: Koko, what's the name of your cat?
LiveKOKO: no
....
Question: Do you like to chat with other people?
LiveKOKO: fine nipple
DrPPatrsn: Nipple rhymes with people. She doesn't sign people per se, she was trying to do a "sounds like..."
I don't know of anybody who has accused Patterson of a hoax. It seems like she really believes that Koko talks. But the evidence isn't very compelling.
Koko is by all accounts a remarkable ape. Alex may be one of the brightest animals alive. But are they really talking, or just "aping," as it were, language?
The NYT piece ended by saying that as Dr. Pepperberg left the lab the last evening that Alex was alive, "Alex looked at her and said: 'You be good, see you tomorrow. I love you.'" Here's a quote from the Language Log:
"It's certainly not unimpressive that Alex had apparently learned to associate the quoted string of words above with being put back in his cage for the night. Call me callous, but I can't help thinking that Alex's last words would have been very different if Dr. Pepperberg and her associates had taken to saying 'see you later, bird-brain' to Alex every night."
Why is learning a foreign language so darn hard?
I work in a toddler language lab, where we study small children who are breezing through the process of language acquisition. They don't go to class, use note cards or anything, yet they pick up English seemingly in their sleep (see my previous post on this).
Just a few years ago, I taught high school and college students (read some of my stories about it here) and the scene was completely different. They struggled to learn English. Anyone who has tried to learn a foreign language knows what I mean.
Although this is well known, it's a bit of a mystery why. It's not the case that my Chinese students didn't have the right mouth shape for English (I've heard people -- not scientists -- seriously propose this explanation before). It's also not just that you can learn only one language. There are plenty of bilinguals out there. Jesse Snedeker (my PhD adviser as of Monday) and her students recently completed a study of cross-linguistic late-adoptees -- that is, children who were adopted between the ages of 2 and 7 into a family that spoke a different language from that of the child's original home or orphanage. In this case, all the children were from China. They followed the same pattern of linguistic development -- both in terms of vocabulary and grammar -- as native English speakers and in fact learned English faster than is typical (they steadily caught up with same-age English-speaking peers).
So why do we lose that ability? One model, posited by Michael Ullman at Georgetown University (full disclosure: I was once Dr. Ullman's research assistant), has to do with the underlying neural architecture of language. Dr. Ullman argues that basic language processes are divided into vocabulary and grammar (no big shock there) and that vocabulary and grammar are handled by different parts of the brain. Simplifying somewhat, vocabulary is tied to temporal lobe structures involved in declarative memory (memory for facts), while grammar is tied to procedural memory (memory for how to do things like ride a bicycle) structures including the prefrontal cortex, the basal ganglia and other areas.
As you get older, as we all know, it becomes harder to learn new skills (you can't teach an old dog new tricks). That is, procedural memory slowly loses the ability to learn new things. Declarative memory stays with us well into old age, declining much more slowly (unless you get Alzheimer's or other types of dementia). Based on Dr. Ullman's model, then, you retain the ability to learn new words but have more difficulty learning new grammar. And grammar does appear to be the typical stumbling block in learning new languages.
Of course, I haven't really answered my question. I just shifted it from mind to brain. The question is now: why do the procedural memory structures lose their plasticity? There are people studying the biological mechanisms of this loss, but that still doesn't answer the question we'd really like to ask, which is "why are our brains constructed this way?" After all, wouldn't it be ideal to be able to learn languages indefinitely?
I once put this question to Helen Neville, a professor at the University of Oregon and expert in the neuroscience of language. I'm working off of a 4-year-old memory (and memory isn't always reliable), but her answer was something like this:
Plasticity means that you can easily learn new things. The price is that you forget easily as well. For facts and words, this is a worthwhile trade-off. You need to be able to learn new facts for as long as you live. For skills, it's maybe not a worthwhile trade-off. Most of the things you need to be able to do you learn to do when you are relatively young. You don't want to forget how to ride a bicycle, how to walk, or how to put a verb into the past tense.
That's the best answer I've heard. But I'd still like to be able to learn languages without having to study them.
Cognitive Science 3.0
Scholars have been wondering how thought works -- and how it is even possible -- for a long time. Philosophers such as John Locke and early psychologists such as Sigmund Freud mostly just thought about thought very hard. As a method, this is called introspection. That's Cognitive Science 1.0.
Experimental Psychology, which got its start around 100 years ago (William James was an important early proponent), uses the scientific method to probe the human mind. Psychologists develop controlled experiments, where participants are assigned to one of several conditions and their reactions are measured. Famous social psychology experiments include the Milgram experiment or the Stanford Prison Experiment, but most are more mundane, such as probing how humans read by monitoring eye movements as they read a simple text. This is Cognitive Science 2.0.
Typically, such experiments involve anywhere from 2 to 20 participants, rarely more. Partly, this is because each participant is expensive -- they have to be recruited and then tested. There are many bottlenecks, including the fact that most labs have only a small number of testing facilities and a handful of experimenters. Partly, cognitive scientists have settled into this routine because a great variety of questions can be studied with a dozen or so subjects.
The Internet is changing this just as it has changed so much else. If you can design an experiment that will run via the Web, the bottlenecks disappear. Thousands of people can participate simultaneously, in their own home, on their own computer, and without the presence of an experimenter. This isn't just theoretical; one recent paper in Science reported the results of a 2,399-person Web-based survey. I currently have over 1,200 respondents for my Birth Order Survey.
Distance also ceases to be an issue. Want to know whether people from different cultures make different decisions regarding morality? In the past, you would need to travel around the world and survey people in many different countries. Now, you can just put the test online. (The results of a preliminary study of around 5,000 participants found that reasoning about certain basic moral scenarios does not differ across several different cultures.) Or perhaps you are interested in studying people with a very rare syndrome like CHARGE. There is no need to go on a road trip to track down enough participants for a study -- just email them with a URL. (Results from this study aren't available, but it was done by Timothy Hartshorne's lab.)
This may not be as radical a shift as adopting the scientific method, but it is affecting what can be studied and -- because Web experiments are so cheap -- who can study it. I don't think it's an accident that labs at poorer institutions seem to be over-represented on the Web. It is also opening up direct participation in science to pretty much anybody who cares to click a few buttons.
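To make the mechanics concrete, here is a minimal sketch of the kind of Web endpoint that could collect survey responses. It uses Flask, a lightweight Python web framework; the route, fields and file names are invented for illustration, and this is not the software behind any of the studies mentioned here.

    # Minimal sketch of a Web survey endpoint (illustrative only).
    # Requires the Flask package (pip install flask).
    import csv
    import datetime
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    DATA_FILE = "responses.csv"   # hypothetical output file

    @app.route("/submit", methods=["POST"])
    def submit():
        # e.g. {"participant": "p001", "birth_order": 2}
        answer = request.get_json(force=True)
        with open(DATA_FILE, "a", newline="") as f:
            csv.writer(f).writerow([datetime.datetime.utcnow().isoformat(),
                                    answer.get("participant"),
                                    answer.get("birth_order")])
        return jsonify({"status": "ok"})

    if __name__ == "__main__":
        app.run()   # thousands of participants can hit this URL from home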
Beyond surveys
The experiments above were essentially surveys, and although using the Web for surveys is a good idea, surveys are limited in terms of what they can tell you. If you ask somebody, "Do you procrastinate?" you will learn whether your subjects think they procrastinate, not necessarily whether they do. Also, surveys aren't very good for studying vision, speech perception and many other interesting questions. If surveys were all the Web was good for, I would not be nearly so excited.
A few labs have begun pushing the Web envelope, seeking ways to perform more "dynamic" experiments. One of my favorite labs is Face Research. Their experiments involve such things as rating faces, carefully manipulating different aspects of the face (contrast, angle, etc.) to see which will lead you to say the face is more attractive. An even more ambitious experiment -- and the one that prompted me to start my own Web-based research -- is the AudioVisual Test, which integrates sound and video clips to study speech processing.
Of the hundreds of studies posted on the largest clearing house for Web-based experiments, all but a handful are surveys. Part of the issue is that cognitive science experiments often focus on the speed with which you can do something, and there has been a lot of skepticism about the ability to accurately record reaction times over the Internet. However, one study I recently completed found a significant reaction-time effect of less than 80 milliseconds. 80 ms is a medium-sized amount of time in cognitive science, but it is a start. This may improve, as people like Tim O'Donnell (a graduate student at Harvard) are building new software with more accurate timing. The Implicit Association Test, which attempts to measure subtle, underlying racial biases, is also based on reaction time. I do not know whether they have been getting usable data, but I do know that they, too, were recently developing proprietary software that should improve timing accuracy.
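How much does sloppy timing actually matter? A back-of-the-envelope simulation (all numbers invented; assumes the numpy and scipy packages are installed) suggests that an 80 ms effect can survive a fair amount of added timing jitter, provided you collect enough trials.

    # Toy simulation: can an 80 ms reaction-time difference survive timing noise?
    # All numbers are made up for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200                              # trials per condition

    # "True" reaction times: condition B is 80 ms slower on average.
    rt_a = rng.normal(500, 100, n)       # mean 500 ms, SD 100 ms
    rt_b = rng.normal(580, 100, n)

    # Add browser/keyboard jitter, say uniformly up to 50 ms per trial.
    noisy_a = rt_a + rng.uniform(0, 50, n)
    noisy_b = rt_b + rng.uniform(0, 50, n)

    t, p = stats.ttest_ind(noisy_a, noisy_b)
    print(f"t = {t:.2f}, p = {p:.2g}")   # the 80 ms effect remains easy to detect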
Do you really need 5,000 participants?
It has been argued that the only reason to put an experiment on the Web is that you're too lazy to test people yourself. After all, if the typical experiment takes 12 participants, why would you need thousands?
There are many reasons. One is to avoid repeating stimuli. The reason we can get away with having only 12 participants is that we typically ask each one several hundred questions. That can get very boring. For instance, in a memory experiment, you might be asked to remember a few words (dog, cat, elephant) for a few seconds. Then you are presented with another word (mouse) and asked if that's one of the words you were trying to remember. And you will do this several hundred times. After a few hundred, you might simply get confused. So, in a recent Web-based project, I looked specifically at the very first response. If 20 subjects answering 200 questions gives me 4000 responses, that means I need 4000 subjects if I want to ask each one only one question.
Similarly, in an ongoing experiment, I am interested in how people understand different sentences. There are about 600 sentences that I want to test. Reading through that many makes my eyes start to glaze, and it's my experiment. I put the experiment on the Web so that I could ask each person to read only 25 sentences -- which takes no time at all -- and I'll make up the difference by having more participants.
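The bookkeeping for that kind of design is simple. Here is one way it could be done -- a sketch with placeholder sentence IDs, not the actual code behind my experiment: shuffle the 600 sentences, deal them into 24 lists of 25, and cycle through the lists as participants arrive.

    # Sketch of splitting a large stimulus set across many participants
    # (illustrative; placeholder sentence IDs).
    import random

    N_SENTENCES = 600
    PER_PARTICIPANT = 25

    sentences = [f"sentence_{i:03d}" for i in range(N_SENTENCES)]
    random.shuffle(sentences)

    # Deal the shuffled sentences into 600 / 25 = 24 non-overlapping lists.
    lists = [sentences[i:i + PER_PARTICIPANT]
             for i in range(0, N_SENTENCES, PER_PARTICIPANT)]

    def list_for(participant_number):
        """Cycle through the lists so every sentence gets roughly equal coverage."""
        return lists[participant_number % len(lists)]

    print(list_for(0)[:3])    # first three sentences shown to participant 0
    print(list_for(24)[:3])   # participant 24 sees the same list as participant 0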
This is sort of like the Open Source model of programming. Rather than having a small number of people (participants) each put in a great deal of work, you get a large number of volunteers to each put in a small amount of time.
Sometimes the effect you are looking for is subtle, and you need many participants to see it. This is true of the work the folks at the Moral Sense Test are doing.
That said, many experiments can be done happily with just a few participants, and some are actually done better with only a small number of participants. Just as introspection (Cognitive Science 1.0) is still a useful technique and still exists alongside Cognitive Science 2.0, Web-based experimentation will not replace the brick-and-mortar paradigm, but extend into new territory.
But are these experiments really valid?
There are several typical objections to Web-based experiments. Isn't the sample unrepresentative? It's on the Internet -- how do you know people aren't just lying? None of these turns out to be a serious problem.
Isn't the sample unrepresentative? It is true that the people who surf to my Web-based lab are a self-selected group probably not representative of all humans. However, this is true of the participants in pretty much every cognitive or social science experiment. Typically, participants are either undergraduates required to participate in order to pass Psych 101, or they are paid a few dollars for their time. Either way, they aren't simply random people off the street.
It turns out that it typically doesn't matter. While I am fairly certain that liberals and conservatives don't see eye-to-eye on matters like single-payer health care or the war in Iraq, I'm fairly certain that both groups see in the same way. It doesn't really matter if there is an over-abundance of liberal students in my subject pool if I want to study how their vision works. Or speech processing. Etc.
In any case, Web-based experiments allow researchers to reach out beyond the Psych 101 subject pool. That's actually why I put my Birth Order Survey online. I have already surveyed Psych 101 students, and I wanted to make sure my results weren't specific to students in that class at that university. Similarly, the Moral Sense Test is being used to compare people from different social backgrounds in terms of their moral reasoning. Finding conservatives in Harvard Square (MST is run by the Hauser Lab at Harvard) is tough, but online, it's easy.
One study specifically compared Web-based surveys to traditional surveys (Gosling, Vazire, Srivastava & John, "Should we trust Web-based studies?", American Psychologist, Vol. 59(2), 2004) and found that "Internet samples are shown to be relatively diverse with respect to gender, socioeconomic status, geographic region, and age."
It's on the Internet. Maybe people are just lying. People do lie on the Internet. People that come to the lab also lie. In fact, my subjects in the lab are in some sense coerced. They are doing it either for the money or for course credit. Maybe they are interested in the experiment. Maybe they aren't. If they get tired half way through, they are stuck until the end. (Technically, they are allowed to quit at any time, but it is rare for them to do so.) Online, everybody is a volunteer, and they can quit whenever they want. Who do you think lies less?
In fact, a couple of studies have compared the results of Web-based studies and "typical" studies and found no difference in terms of results (Gosling, Vazire, Srivastava & John, 2004; Meyerson & Tryon, "Validating Internet research," Behavior Research Methods, Vol 35(4), 2003). As Gosling and colleagues pointed out, a few "nonserious or repeat" participants did not adversely affect the results of Web-based experiments.
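For the statistically inclined, the logic of those comparisons is easy to sketch: compute the same summary statistics in the lab sample and the Web sample and check that they agree. The toy version below uses invented numbers; the published papers used much richer psychometric analyses.

    # Toy comparison of a lab sample and a Web sample (all data invented).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    true_item_means = rng.uniform(2, 6, 20)               # 20 survey items, 1-7 scale
    lab_means = true_item_means + rng.normal(0, 0.2, 20)  # small sampling noise
    web_means = true_item_means + rng.normal(0, 0.2, 20)

    r, p = stats.pearsonr(lab_means, web_means)
    print(f"Item means correlate r = {r:.2f} across the two samples")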
The field seems to be more and more comfortable with Web-based experiments. According to one study (Skitka & Sargis, "The Internet as a psychological laboratory," Annual Review of Psychology, Vol 57, 2006), at least 21% of American Psychological Association journals have published the results of Web-based experiments. Both Science and Nature, the gold standards of science, have published Web-based research.
Please help get my Web-based research published by participating. On the site you can also find results of a couple older studies and more information about my research topics.
Why you can't beat the scientific method
Scientists' parochialism about their method for gaining knowledge (the scientific method) can provoke resentment. I recall some rousing oratory in an undergrad anthropology course about "non-Western science". Even closer to home, I once seriously offended a relative who works in the public school system by suggesting undiplomatically that her thoughts about dyslexia were interesting but that they couldn't be taken too seriously until they had been subjected to empirical study.
My tactlessness aside, there are very good reasons to be skeptical of empirical "knowledge" that has not been rigorously tested in a controlled manner. There are many such "alternative routes to wisdom" -- Scripture, psychics, mediums -- but the one I want to debunk in this post is experience. Experience was at the heart of my disagreement with the veteran teacher, who felt that the knowledge she had accumulated over the years was more than enough evidence for her ideas. Experience, I believe, also helps explain why many people believe psychics ("her predictions are always right!") or even Scripture. And experience, it turns out, isn't all it's cracked up to be.
A recent study found that experts weren't necessarily much better than laypeople in predicting the outcomes of disputes. The disputes "included a hostile takeover attempt, nations preparing for war, a controversial investment proposal, a nurses’ strike, an action by football players for a larger share of the gate, an employee resisting the downgrading of her job, artists demanding taxpayer funding, and a new distribution arrangement that a manufacturer proposed to retailers." Experts were asked to make predictions about a particular conflict chosen to match their specialty. Not only were the experts barely better than non-experts, they were barely better than someone guessing randomly.
This isn't actually news; scientists have suspected as much for some time. Mutual fund managers are highly paid investing experts, but it is rare for a fund manager to beat the market.
Why doesn't experience help as much as it should? And why do we trust it anyway? In fact, the failure of expertise isn't all that surprising given what we know about human memory and learning. People tend to ignore evidence that doesn't fit their beliefs about the world but remember confirming evidence vividly. For instance, many people are convinced that the best way to make sure it won't rain is to take an umbrella with them on the way to work. Even without a controlled experiment, I'm fairly sure Sally Brown's umbrella habits don't control the weather. What is probably going on is that she vividly remembers all the times she lugged around an umbrella and the rain held off, but has completely forgotten the times it rained anyway.
Studies of sports gamblers picking winning teams (Gilovich, 1983; Gilovich & Douglas, 1986) found that they tended to remember incorrect picks not as poor judgment on their part, but as near-wins. "I was correct in picking St. Louis. They would have won except for that goofy bounce the ball took after the kickoff..." Evidence that should have proved their theories wrong was explained away instead.
Thus, accumulated wisdom is likely to be misleading. You may remember that Madame Juzminda's predictions always come true, but if you want to be sure, you shouldn't trust your memory; trust the written record. You say you can predict which baseball rookies will have meteoric careers and which will fizzle? That's an empirical claim, and we can easily test it scientifically. Billy Beane became a baseball sensation by jettisoning "experience" and relying on fact. I, for instance, believe that I'm better at predicting the rain than Accuweather, but until I've actually proven it in a controlled study, you're probably better off with the real weather report (although, just for kicks, I am planning on testing my weather acumen against the experts and will blog the results in a month or two).
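Scoring that contest will take nothing more than a written log and a little counting. Here is a sketch of the tally; the records below are invented placeholders, of course.

    # Sketch of scoring my rain-prediction contest against the pros.
    # The records below are invented placeholders; the real test will use
    # a written log kept over a month or two.
    my_predictions    = [True, False, True, True, False, False, True]   # rain expected?
    their_predictions = [True, True, False, True, False, True, True]    # e.g., Accuweather
    actually_rained   = [True, False, False, True, False, True, True]

    def accuracy(predictions, outcomes):
        hits = sum(p == o for p, o in zip(predictions, outcomes))
        return hits / len(outcomes)

    print(f"Me:       {accuracy(my_predictions, actually_rained):.0%}")
    print(f"The pros: {accuracy(their_predictions, actually_rained):.0%}")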
This is not all to say that Madame Juzminda, the experienced doctor or Scripture are all wrong. Maybe they are. Maybe they aren't. Until you've tested them, it's better to keep an open mind.
(Caveat: Scientists have egos, bad memories and agendas. The scientific method is an ideal not always reached. So's democracy, but it still beats a dictatorship any day.)
My tactlessness aside, there are very good reasons to be skeptical of empirical "knowledge" that has not been rigorously tested in a controlled manner. There are many such "alternative routes to wisdom": Scripture, psychics, mediums... -- but the one that I want to specifically debunk in this post is experience. Experience was at the heart of my disagreement with the veteran teacher, who felt her knowledge extracted from the years was more than enough evidence for her ideas. Experience, I believe, is related to why many people believe psychics ("her predictions are always right!") or even Scripture. And experience, it turns out, isn't all it's cracked up to be.
A recent study found that experts weren't necessarily much better than laypeople in predicting the outcomes of disputes. The disputes "included a hostile takeover attempt, nations preparing for war, a controversial investment proposal, a nurses’ strike, an action by football players for a larger share of the gate, an employee resisting the downgrading of her job, artists demanding taxpayer funding, and a new distribution arrangement that a manufacturer proposed to retailers." Experts were asked to make predictions about a particular conflict chosen to match their specialty. Not only were the experts barely better than non-experts, they were barely better than someone guessing randomly.
This isn't actually news; scientists have suspected it for some time. Mutual fund managers are highly paid investing experts, yet it is rare for a fund manager to beat the market.
Why doesn't experience help as much as it should? And why do we trust it anyway? In fact, the failure of expertise isn't all that surprising given what we know about human memory and learning. People tend to ignore evidence that doesn't fit their beliefs about the world but to remember confirming evidence vividly. For instance, many people are convinced that the best way to make sure it won't rain is to take an umbrella with them on the way to work. Even without a controlled experiment, I'm fairly sure Sally Brown's umbrella habits don't control the weather. What is probably going on is that she vividly remembers all those times she lugged around a useless umbrella but has completely forgotten the times it rained anyway.
Studies of sports gamblers picking winning teams (Gilovich, 1983; Gilovich & Douglas, 1986) found that they tended to remember incorrect picks not as poor judgment on their part but as near-wins: "I was correct in picking St. Louis. They would have won except for that goofy bounce the ball took after the kickoff..." Evidence that should have proved their theories wrong was explained away instead.
Thus, accumulated wisdom is likely to be misleading. You may remember that Madame Juzminda's predictions always come true, but if you want to be sure, you shouldn't trust your memory but the written record. You say you can predict which baseball rookies will have meteoric careers and which will fizzle? That's an empirical claim, and we can easily test it scientifically. Billy Beane became a baseball sensation by jettisoning "experience" and relying on the data. I, for instance, believe that I'm better at predicting rain than Accuweather, but until I've actually proven it in a controlled study, you're probably better off with the real weather report (although, just for kicks, I am planning to test my weather acumen against the experts and will blog about the results in a month or two).
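To make the "written record beats memory" point concrete, here is a minimal sketch of how such a test could be scored. The predictions and outcomes below are invented for illustration; the idea is just that once everything is written down, comparing two forecasters is a matter of counting, not remembering.

```python
# Toy example: score two sets of rain predictions against what actually happened.
# All of the data below is made up for illustration.
from math import comb

mine        = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]   # 1 = predicted rain
accuweather = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
actual      = [1, 1, 0, 0, 0, 0, 1, 0, 1, 1]   # 1 = it rained

me_right  = [m == a for m, a in zip(mine, actual)]
acc_right = [w == a for w, a in zip(accuweather, actual)]
print(f"my hit rate:          {sum(me_right) / len(actual):.2f}")
print(f"Accuweather hit rate: {sum(acc_right) / len(actual):.2f}")

# Exact McNemar-style comparison on the days where exactly one forecaster was
# right: under the null hypothesis that we're equally good, those days split 50/50.
b = sum(me and not ac for me, ac in zip(me_right, acc_right))   # only I was right
c = sum(ac and not me for me, ac in zip(me_right, acc_right))   # only Accuweather was right
n, k = b + c, min(b, c)
p = min(1.0, 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n) if n else 1.0
print(f"days where only one of us was right: {n}, two-sided p = {p:.3f}")
```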
This is not to say that Madame Juzminda, the experienced doctor, or Scripture are all wrong. Maybe they are. Maybe they aren't. Until you've tested them, it's better to keep an open mind.
(Caveat: Scientists have egos, bad memories and agendas. The scientific method is an ideal not always reached. So's democracy, but it still beats a dictatorship any day.)
The New and Improved Web-based Cognition and Language Laboratory
Followers of this blog know that it is tied to a Web-based cognitive science lab. Dedicated followers also know that the old website was fuggly, to say the least.
It took longer than I care to admit, but the new and improved laboratory is now open for visitors. Important improvements, beyond the aesthetics, include actual results listed on the results page, a bit more background on our twin topics (cognition and language), and other new content.
As with all new releases, this one is bound to be buggy. If you notice any mistakes or problems, please send me a discreet email at coglanglab_AT_coglanglab.org or bare my laundry for all the world to see by leaving a comment here.
What is the relationship between short-term and long-term memory?
In a textbook, you may see a description of memory in terms of stages. The first stage is iconic memory, which lasts just a few seconds, during which you can to some degree revive the perceptual experience you are trying to remember. Think of this almost like the afterglow of a bright flash of light.
Then comes short-term memory, which may or may not also be described as working memory (they aren't necessarily the same thing), and which allows you to remember something for a short period of time by actively maintaining it. Anterograde amnesics (like the protagonist of Memento) have intact short-term memory. What they don't have is long-term memory, which is basically the ability to recall something you haven't thought about in a while.
There are many aspects of the relationship between short-term memory and long-term memory that are still not clear. Over the last several months, Tal Makovski and I have been running a study trying to clarify part of this relationship.
We thought we had concluded it last week. I took the experiment offline, analyzed the results, wrote up a report, and sent it to Tal. He wrote back with conclusions completely different from mine. Basically, the results of two conditions are numerically different, but statistically there is no difference. He believes that if we had more participants, the difference would become statistically significant. I don't.
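For readers who want to see the shape of the disagreement, here is a toy simulation (the effect size and sample sizes are invented, not our actual data): with a small real difference between conditions, a modest sample can easily come out "numerically different but not significant," while a larger sample drawn under exactly the same assumptions is far more likely to cross the p < .05 line.

```python
# Toy simulation of "numerically different but not statistically significant":
# the underlying effect is identical in both runs; only the sample size changes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2                      # assumed difference between conditions, in SD units

for n_per_condition in (30, 300):
    cond_a = rng.normal(0.0, 1.0, n_per_condition)
    cond_b = rng.normal(true_effect, 1.0, n_per_condition)
    result = stats.ttest_ind(cond_b, cond_a)
    print(f"n = {n_per_condition:3d} per condition: "
          f"mean difference = {cond_b.mean() - cond_a.mean():+.2f}, "
          f"p = {result.pvalue:.3f}")
```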
It's up to you, dear readers, to prove one of us wrong. The experiment is back online (click here for info; here to go straight to the test). It involves a quick visual short-term memory test, then you watch a video, after which you'll be quizzed on your memory for the video. It's by far the most entertaining of the experiments I've put online, mainly because the video is fantastic. It is Bill et John: Episode II, which was profiled in Slate. I've watched it easily a hundred times in the course of designing this study, and it's still fall-out-of-your-chair funny. Which is good, because it's nearly 10 minutes long, making the entire experiment take about 15 minutes.
Once again, you can find the experiment here. Once the results are available, I will post them on this blog and on the website.
Why girls say "holded" but boys say "held"
The most remarkable aspect of the human language faculty is that children learn so quickly and with such ease. Before they ever reach school, they are already learning new words at the rate of one every two hours, counting the hours they are asleep. Not only does this happen without intense schooling -- the method by which we learn most difficult skills -- but explicit instruction seems to be useless. Here is a conversation between a small child and a father, reproduced in The Language Instinct:
Child: Want other one spoon, Daddy.
Father: You mean, you want THE OTHER SPOON.
Child: Yes, I want other one spoon, please, Daddy.
Father: Can you say "the other spoon"?
Child: Other . . . one . . . spoon.
Father: Say ... "other."
Child: Other.
Father: "Spoon."
Child: Spoon.
Father: "Other ... Spoon."
Child: Other ... spoon. Now give me other one spoon?
All that said, language learning still takes the little geniuses a few years, and along the way they make mistakes. Mistakes are interesting because they provide a window into how and what the child is learning. One of the most interesting and probably best-studied mistakes is grammatical over-regularizations of the sort "I holded the mouses" or "I runned with the wolfes." Notice that the child has probably never heard anybody say "holded" or "mouses," so this proves children aren't simply repeating what they've heard. Here, the child has taken irregular verbs (held, ran) and nouns (mice, wolves) and made them regular. The standard explanation for this -- though not the only one -- is that over-regularization occurs because we have a default rule to add the regular affix (walk+ed, dog+s) unless we have memorized an irregular form. If the child has not yet learned the irregular form or momentarily forgets it, an over-regularization results.
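The "default rule unless an irregular form is retrieved" story is easy to state as a toy procedure. The little lexicon and the retrieval-failure probability below are invented for illustration; the point is only that when retrieval of "held" fails, the rule steps in and "holded" falls out automatically.

```python
# Toy version of the rule-plus-memorized-exceptions account of over-regularization.
# The lexicon and the retrieval-failure probability are made up for illustration.
import random

IRREGULAR_PAST = {"hold": "held", "run": "ran", "go": "went", "sing": "sang"}

def past_tense(verb, p_retrieval_failure=0.3):
    """Return the irregular past if it is retrieved from memory; otherwise apply the default rule."""
    irregular = IRREGULAR_PAST.get(verb)
    if irregular is not None and random.random() > p_retrieval_failure:
        return irregular          # irregular form successfully retrieved from memory
    return verb + "ed"            # default rule -> "holded", "runned", and also "walked"

random.seed(1)
print([past_tense("hold") for _ in range(5)])   # a mix of "held" and "holded"
print(past_tense("walk"))                       # regulars always come out "walked"
```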
A few years ago, I was working with Michael Ullman at Georgetown University on a project investigating possible differences in how men and women perform particular linguistic computations. The bigger project continues, and if I get enough requests, I may eventually post something about it. The basic idea was that a number of studies have suggested that women perform better on declarative memory tasks than men do. Since Dr. Ullman's model of language processing ascribes certain processes to declarative memory (to be exact, "declarative memory-related brain structures") and others to procedural memory (careful: different researchers use this word to mean different things), it predicted differences in how men and women would perform certain linguistic functions. This was a way of testing Ullman's model and perhaps also learning something that could be useful in medical treatments of patients with language problems.
One day, it occurred to me to explore childhood over-regularization. If women and girls have better memory for words, then they should have better memory for irregular forms and make fewer over-regularizations. We tested this and found a very surprising result: girls were actually three times more likely to over-regularize than boys.
This had us stumped for a while, but we eventually found an explanation. The "other model" mentioned above argues that over-regularization happens not through over-applying a grammatical rule but by analogy to regular forms (the difference is subtle, but it has big implications for how our minds and brains work -- for that reason this was one of the hot controversies throughout the 80s and 90s). Ullman's and similar models had always argued this was impossible because regular forms (walked, houses) are (mostly) not stored in memory. However, our ongoing research had suggested that women in fact do store a reasonable number of regular forms in memory after all, presumably because of superior declarative memory function. When we investigated the over-regularization data more carefully, we found evidence that the girls' over-regularizations -- but not the boys' -- were indeed a result of analogical reasoning, not rule-use. For whatever reason -- this is still not well understood -- the over-regularization-by-analogy process led to more "holded"s than the over-regularization-by-rule process.
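To convey the intuition behind the analogy account -- and this is only a caricature; the real models use phonological similarity and much richer machinery -- here is a sketch in which the pull toward "add -ed" grows with how many similar-sounding regular forms are sitting in memory. On this picture, a memory stocked with folded, molded, and scolded is exactly what pushes "hold" toward "holded."

```python
# Caricature of the analogy account: support for each past-tense pattern is the
# summed similarity of the new verb to stored forms that follow that pattern.
# The stored lexicon and the similarity measure are stand-ins for illustration.
from difflib import SequenceMatcher

STORED = {"fold": "folded", "mold": "molded", "scold": "scolded",
          "walk": "walked", "tell": "told", "sell": "sold"}

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def analogical_pull(verb):
    """Similarity-weighted support for 'add -ed' vs. the irregular patterns in memory."""
    regular   = sum(similarity(verb, v) for v, past in STORED.items() if past == v + "ed")
    irregular = sum(similarity(verb, v) for v, past in STORED.items() if past != v + "ed")
    return {"add -ed": round(regular, 2), "irregular": round(irregular, 2)}

print(analogical_pull("hold"))   # the stored regulars fold/mold/scold pull toward "holded"
```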
And that is why girls say "holded" while boys say "held". You can read the journal article here.
I have been thinking about running a related experiment at my Web-based lab some day in the future. In the meantime, I've been investigating other topics (please participate by clicking here).
Update on cat cognition
For those who were interested in how cats see the world, check out this new study. I'm not sure it isn't just saying that cats have motor memory (as do humans), but it's interesting nonetheless.
The Best Language Site on the Web
News junkies might start their Web-browsing day any number of ways. There are those who prefer the Post to the Times. Those with a business mind might start with the Journal. On the West Coast, there are those who swear by LA's daily. I myself start with Slate.
However, I can state, with little fear of correction, that the website of record for die-hard language buffs is the Language Log. The Language Log, I admit, is not for the faint of heart. The bloggers are linguists, and they like nothing better than parsing syntax. This is not William Safire.
What makes the Language Log great is that the writers really know what they are talking about. Growing up, I went to a chamber music camp several summers in a row (I played viola). One of my friends who attended the camp was a singer. One year, a violinist in her ensemble decided that, rather than play the violin part, she wanted to do the voice part for one movement of a piece. I never heard them perform, but I am assured she was awful. My friend complained:
"If you haven't studied the violin, you wouldn't try to perform a difficult piece for an audience of trained musicians. You'd barely be able to get a single note in tune, and you'd know it. Everybody can open their mouths and make sound come out, which means they think they can sing."
The world of language is afflicted with a similar problem. Everybody speaks a language, and many people believe they are experts in language (here, here, here). A great deal of what is written about language is embarrassing. To make matters worse, the field is packed with urban legends, like the one about all the words Eskimos supposedly have for snow (they have fewer than a half-dozen, approximately the same number as we have in English). Here is an urban legend the Language Log uncovered about the Irish not having a word for sex.
Language is one of the most complicated things in existence, and even the professionals understand remarkably little. The bloggers at the Language Log do a great job of giving even the casual reader a feel for what language is really about. They also spend a considerable portion of their time debunking fallacies and myths. If you read only one blog about language, LL would be my choice. If you read two, then you might consider reading my blog as well:)