Field of Science

Anonymice run wild through science

I recently mentioned Jack Shafer's long-standing irritation at the over-use of anonymous sources in journalism. Sometimes the irritation is at using anonymous sources to report banalities. In my favorite column in that series (which has unfortunately been moribund for the last year or two), Shafer calls out anonymous sources whose identities are transparent. Why pretend to be anonymous when a simple Google search will identify you?

I had a similar question recently when reading the method section of a psychology research paper. Here is the first paragraph from the method section:
Sixteen 4-year-olds (age: M = 4;7; range = 4;1-4;11) and 25 college students (age: M = 18;10; range = 18;4-19;6) participated in this study. All participants were residents of a small midwestern city. Children were recruited from university-run preschools and adults were undergraduate students at a large public university.
Small midwestern city? Large public university? I could Google the two authors, but luckily the paper already states helpfully on the front page that both authors work at the University of Michigan, located in Ann Arbor (a small midwestern city). Maybe the subject recruitment and testing were done in some other university town, but that's unlikely.

This false anonymity is common -- though not universal -- in psychology papers. I'm picking on this one not because I have any particular beef with these authors (which is why I'm not naming names), but simply because I happened to be reading their paper today.

This brings up the larger issue of the code of ethics under which research is done (here are the regulations at Harvard). After some notable ethical lapses in the early days of human research (for instance, Dr. Waterhouse trying out the smallpox vaccine on children), it became clear that regulations were needed. As with any regulations, however, form often wins over substance. A lab I used to work at had a very short consent form that said, in effect, that in the experiment you'll read words, speak out loud, and it won't hurt. This was later replaced with a multi-page consent form, probably at the request of our university ethics board, though I'm not sure. The effect was that our participants stopped reading the consent form before signing it. This was entirely predictable, and I think it is an example of valuing form (having participants sign a document) over substance (protecting research participants).

Since most of the research in a psychology department is less dangerous than filling out a Cosmo quiz, this doesn't really keep me up at night. However, I think it's worth periodically rethinking our regulations in light of their purpose.

Can you see this illusion?

Yesterday, Cognitive Daily posted a fairly compelling visual illusion. There is a white disk and a black disk. In the middle of each disk is a circle. The two circles change from black to white and back to black in unison.

Normally, the rules of perceptual grouping would cause you to see the two smaller circles, which change in unison, as related, and thus to see them blinking together. In this case, however, because the circles sit inside the larger disks, most people see them as blinking out of sequence. That is, you interpret the scene as a hole appearing in the left disk, then in the right disk, then in the left disk again.

Both interpretations are correct; it's a matter of what your visual system focuses on. What interests me, looking at the comments on that post, is that while the vast majority see the alternating blinking, some people only see the disks blinking in unison. One possibility is that they are misunderstanding what they were supposed to see. But if there is some small percentage of people whose visual systems favor different grouping principles, that would be very interesting and useful in understanding perceptual grouping in the visual system.

So, if you only see the inner disks blinking in unison and don't get the alternation illusion, comment here or send an email to coglanglab_at_gmail.com.

Try out the illusion here.

Why are first-graders smarter than Chomsky?

Linguistics, it turns out, is very difficult. Although it's been over half a century since Chomsky led the charge to develop complete, generative grammars for languages (sets of rules that explain how to build grammatical sentences), success has been less than complete -- this despite the fact that children learn languages with ease. Why is it so difficult for a group of the world's most brilliant academics?

Here's a good explanation from Ray Jackendoff, in Foundations of Language:
It is useful to put the problem of learning more starkly in terms of what I like to call the Paradox of Language Acquisition: The community of linguists, collaborating over many decades, has so far failed to come up with an adequate description of a speaker's [knowledge] of his or her native language. Yet every normal child manages to acquire this f-knowledge by the age of ten or so, without reading any linguistics textbooks or going to any conferences. How is it that in some sense every single normal child is smarter than the whole community of linguists?

The answer proposed by the Universal Grammar hypothesis is that the child comes to the task with some [preconceptions] of what language is going to be like, and structures linguistic input according to the dictates (or opportunities!) provided by those expectations. By contrast, linguists, using explicit reasoning--and far more data from the world than the child--have a much larger design space in which they must localize the character of grammar. Hence, their task is harder than the child's: they constantly come face to face with the real poverty of the stimulus.
In other words, the idea is that linguists are too smart for their own good. They consider too many possibilities, and so there isn't enough data to decide between them. This is like trying to solve 3 simultaneous equations with 4 variables; if you remember your algebra, there is no unique solution. It's very similar to why philosophers can't figure out how it's possible, even in theory, to learn the meaning of a word.
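
To see the analogy concretely, here is a minimal sketch (the particular equations are made up for this example): three constraints on four unknowns leave a free parameter, so no amount of algebra singles out one answer.

    # Three equations in four unknowns (w, x, y, z):
    #   w + x = 2
    #   x + y = 3
    #   y + z = 4
    # Back-substitution leaves z free, so every choice of z yields a solution.
    for z in [0.0, 1.0, 2.5]:
        y = 4 - z
        x = 3 - y        # = z - 1
        w = 2 - x        # = 3 - z
        assert w + x == 2 and x + y == 3 and y + z == 4
        print(w, x, y, z)  # a different valid solution each time

On the Universal Grammar story, the child effectively starts with fewer unknowns, so the same data suffice.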

The relationship between neuroscience and psychology

There is a certain amount of heated argument within the behavioral sciences about the most appropriate way to study behavior. Cellular and systems neuroscientists tend to have no use for psychological methods, finding them inefficient and messy (when I interviewed for graduate school, one monkey physiologist told me that my research interests were a waste of time that would lead to nothing. Monkey physiology, on the other hand...). People who do more cognitive work often feel that while neuroscience is more exact and perhaps makes more concrete progress, it's progress in the wrong direction.

I recently came across an excellent and succinct explanation of why both methods are necessary:

Experimental psychology on both human subjects and animals is an essential part of the enterprise, for the obvious reason that accurate characterizations of psychological phenomena are necessary to guide the search for explanations and mechanisms. Trying to find a mechanism when the phenomenon is misdescribed or underdescribed is likely to be quixotic. Neurology is an essential part of the enterprise because it both provides important behavioral data on human subjects and hypothesizes connections between specific brain structures and behavior. Neuroscience is essential both to discover the functional capacities of neural components and because reverse engineering is an important strategy for figuring out how a novel device works.

Churchland, P.S., & Sejnowski, T.J. (1991). "Perspectives on cognitive neuroscience." In Lister & Weingartner (Eds.), Perspectives on Cognitive Neuroscience (pp. 3-23).


Misunderstood

In an effort to understand linguistics slightly better, I am reading Ray Jackendoff's Foundations of Language. He starts off the first chapter with the tale of woe of the modern linguist:

Language and biology provide an interesting contrast... People expect to be baffled or bored by the biochemical details of, say, cell metabolism, so they don't ask about them. What interests people about biology is natural history--strange facts about animal behavior and so forth. But they recognize and respect the fact that most biologists don't study that. Similarly, what interests people about language is its "natural history": the etymology of words, where language came from, and why kids talk so badly these days. The difference is that they don't recognize that there is more to language than this, so they are unpleasantly disappointed when the linguist doesn't share their fascination.
This passage sounded familiar. The psychologists I know spend a lot of time trying to decide how to answer the question, "What do you do?" While there is no agreed-upon response, everybody agrees that saying, "I am a psychologist," is guaranteed to lead to requests for advice about how to deal with somebody's crazy Aunt Maude. Saying "developmental psychology" will lead to requests for parenting advice.

My wife enjoys chronicling my own choices (for a while, I said cognitive neuroscience, then neurolinguistics, then cognitive science, and now psycholinguistics -- but never psychology). To turn things around, though: she gets tired of people assuming that just because she's studying law, she'll either chase ambulances or defend crooks, when in fact most lawyers probably never set foot in a courtroom.

It's interesting that I've heard very similar complaints from vocalists: "Nobody who has never studied the violin would consider themselves a great talent, but anybody who can make noise come out of their mouth thinks they can sing."

This leads me to wonder whether there are any professions whose members don't think they are widely misunderstood and don't feel ambushed at cocktail parties by well-meaning but clueless new acquaintances.

Volunteers needed

I am presenting the preliminary results from my Learning the Names of Things study a week from Monday. I still need more participants, though, so if you have not yet participated and have 4-5 minutes, please drop by.

It is actually amazing how many people are willing to spend their time doing these experiments, and I really have no right to complain. That said, I would get work done faster if I could recruit the numbers Marc Hauser's lab does (200,000 participants and counting). Does anyone have any ideas about how to do that?

Interview Daze

First-year graduate students in my program are in charge of organizing interviews for prospective graduate students. We were given notice last Friday; the interviews start this coming Tuesday. So it's been a busy week.

When I applied to PhD programs in Psychology the first time around, I didn't know there were interviews. Most department websites don't mention them, and the only people I knew who had been to graduate school recently were in other fields and didn't do interviews. So I applied to graduate school and went to Spain for the spring, and was very surprised when I started getting invitations to visit schools. A friend of mine recently told me she also had no idea interviews would be required. It turns out, in fact, that some schools do interviews and some do not. It is extremely difficult to find out which schools are which.

I bring up this story because I think it is emblematic of the graduate school admissions process, at least in psychology. Information is scarce, and the procedure varies considerably from school to school. I don't know whether knowing more about the process would help you get into a program, but it seems reasonable to assume it does. In that case, there would be a significant advantage for people already on the inside.

To make this concrete, suppose you want to get a PhD in psychology at Harvard. If you are an undergraduate at Stanford or Yale, it's very likely that your professors can tell you a lot about the admissions process at Harvard (which is quite different from that at Stanford or Yale, as it turns out), because there is a lot of cross-talk among those three schools. If you are an undergraduate at a regional public university, it's much less likely that you can get access to that kind of information.

Access to information may not translate into access to admissions. I certainly hope it does not. But, on the off-chance that it does, one goal of this blog is to share information about the admissions process to the extent that I can. If you are an aspiring student with questions, be sure to ask.

It's a small world

Recently, Grrlscientist has written a couple of posts about double-blind peer review (where the reviewers don't know who the authors are, in addition to the authors not knowing who the reviewers are). She hopes that more double-blind peer reviewing would diminish possible sex discrimination in peer review.

Aside from some controversy as to whether there actually is any prejudice against women in peer review, many commenters have asked whether blind peer review isn't just a facade. After all, it's a very, very small world, and we all know who everyone is. Not only can you usually tell who wrote a paper without looking at the authors' names (based on whom they cite, their research topic, and their theoretical perspective), it's not always difficult to figure out who the reviewers are once you read the review.

It's a small world in other ways, too. My section of the department here at Harvard (development) is currently interviewing 6 candidates for faculty positions and 4 prospective graduate students. The job candidates are from the following schools:

Duke, Yale, Max Planck, University of London, MIT, Stanford

The potential graduate students are from:

Columbia, Stanford, Yale, Johns Hopkins

This is a narrow enough set as it is, but notice the repetition of Stanford and Yale. We have particularly close ties with Yale. One of my office-mates is from Yale. One of our upper-level graduate students will be taking up a professorship in developmental psychology at Yale in the fall (and a good friend of mine, who is graduating from the vision lab this spring, will be taking up a post-doctoral fellowship at Yale as well).

Consider the job candidates for a different position in the department (cognition, brain and behavior):

Yale, MIT, Giessen, Yale, University of Chicago

It's not really surprising that it's such a tight-knit community, but it's still interesting to observe.

New York Times falls for mind/brain duality...again

Jack Shafer at Slate runs a periodic column where he calls newspapers to task for over-using anonymous sources. An example passage culled from a New York Times article:
Republicans close to the White House said Mr. Bush was the driver of the changes made so far, including the decision to ask Mr. Rove to focus primarily on the midterm elections.
Why, Shafer asks, do those "Republicans" need to be kept anonymous down to their number (are there 2? 3?)? Shafer feels this over-use of anonymous sources is at best getting in the way of informing the public, and at worst hiding people with ulterior motives.

As of this entry, I'm starting my own watchdog column: calling out newspapers that write inane articles espousing mind/brain duality. The latest offender is, coincidentally, The New York Times, which ran a disappointing article a few days ago called "My Cortex Made Me Buy It." It describes a recent study in which people sampled "cheap" and "expensive" wines (actually the exact same wines, just marked with different prices).
When they sampled the wines with lower prices, however, the subjects not only liked them less, their brains registered less pleasure from the experience.
It's important to consider what the alternative would have been: that subjects reported liking the cheaper wines less, but their brains registered the same amount of pleasure. What would that mean? One possibility is that the participants were lying: they liked both wines the same, but said they liked the more expensive ones more in order to look cultured.

There's another possibility. Dan Gilbert, who studies happiness, usually does so by simply asking people if they are happy. He doesn't worry much about people lying. He could use a physiological measure (like a brain scan, as was done in the above study), but he points out that the reason we think a particular part of the brain is related to happiness is because it correlates with people's self-reports of being happy. Using the brain scan is completely circular. Under this logic, if the brain scans fail to show more pleasure when drinking the expensive wine, it could be because the relevant areas of the brain have been misidentified.

A final alternative possibility is that the participants' immaterial souls liked the expensive wine better, but their brains didn't register a difference.

The Times piece discussed none of this.

Word meaning as a window into thought

Benjamin Whorf has perhaps the best name recognition in psycholinguistics, being known for the Whorfian Hypothesis: the idea that the particular language you learn constrains the way you think about the world.

This hypothesis has made its way into popular culture (or perhaps it predated Whorf). Many essays -- and sometimes large sections of books -- make a big deal of etymology. That is, the origin of a word is supposed to tell us something about culture. A popular example is that the Mandarin word for "China" means, literally, "Center Country." This is supposed to tell us something about how the Chinese view their place in the world.

Maybe it does, maybe it doesn't. But certainly in some cases etymology tells us nothing. Here's a quote from "Formal Semantics" by Gennaro Chierchia:
To make this point more vividly, take the word money. An important word indeed; where does it come from? What does its history reveal about the true meaning of money? It comes from Latin moneta, the past participle feminine of the verb moneo 'to warn/to advise.' Moneta was one of the canonical attributes of the Roman goddess Juno; Juno moneta is 'the one who advises.' What has Juno to do with money? Is it perhaps that her capacity to advise extends to finances? No. It so happens that in ancient Rome, the mint was right next to the temple of Juno. So people metonymically transferred Juno's attribute to what was coming out of the mint. A fascinating historical fact that tells us something as to how word meanings may evolve; but it reveals no deep link between money and the capacity to advise.
Back to Chinese. Another good example is the word for turkey: huoji. Literally, it means "fire chicken." Anyone who wants to make a story about how that explains the Chinese psyche is welcome to give it a shot.

The algebraic mind

The brain is a computational device. There may be some cognitive scientists out there who disagree with that statement, but I don't think there are many. There is much less agreement on what type of computational device it is.

One possibility is that the brain is a symbol-processing device very much like a computer. A computer can add essentially any two numbers using the same circuitry. It does not have one microchip for adding 1 + 1 and a different one for adding 2 + 2. It has a single algorithm that can be applied to any arbitrary number (assuming the computer can represent that number -- obviously there are numbers too large for any modern machine to handle).
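
To make "a single algorithm for any inputs" concrete, here is a toy sketch (my own illustration, not from any particular paper) of the grade-school carry procedure: the same few lines handle 1 + 1, 2 + 2, and any other pair of numbers, with no case-by-case circuitry.

    def add_digits(a, b):
        """Add two non-negative numbers given as digit lists (most significant
        digit first), using one carry rule regardless of which numbers they encode."""
        a, b = a[::-1], b[::-1]  # work from the least significant digit
        result, carry = [], 0
        for i in range(max(len(a), len(b))):
            total = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            result.append(total % 10)  # digit to keep
            carry = total // 10        # digit to carry
        if carry:
            result.append(carry)
        return result[::-1]

    assert add_digits([1], [1]) == [2]            # 1 + 1
    assert add_digits([9, 9], [1]) == [1, 0, 0]   # 99 + 1 = 100

Whether and how networks of neurons implement this kind of variable-based procedure is exactly what's at issue.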

One of the big mysteries of the brain is that it is unclear how to make a symbol-processing/algebraic device out of neurons. This has led many schools of thought, such as connectionism, to deny that the brain can do symbol-processing or works anything like a digital computer (see Marcus's The Algebraic Mind for some blowback). On the flip side, folks like Randy Gallistel have argued that if we don't know how to implement read/write memory in neurons (a related question), then there is a gaping hole in our knowledge about neurons.

This all comes to mind in relation to some work done in the last decade on barn owls. Barn owls locate their prey via both sight and sound, and neuroscientists have located the area of the brain where these two signals are combined. If you put prism goggles on a barn owl so that its vision is offset (e.g., everything looks like it's 10 degrees left of where it actually is), the two signals conflict at first, but eventually the neural map that represents location according to the ears shifts so that it's in sync with the visual map.

To a computer programmer, the obvious thing to do would be to just add 10 degrees to the auditory signals across the whole map. However, that's not what the brain does. This can be shown by putting barn owls into goggles that shift only part of the field of vision: only the auditory signals for that region of space shift. That strikes me as very non-algebraic in nature (not that a computer programmer couldn't achieve this effect, but why would she write that ability into the code? Keep in mind that barn owls didn't evolve to wear prism goggles).
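
Here is a toy contrast (my own illustration, not a model from the owl literature) between the two update rules: an algebraic fix applies one offset uniformly across the map, while what the owls appear to do shifts only the region whose visual feedback was displaced.

    # Start with an identity map from auditory cue location to perceived
    # location, in degrees of azimuth.
    original = {cue: cue for cue in range(-20, 21, 10)}

    def algebraic_update(percept_map, offset=10):
        """Apply one rule uniformly: every location shifts by the offset."""
        return {cue: loc + offset for cue, loc in percept_map.items()}

    def local_update(percept_map, distorted_cues, offset=10):
        """Shift only the cues whose visual feedback was displaced,
        leaving the rest of the map untouched."""
        return {cue: loc + offset if cue in distorted_cues else loc
                for cue, loc in percept_map.items()}

    print(algebraic_update(original))                      # whole map shifts
    print(local_update(original, distorted_cues={0, 10}))  # only part shifts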

That said, there's no reason the whole brain must compute things algebraically. Perceptual systems may be unusual in that respect. Still, since very little is known about how the brain computes anything, this example is very interesting.

For those interested in the barn owl details, check out:

Knudsen, E.I. (2002). Instructed learning in the auditory localization pathway of the barn owl. Nature, 417, 322-328.


Will neuroscience end responsibility?

As we learn more and more about the brain, it seems fewer and fewer people are responsible for their actions. You may be mean, ignorant or violent simply because of bad genes or a bad brain.

In Freedom Evolves, Daniel Dennett argues that this is not a perpetually sliding slope, leading to nobody being responsible for anything:

"The anxious mantra returns: 'But where will it all end?' Aren't we headed toward a 100 percent 'medicalized' society in which nobody is responsible, and everybody is a victim of one unfortunate feature of their background or another (nature or nurture)? No, we are not... People want to be held accountable. The benefits that accrue to one who is a citizen in good standing in a free society are so widely and deeply appreciated that there is always a potent presumption in favor of inclusion. Blame is the price we pay for credit, and we pay it gladly under most circumstances."
What does he mean by "benefits that accrue"? Put it this way: kleptomania is the impulsive desire to steal. I had a friend in high school who was a compulsive shoplifter, and I believe the disease is real. However, it doesn't matter whether you think kleptomania is a true medical condition or simply another symptom of Prozac Nation -- either way, you wouldn't put a kleptomaniac in charge of your store. More generally, people recognized as moral and responsible are likely to receive many advantages (more friends, better credit ratings, community awards, etc.).

I'm less sure what he means when stating that people "want to be held accountable." However, I do think the argument can be made that we will continue to hold people responsible for their actions. Acting morally leads to personal gain, but only if society at large recognizes and rewards good character (for instance, by employing people known to be honest and passing over thieves). Of course, anyone who does not distinguish between trustworthy and untrustworthy neighbors is not going to last long -- and it doesn't matter whether the untrustworthy neighbor has a "condition" or is simply "bad."

By this argument, bad behavior is always going to be punished. The question is how. Our growing understanding of neuroscience may change which bad behavior leads to jail time and which leads to medication, but it won't affect whether we hold people responsible for their actions. As Dennett and others have convincingly argued, morality is useful.

The impostor syndrome

The New York Times is carrying an interesting story about the Impostor Syndrome: the belief that any success you have enjoyed is due to luck. Many graduate students I know admit to feeling -- or, at least, admit that their friends feel -- that they are frauds and don't deserve to be in the program.

The Times article suggests that at least some self-described impostors really are impostors (phony phonies). Read on.

Understanding our own minds

Freud was wrong about most things, but one thing he was dead-on about was that we have at best limited access to our own minds. Here is an excellent quote from Daniel Dennett's Freedom Evolves:

For Descartes, the mind was perfectly transparent to itself, with nothing happening out of view, and it has taken more than a century of psychological theorizing and experimentation to erode this ideal of perfect introspectability, which we can now see gets the situation almost backward. Consciousness of the springs of action is the exception, not the rule, and it requires some rather remarkable circumstances to have evolved at all.
See also previous posts on related topics:
Your brain knows when to be afraid, even if you don't.
Free will.
Not your granddaddy's subconscious mind.

Pauses in speech

While I was off guest-posting elsewhere, I talked Kristina Lundholm, a PhD student in linguistics and also a speech & language pathologist, into guest-posting here. She knows a great deal more about speech errors and disfluencies than I do. This is her follow-up to my post about errors in speech:

For a long time, spoken language was seen as inferior to written language. Hesitations and pauses were seen as flaws in the production of speech. This way of looking at language and communication (associated with, e.g., Chomsky) assumes that there is an ideal way to deliver an utterance.

Today, we know that pauses, hesitations, and the like are a vital part of communication, not some sort of unnecessary interruption in the speech signal. Natural conversation is studied by conversation analysts, sociolinguists, sociologists, anthropologists, and more.

To understand why pauses are important, imagine trying to have a conversation with someone who never pauses. Would you get anything said? Pauses are necessary in communication; we need to breathe, think, and leave gaps where another person can take over. Pauses also make it easier for the listener to process and understand what we are saying.

Even when the people engaged in conversation understand the importance of pausing and pause in all the right places, that might not be enough. An agreement on pause length is key to a successful conversation. Pause length tolerance, i.e., how long a pause you tolerate before the silence seems unbearable and you feel you have to say something, varies between languages, sociolects, dialects, etc.

Because of this, you may experience what a friend of mine went through when he moved south to study: he had a high tolerance for pauses, which meant that if the pause in conversation was short, he didn't take his turn, since he felt he would be interrupting. As a result, he felt that his new friends, who never let him talk, were quite rude, and they thought he must be terribly shy, since he rarely spoke. The effect of different pause lengths has been verified by, for example, Scollon & Scollon and Deborah Tannen.

Pauses also influence how we perceive what is being said. If someone asks you for a ride home, your answer will be interpreted as more negative if you take longer to respond, even if the answer itself is positive; see e.g. Roberts et al.

The location of the pause is also meaningful: if the speaker does not want to be interrupted, it is wise to pause within a syntactic unit, for example before an important content word: "I want a (pause) green sweater." If you pause between syntactic units, chances are that your conversation partner will think you're finished and will start talking.

Now, about those uuuums and eeeers... There are a bunch of different names for those small units in communication: filler words, fillers, filled pauses, hesitation phenomena, disfluencies, etc. I prefer the term "filled pause," since I classify them as a sort of pause. Filled pauses have a lot of functions in spoken conversation. One is to signal to others that "even though I'm not saying anything in particular right now, I don't want anyone else to take over." Another is "difficult question; I have to think about that." Or a number of other things, depending on position, prosody, context, etc. Quite a lot of research has focused on filled pauses in spoken dialogue, but I don't know whether anyone has investigated filled pauses in written communication -- well, if not, someone should!

So, in conclusion: pauses are not just important; they may make or break a conversation. And in linguistics today, the spherical cow is not so spherical anymore, but is seen as the irregularly formed creature it is.

Guest-blogging on Retrospectacle