Field of Science

The DaVinci stereogram

A few posts ago, I described how to make stereograms. At the end of the post, I showed a second type of stereogram, in which an illusionary white box appears to float in front of a background of Xs, and I promised to explain how that one was done.

This type of stereogram, discovered by Nakayama and colleagues, is called a "DaVinci stereogram" in honor of the famous artist/engineer who worked out the logic centuries ago (though he didn't, to my knowledge, consider building any stereograms).

The idea works like this: Look at an object (such as your computer monitor). Your left eye can
"see around" the left side of the object a bit more than your right eye can, while your right eye can see a bit more around the right side than your left eye can. Each eye thus sees a sliver of background that is hidden from the other, and it turns out that this information alone is sufficient to induce a perception of depth.









Consider that final stereogram (reproduced here). In both images, there is a white box in the center. However, the left image (the one presented to the right eye, if you use the divergence method) has four extra Xs on the right side of the box, while the right image (the one presented to the left eye) has two extra Xs on the left side of the box. This results in the perception of a white box floating above the background.
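To play with this yourself, here is a rough sketch in Python (my own toy example with made-up dimensions, not the exact image from the post): start with a field of Xs, blank out a box in both images, then shrink the blank region on one side of each image, exposing "extra" Xs that only that eye can see.

```python
# A toy sketch of a da Vinci-style stereogram pair (my own example,
# not the exact image from the post). Each eye's image is a field of
# Xs with a blank box; shrinking the blank region on one side exposes
# "extra" Xs that only that eye can see (a monocular region).
ROWS, COLS = 9, 21
BOX_TOP, BOX_BOTTOM = 3, 5    # rows covered by the blank box
BOX_LEFT, BOX_RIGHT = 7, 13   # columns covered by the blank box

def make_image(extra_left, extra_right):
    """Blank out the box, minus `extra_left`/`extra_right` columns
    of Xs that remain visible to this eye only."""
    rows = []
    for r in range(ROWS):
        row = ["X"] * COLS
        if BOX_TOP <= r <= BOX_BOTTOM:
            for c in range(BOX_LEFT + extra_left, BOX_RIGHT + 1 - extra_right):
                row[c] = " "
        rows.append("".join(row))
    return rows

left_image = make_image(extra_left=0, extra_right=2)   # extra Xs on the right
right_image = make_image(extra_left=2, extra_right=0)  # extra Xs on the left

# Print the pair side by side; view in a fixed-width font.
for l, r in zip(left_image, right_image):
    print(l + "   " + r)
```

Print the output in a uniformly spaced font and fuse the two halves as described in the original stereogram post.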

Getting a job in psychology

Several friends are applying to be research assistants in psychology labs this coming academic year and have been asking me for advice. It occurred to me that some readers of this blog may be interested as well. With the caveat that this advice is based only on my experience and that of a few friends, here goes.

First, if you are considering a PhD in experimental psychology (by which I mean not psychotherapy), I recommend spending some time as a research assistant (either during college or after) before applying to graduate school. There are at least three reasons:

1) You'll almost certainly get into a better school if you have research experience.

2) You'll have a better sense of what type of psychology you want to study, as well as whether you really want to do research at all. Many people quit PhD programs, or graduate and then decide to do something else.

3) You'll probably be more productive in graduate school, because you'll come in with some valuable skills. This may help you get a leg up on the competition (or possibly prevent them from having a leg up on you).

Of course, many brilliant psychologists started graduate school with little or no background in psychology, much less research experience. But Einstein supposedly got bad grades in math, and that doesn't mean getting bad grades in math is a recommended strategy for becoming a physics genius.

So where are research assistant positions advertised? I have no idea. I got every RA position I've had (5, counting 2 in high school) by contacting a professor directly and asking if they had an opening. But I have noticed that professors sometimes advertise open positions on their websites.

Finally, it seems like RA positions are usually filled between late February and early April. So if you are interested, now is the time to apply.


---
PS If anybody else in the field has anything to add, please use the comments.

Calling all language learners

I looked at the calendar and realized that I have to present data from an experiment to the lab in a few weeks and to the department in about a month. I took a look, and I have nowhere near enough participants.

So if you have 5 minutes, please participate in this experiment. I've been running it for a while, so do be sure you haven't already participated. The experiment is called "Learning the names of things," and it involves listening to a person mention different objects. You have to figure out which object he is referring to. It's also the only experiment I've done which involves any sound.

You can find it here.

If you are wondering why I don't have enough participants, the answer is simple. There have actually been several versions of this experiment. The data from each version has been very helpful, but I haven't yet quite answered the question I set out to answer. Unfortunately, each version is similar enough to the older versions that it wouldn't be appropriate to test the same people over and over. If you previously participated and want to see what the new version looks like, you can do so, but do be sure to indicate that you have previously participated in the experiment when asked.

Should scientists drink beer?

Apparently not. The more beer you drink, the less you publish and the less your articles are cited...at least, if you are a Czech biologist.

Snake oil and Neuroscience

Readers of this blog know how I feel about neuroscience reporting (here, here and here). One consistent problem is that reporters enthusiastically relate findings that involve brain scans, while ignoring the original and groundbreaking behavioral work.

A truism in psychology, however, is to never trust your impressions of a situation. So often our intuitions (e.g., that the average American wouldn't torture an innocent bystander to death just because someone in a lab coat told them to) turn out to be completely incorrect. So I was very happy to hear that a group at Yale actually tested the hypothesis that people will believe basic behavioral findings (like the existence of cognitive dissonance) more if brain-related words are mentioned.

In brief, it appears that the average non-expert does, in fact, believe a finding more if there is a picture of the brain somewhere. However, students who have taken an introductory neuroscience class are not only immune to this effect, but actually find explanations that include references to brain anatomy less compelling. So perhaps this research explains not only why the average reader (and reporter) likes the typical neuroscience reportage, but also why people like me (and Dan Gilbert) dislike it.

Cognitive Daily has an excellent in-depth description of the article here.



Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E., & Gray, J.R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470-477.

Monkey language -- better every year

For many years we've been saying that monkey calls were non-decompositional. That is, you can't break them into parts, each of which has its own meaning (as you can do with this sentence, for instance).

New research suggests that these monkey calls are more complex than we thought. Click here to learn more.

Try this at home: Make your own stereogram

Have you ever wanted to make your own 3D movie? Your own Magic Eye Stereogram? This post will teach you to create (and see) your own 3D images.

Magic Eye Stereograms are a relatively new technology, but they grew out of the classic stereograms created in 1838 by Charles Wheatstone. For those of you who don't know what a stereogram is, the word broadly refers to a 3D-like image produced by presenting different images to each eye.

The theory is pretty straightforward. Focus on some object in your room (such as your computer). Now close one eye, then the other. The objects in your field of vision should shift relative to one another. The closer or farther from you they are (relative to the object you are focusing on), the more they should shift.

When you look at a normal photograph (or the text on this screen), this difference is largely lost. The objects in the picture are in the same position relative to one another regardless of which eye you are looking through. However, if a clever engineer rigs up a device so as to show different images to each eye in a way that mimics what happens when you look at natural scenes, you will see the illusion of depth.

For instance, she might present the drawing below on the left to your right eye, and the drawing on the right to your left eye:










If the device is set up so that each picture is lined up perfectly with the other (for instance, if each is in the center of the field of vision of the appropriate eye), you would see the colored Xs in the center at different depths relative to one another. Why? The green X shifts the most between the two images, so you know it is either the closest or the farthest away. Importantly, because it's farther to the left in the image shown to the right eye, it must be closer than the blue or red Xs.

You can demonstrate this to yourself using a pencil. Hold a pencil perfectly vertical a foot or two in front of your face. It should still look vertical even if you look with only one eye. Now, tilt the pencil so that the bottom part points towards your chest (at about a 45 degree angle from the floor). Close your right eye and move the pencil to the right or the left until the pencil appears to be perfectly vertical. Now look at the pencil with your right eye instead. It should appear to slope down diagonally to the left. That is exactly what is happening in the pictures above.

A device that would fuse these two images for you isn't hard to make, but it's even easier to learn how to fuse them simply by crossing your eyes. There are two ways of crossing your eyes -- making them point inwards towards your nose, and making them point outwards. One way will make the green X closer; one will make it farther away. I'll describe the first method, because it's the one I typically use.

Look at the two images and cross your eyes towards your nose. This should cause each of the images to double. What you want to do is turn those four images into three by causing the middle two to overlap. This takes some practice. Try focusing on the Xs that form the rectangular frames of the images. Make each of those Xs line up exactly with the corresponding X from the frame of the other image. If you do this, eventually the two images should fuse into a single image, and you will see the colored Xs in depth. One tip: I find this harder to do on a computer screen than in print, so you might try printing this out.

That is the basic technique. You should be able to make your own and play around with it to see what you can do. For instance, this example has a bar pointing up out of the page, but you can also make a bar point into the page. You also might try creating more complicated objects. If you want, you can send me any images you make (coglanglab_AT_gmail_DOT_com), and I will post them (you can try including them as comments, but that is tricky).

One final tip -- you'll need to use a font that has uniform spacing. Courier will work. Times will not.
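If you'd like to generate the raw material programmatically, here is a minimal sketch (my own toy example; the marker letters and dimensions are made up, not taken from the figures above). Each marker character is shifted horizontally between the left and right images, and the size of the shift controls the apparent depth when the pair is fused.

```python
# Generate a left/right image pair in which marker characters are
# shifted horizontally against a background of Xs. Markers that shift
# more between the two images appear at a different depth when fused.
WIDTH = 15

def row_with_marker(marker, pos):
    """A row of background Xs with `marker` at column `pos`."""
    chars = ["X"] * WIDTH
    chars[pos] = marker
    return "".join(chars)

# (marker, column in left image, column in right image):
# "A" shifts by 2 columns, "B" by 1, "C" not at all -- three depths.
markers = [("A", 7, 5), ("B", 7, 6), ("C", 7, 7)]

left = [row_with_marker(m, col_l) for m, col_l, _ in markers]
right = [row_with_marker(m, col_r) for m, _, col_r in markers]

# Print side by side; remember to view in a fixed-width font.
for l, r in zip(left, right):
    print(l + "   " + r)
```

Printing the output and fusing the two halves should put "A", "B", and "C" at three different apparent depths.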

Finally, here's another stereogram that uses a completely different principle. If you can fuse these images, you should see an illusory white box floating in front of a background of Xs. In a future post, I'll explain how to make these.



Sure faces are special, but why?

Faces are special. There appears to be a dedicated area of the brain for processing faces. Neonates just a day or two old prefer looking at pictures of faces to looking at non-faces.

This has led many researchers to claim humans are born with innate knowledge about faces. Others, however, have claimed that these data are not the result of nature so much as nurture. Pawan Sinha at MIT attached a video camera to his infant child and let the tape roll for a few hours. He found that faces were frequently the most salient objects in the baby's visual field, and (I'm working from memory of a talk here) also found that a computational algorithm could fairly easily learn to recognize faces. Similarly, a number of researchers have claimed that the brain area thought to be specialized for face detection is in fact simply involved in detecting any object for which one has expertise, and all humans are simply face experts.

Things had seemed to be at an impasse, but today, Yoichi Sugita from AIST spoke at both Harvard and MIT. The abstract itself was enough to catch everybody's attention:

Infant monkeys were reared with no exposure to any faces for 12 months. Before being allowed to see a face, the monkeys showed preference for human- and monkey faces in photographs. They still preferred faces even when presented in reversed contrast. But, they did not show preference for faces presented in upside-down. After the deprivation period, the monkeys were exposed first to human faces for a week. Soon after, their preference changed drastically. They preferred upright human faces but lost preference for monkey faces. Furthermore, they lost preference for human faces presented in reversed contrast. These results indicate that the interrelated features of the face can be detected without experience, and that a face prototype develops abruptly when flesh faces are shown.
Just to parse this: the monkeys were raised individually without contact with other monkeys. They did have contact with a human caregiver who wore a mask that obstructed view of the face. The point about not preferring upside down faces is important, as this is a basic feature of face processing.

This seems pretty decisive evidence for an innate face module in the brain, though one that requires some tuning (the monkeys' face preferences evolved with experience). However, Sugita apparently noted during the talk -- I heard this second-hand -- that perhaps the monkeys in question did in fact have some experience with faces prior to the face preference test; they could have learned by touching their own faces. This strikes me as a stretch, since that doesn't explain why they would become face experts.

Music on the Brain (on TV)

A few weeks ago, our building had a fire alarm. A friend who works on another floor speculated that maybe it was all the heat from the lights and cameras in her PI's office that day.

"Who was interviewing him?" I asked.

"I don't know," she replied. "There's always somebody interviewing him."

Our 11th floor has fewer media stars, but last week it was crawling with reporters. A magazine reporter was there most of the week interviewing people (I still don't know what magazine). Even more exciting, though, was the NBC camera crew.

Click here for the clip.

As far as science reporting goes, I'm afraid I have to admit it's uninspiring. But it was fun to see my friends and colleagues on the nightly news, and it couldn't have happened to nicer people.

You like video games, but does your brain?

According to CBC in Canada:
Men are more rewarded by video games than women on a neural level, which explains why they're more likely to become addicted to them.
In other words, men like video games more because their brains like them more. Since only one's brain can like or dislike something, this could be rewritten: Men like video games more because they like video games more.

It's hard to blame CBC entirely for this one. I haven't tracked down the article itself, but the abstract remarks:
Males showed greater activation and functional connectivity compared to females in the mesocorticolimbic system... These gender differences may help explain why males are more attracted to, and more likely to become "hooked" on video games than females.
This is hard to parse, and given that the authors work at Stanford Medical School, I'm inclined to give them the benefit of the doubt. However, the way this is phrased seems to have the natural order of investigation backwards. Men are more likely to be addicted to video games than are women. Given that they show these particular brain differences during video game playing, we can make some intelligent guesses as to what those parts of the brain do.

Once we understand those parts of the brain much, much better than we do today, we may actually have a good structural model that explains this gender difference. That may be what the authors of the study meant, and they may spell this out in the full article. However, CBC's statement that men are more likely to get addicted to video games because they are "more rewarded on the neural level," is both repetitious and obvious.

See the original CBC article here.

Anonymice run wild through science

I recently mentioned Jack Shafer's long-standing irritation at the over-use of anonymous sources in journalism. Sometimes the irritation is at using anonymous sources to report banalities. In my favorite column in that series (which has unfortunately been moribund for the last year or two), Shafer calls out anonymous sources whose identities are transparent. Why pretend to be anonymous when a simple Google search will identify you?

I had a similar question recently when reading the method section of a psychology research paper. Here is the first paragraph from the method section:
Sixteen 4-year-olds (age: M = 4,7; range = 4,1-4,11), and 25 college students (age: M = 18,10; range = 18,4-19,6) participated in this study. All participants were residents of a small midwestern city. Children were recruited from university-run preschools and adults were undergraduate students at a large public university.
Small midwestern city? Large public university? I could Google the two authors, but luckily the paper already states helpfully on the front page that both authors work at the University of Michigan, located in Ann Arbor (a small midwestern city). Maybe the subject recruitment and testing was done in some other university town, but that's unlikely.

This false anonymity is common -- though not universal -- in psychology papers. I'm picking on this one not because I have any particular beef with these authors (which is why I'm not naming names), but simply because I happened to be reading their paper today.

This brings up the larger issue of the code of ethics under which research is done (here are the regulations at Harvard). After some notable ethical lapses in the early days of human research (for instance, Dr. Waterhouse trying out the smallpox vaccine on children), it became clear that regulations were needed. As with any regulations, however, form often wins over substance. A lab I used to work at had a very short consent form that said something to the effect that in the experiment, you'll read words, speak out loud, and it won't hurt. This was later replaced with a multi-page consent form, probably at the request of our university ethics board, but I'm not sure. The effect was that our participants stopped reading the consent form before signing it. This was entirely predictable, and I think it is an example of valuing the form -- in particular, having participants sign a form -- over substance -- protecting research participants.

Since most of the research in a psychology department is less dangerous than filling out a Cosmo quiz, this doesn't really keep me up at night. However, I think it's worth periodically rethinking our regulations in light of their purpose.

Can you see this illusion?

Yesterday, Cognitive Daily posted a fairly compelling visual illusion. There is a white disk and a black disk. In the middle of each disk is a circle. The two circles go from black to white and back to black in sequence.

Normally, the rules of perceptual grouping would cause you to see the two smaller disks blinking in unison as being related. However, in this case, due to the smaller disks being inside larger disks, most people see the disks blinking out of sequence. That is, you interpret the scene as a hole appearing in the left disk, then in the right disk, then in the left disk.

Both interpretations are correct. It's a matter of what your visual system focuses on. What interests me, looking at the comments on this post, is that while the vast majority see the alternating blinking, some people only see the disks blinking in unison. One possibility is that they are misunderstanding what they were supposed to see. But if there is some small percentage of people whose visual systems focus on different grouping principles, that could be very interesting and useful in understanding perceptual grouping in the visual system.

So, if you only see the inner disks blinking in unison and don't get the alternation illusion, comment here or send an email to coglanglab_at_gmail.com.

Try out the illusion here.

Why are first-graders smarter than Chomsky?

Linguistics, it turns out, is very difficult. Although it's been over half a century since Chomsky sparked the charge to develop complete, generative grammars for languages (a set of rules that explain how to build grammatical sentences), success has been less than complete -- this despite the fact that children learn languages with ease. Why is it so difficult for a group of the world's most brilliant academics?

Here's a good explanation from Ray Jackendoff, in Foundations of Language:
It is useful to put the problem of learning more starkly in terms of what I like to call the Paradox of Language Acquisition: The community of linguists, collaborating over many decades, has so far failed to come up with an adequate description of a speaker's [knowledge] of his or her native language. Yet every normal child manages to acquire this f-knowledge by the age of ten or so, without reading any linguistics textbooks or going to any conferences. How is it that in some sense every single normal child is smarter than the whole community of linguists?

The answer proposed by the Universal Grammar hypothesis is that the child comes to the task with some [preconceptions] of what language is going to be like, and structures linguistic input according to the dictates (or opportunities!) provided by those expectations. By contrast, linguists, using explicit reasoning--and far more data from the world than the child--have a much larger design space in which they must localize the character of grammar. Hence, their task is harder than the child's: they constantly come face to face with the real poverty of the stimulus.
In other words, the idea is that linguists are too smart for their own good. They consider too many possibilities, and so there isn't enough data to decide between them. This is like trying to solve 3 simultaneous equations with 4 unknowns; if you remember your algebra, there is no unique solution. It's very similar to why philosophers can't figure out how it's possible, even in theory, to learn the meaning of a word.
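To make the algebra analogy concrete, here is a toy illustration (my own, with made-up equations): three equations in four unknowns can be satisfied by more than one assignment, so the data alone cannot single out "the" answer -- just as, on this view, the child-sized evidence cannot single out the grammar for the linguist.

```python
# Three equations in four unknowns: the constraints rule out many
# candidates but still leave more than one consistent solution.
def satisfies(x, y, z, w):
    """Check a candidate (x, y, z, w) against all three equations."""
    return (x + y + z + w == 10 and
            x - y + z - w == 2 and
            x + y - z + w == 4)

# Two different candidates fit the same three equations equally well:
print(satisfies(3, 0, 3, 4))  # True
print(satisfies(3, 2, 3, 2))  # True
```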

The relationship between neuroscience and psychology

There is a certain amount of heated argument within the behavioral sciences about the most appropriate way to study behavior. Cellular and systems neuroscientists tend to have no use for psychological methods, finding them inefficient and messy (when I interviewed for graduate school, one monkey physiologist told me that my research interests were a waste of time that would lead to nothing. Monkey physiology, on the other hand...). People who do more cognitive work often feel that while neuroscience is more exact and perhaps makes more concrete progress, it's progress in the wrong direction.

I recently came across an excellent and succinct explanation of why both methods are necessary:

Experimental psychology on both human subjects and animals is an essential part of the enterprise, for the obvious reason that accurate characterizations of psychological phenomena are necessary to guide the search for explanations and mechanisms. Trying to find a mechanism when the phenomenon is misdescribed or underdescribed is likely to be quixotic. Neurology is an essential part of the enterprise because it provides both important behavioral data on human subjects and hypothesizes connections between specific brain structures and behavior. Neuroscience is essential both to discover the functional capacities of neural components and because reverse engineering is an important strategy for figuring out how a novel device works.

Churchland & Sejnowski (1991). "Perspectives on cognitive neuroscience." In Lister & Weingartner (Eds.), Perspectives on Cognitive Neuroscience, pp. 3-23.


Misunderstood

In an effort to understand linguistics slightly better, I am reading Ray Jackendoff's Foundations of Language. He starts off the first chapter with the tale of woe of the modern linguist:

Language and biology provide an interesting contrast... People expect to be baffled or bored by the biochemical details of, say, cell metabolism, so they don't ask about them. What interests people about biology is natural history--strange facts about animal behavior and so forth. But they recognize and respect the fact that most biologists don't study that. Similarly, what interests people about language is its "natural history": the etymology of words, where language came from, and why kids talk so badly these days. The difference is that they don't recognize that there is more to language than this, so they are unpleasantly disappointed when the linguist doesn't share their fascination.
This passage sounded familiar. The psychologists I know spend a lot of time trying to decide how to answer the question, "What do you do?" While there is no agreed-upon response, everybody agrees that saying, "I am a psychologist," is guaranteed to lead to requests for advice about how to deal with somebody's crazy Aunt Maude. Saying "developmental psychology" will lead to requests for parenting advice.

My wife enjoys chronicling my own choices (for a while, I said cognitive neuroscience, then neurolinguistics, then cognitive science, and now psycholinguistics -- but never psychology). To turn things around, though, she gets tired of people assuming that just because she's studying law, she'll either chase ambulances or defend crooks, when in fact most lawyers probably never set foot in a courtroom.

It's interesting that I've heard very similar complaints from vocalists: "Nobody who had never studied the violin would consider themselves a great talent, but anybody who can make noise come out of their mouths thinks they can sing."

This leads me to wonder whether there are any professions whose members don't think they are widely misunderstood and don't feel ambushed at cocktail parties by well-meaning but clueless new acquaintances.