Field of Science

Building a Better Spell-Checker


Today Slate carried an interesting piece about spell-checker technology by Chris Wilson. A spell-checker typically works in the obvious way: a word you type is compared against a dictionary. The question is where the dictionary comes from. If you use a lot of proper nouns -- or, in my case, a lot of technical jargon -- you risk the red-squiggly wrath of Microsoft Word.

It's been clear to me for a while that search engines work from much larger lexicons than do word processors. The article fills in some detail as to how they do this (not surprisingly, it involves some of the sophisticated statistics that have become so important in computer approaches to language). Read the article here.
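Wilson's two ingredients -- a lexicon and word-frequency statistics -- can be made concrete with a little code. This is a bare-bones, Norvig-style sketch, not the algorithm from the article; the toy corpus and all names in it are my own invention.

```python
from collections import Counter

# Toy corpus standing in for the huge lexicons search engines build;
# word frequencies let us prefer common corrections over rare ones.
CORPUS = "the cat sat on the mat the cat ate the rat".split()
WORD_FREQ = Counter(CORPUS)

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    swaps = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Return the most frequent known word within one edit, else the word itself."""
    if word in WORD_FREQ:
        return word
    candidates = edits1(word) & WORD_FREQ.keys()
    return max(candidates, key=WORD_FREQ.get) if candidates else word

print(correct("teh"))   # "the" (one swap away, and frequent in the corpus)
print(correct("caat"))  # "cat" (one deletion away)
```

A real system would build WORD_FREQ from billions of words of web text -- which is exactly the advantage search engines have over word processors -- and would consider candidates more than one edit away.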


(image borrowed from eduscapes.com)

Science in the New Administration

During the Fall, I wrote a number of posts (starting with this one) arguing that science policy in the US was in poor shape, and that Obama looked like the more likely of the candidates to turn that around.

While I focused more on support for basic science, I was certainly concerned about how policy-makers use science to support their policy decisions. So far, all signs from the incoming Obama administration continue to be promising.

Children giving orders to Mom and Dad

During the last month, I have been studying requests. Requests are interesting because there are many ways of making them, including commands ("Give me that"), requests ("Please give me that"), indirect requests ("Could you give me that?"), and hints ("Wouldn't it be nice if I had one of those?").

I just ran across a description of a fairly old line of research that is worth quoting directly:

Studies of role playing (Andersen 1978, 1989; Corsaro 1985; Mitchell-Kernan and Kernan 1977) have made it very clear that children make use of the symbolic value of characters' control act types and forms. In Andersen's study, children of four and five were assigned specific roles through puppets, and she played a complementary role. This allowed her to see, within each child, the representation of contrasting roles, such as Father and Mother and Child, Doctor and Nurse and Patient. Fathers received fewer orders but gave them more, and received few imperatives, but gave them. Doctors were the same. The Child addressed six times as many imperatives to Mothers as to Fathers, and eight times as many 'let's' forms to Fathers as to the Mothers.
It would appear that children not only think fathers outrank mothers on the dominance hierarchy but that they seem to think they themselves outrank their mothers. Why this is I leave to others to speculate on.

Ervin-Tripp, S., Guo, J., & Lampert, M. (1990). Politeness and persuasion in children's control acts. Journal of Pragmatics, 14, 307-331.

(Photo borrowed from http://www.radconsultancy.com/).

Mind and Brain

In periodic posts, I've been trying to lay out the modern scientific consensus on the mind/brain problem, with mixed success. If I had come across the following passage, from Ray Jackendoff's Language, Consciousness, Culture, a bit earlier, I might have saved some trouble, since I feel it is one of the clearest, most concise statements on the topic I have seen:

The predominant view is a strict materialism, in which consciousness is taken to be an emergent property of brains that are undergoing certain sorts of activity.

Although the distinction is not usually made explicit, one could assert the materialist position in either of two ways. The first would be 'methodological materialism': let's see how far we can get toward explaining consciousness under materialist assumptions, while potentially leaving open the possibility of an inexplicable residue. The second would be 'dogmatic materialism,' which would leave no room for anything but materialist explanation. Since we have no scientific tools for any sort of nonmaterialist explanation, the two positions are in practice indistinguishable, and they lead to the same research...

Of course, materialism goes strongly against folk intuition about the mind, which concurs with Descartes in thinking of the conscious mind as associated with a nonmaterial 'soul' or the like... The soul is taken to be capable of existence independently of the body. It potentially survives the death of the body and makes its way in the world as a ghost or a spirit or ensconced in another body through reincarnation... Needless to say, most people cherish the idea of being able to survive the death of their bodies, so materialism is more than an 'astonishing hypothesis,' to use Crick's (1994) term: it is a truly distressing and alienating one. Nevertheless, by now it does seem the only reasonable way to approach consciousness scientifically.

CogLangLab in Russian

One of the advantages of posting experiments on the Web rather than running them in a lab is that it makes it easier to recruit participants who don't happen to live near the lab.

Two years ago, I was testing an idea in the literature about the differences between reading an alphabetic script like English vs. reading a character-based script like Chinese. Although there are a fair number of Chinese-speakers living in Cambridge, it was still a lot of work to recruit enough participants. When I finally do the follow-up study, I'll post it on the Web.

In the meantime, I have a new experiment in Russian. Because America produces the bulk of the world's scientific research (for now), much of the work on language has focused on English. Periodically, it's a good idea to check other languages and make sure what we know about English generalizes (or doesn't). And so was born

Угадай кто сликтопоз (roughly, "Guess who the sliktopoz is")

It takes about 5 minutes to complete. Participate in it by clicking here.

More amazing birds

I've been hearing for some time that starlings are remarkable vocal learners. For a brief time, there was a starling lab in our building.

I came across this video on grrlscientist's blog. I'm not sure if "amazing" or "creepy" is the right response.


Are elders better scientists?

A recent paper, discussed in a recent issue of Nature, found that across disciplines, professors in their 50s and 60s published about twice the number of papers each year as professors in their 30s. This is taken in the article as evidence that older professors can be very productive.

Nature allows readers to comment on news briefs, and the comments raised the same issues I had. Here are the first two, for instance:

They don't seem to consider that older professors have larger research groups, i.e. more underlings to actually write the papers. Perhaps a better photo to illustrate the story would be the aged professor in their office wielding a red pen over their students' manuscripts.

Well, the older professors are also more established and have more connections, and therefore can participate in both small and large collaborative projects. No offense, but this survey only seems to prove an already obvious point.
Basically, older faculty tend to not only have more graduate students and post-docs, they also tend to have broad collaboration networks. This is not to say that older researchers are not productive, or that even less-productive older researchers aren't valuable members of the community, just that these data seem hard to interpret.

Language Wars

I was struck by a comment to a post a while back on Cognitive Daily:

It's "DOES the use of hand gestures." Please, pay attention; grammar matters. "The use of hand gestures" is the subject, and it is singular.

Grammar matters?

A certain segment of the population gets very worked up about "correct usage" of language. As a scientist, I understand the difference between "standard" and "non-standard" language, and why one might care, as an educator, about the standard. Language is most useful when everybody understands one another (cf. the Tower of Babel). This is why the standardization of spelling was such an important -- and relatively recent -- achievement.

However, the people who say, "pay attention; grammar matters" seem to be concerned with something else entirely. I can't say for sure what this poster cared about, but most that I know believe that without proper language, one cannot have proper thoughts. Thus, if we could make everybody produce perfectly-diagrammable sentences, everyone would finally think right, too.

Really?
To actually prove this contention, you would have to do a controlled experiment. Find two people who speak with "poor" grammar and have similarly sloppy thinking, teach one the correct grammar, and see if that person now thinks more clearly than the uneducated speaker.

To the best of my knowledge, no such experiment has been done -- no doubt partly because scientists as a group seem to reject such thinking altogether. For one thing, you would have to define "correct grammar," which is a priori impossible to do. The only known way to determine if a sentence is grammatical is to query a native speaker of the language. That's it. There are no other methods.

So, now suppose we have two people (for instance, Henry Higgins and Eliza Doolittle) who disagree as to whether a sentence is grammatical. How do we decide between the two of them? Typically, most people for whatever reason side with the wealthier and more politically powerful of the two (in this case, Henry Higgins).

That doesn't sound very democratic. So we could take a poll. Typically, you'll find that one judgment is more common than another. But now we have only defined a standard: not necessarily a "correct" judgment. Moreover, these differences in judgments often vary as a function of where you live. As I understand it, there are parts of the South where most people will agree that you simply can't refer to a group of people as "you" -- "y'all" is the correct term.

A war of words
If it is the case that there is no evidence that "correct grammar" helps people think more correctly, and that this is because there is no such thing as correct grammar -- and I assure you, there isn't -- then why do people get so hung up on it?

First, you might answer that most people live their lives just fine without ever thinking about correct and incorrect grammar. I suspect that is false. Much hay has been made about Palin's "mangling" of the English language, some of which is valid, but much of which is due to the fact that she speaks a nonstandard dialect. It has been remarked by more than one Southerner that Yankees think they are dumb just because of their accent. If you think you've never made such a judgment, I ask you: have you really never assumed someone with a West Virginian accent was dumb? If you haven't, then at least accept that even babies prefer people who speak with the local standard accent (note that somewhat older children may actually prefer a person with a locally high-status accent rather than their own accent).

I've heard it claimed that wars have been fought over linguistic differences, but I couldn't think of any obvious examples (please comment away if you have one). Still, I think the evidence is compelling that people really, really care about accent and language use, and this goes beyond a belief in the empirical claim that right language leads to right thoughts. This runs deeper. Hopefully we will some day understand why.

Who are you calling a neuroscientist: Has neuroscience killed psychology?

The Chronicle of Higher Education just produced a list of the five young scholars to watch who combine neuroscience and psychology. The first one listed is George Alvarez, who was just hired by Harvard.

Alvarez should be on anybody's top five list. The department buzzed for a week after his job talk, despite the fact that many of us already knew his work. What is impressive is not only the quantity of his research output -- 19 papers at last count, with 6 under review or revision -- but the number of truly ground-breaking pieces of work. Several of his papers have been very influential in my own work on visual working memory.

He is also one of the best exemplars of classical cognitive psychology I know. His use of neuroscience techniques is minimal, and currently appears to be limited to a single paper (Batelli, Alvarez, Carlson & Pascual-Leone, in press). Again, this is not a criticism.

Neurons vs. Behavior

This is particularly odd in the context of the attached article, which tries to explore the relationship between neuroscience techniques and psychology. Although there is some balance, with a look at the effect of neuroscience in draining money away from traditional cognitive science, I read the article as promoting the notion that the intersection of neuroscience and psychology is not just the place to be; it's the only place to be.

Alvarez is one of the best examples of the opposite claim: that there is still a lot of interesting cognitive science to be done that doesn't require neuroimaging. I should point out that I say this all as a fan of neuroscience, and as somebody currently designing both ERP and fMRI experiments.

EEG vs. fMRI

One more thing before I stop beating up on the Chronicle (which is actually one of my favorite publications). The article claims that EEG (the backbone of ERP) offers less detailed information about the brain in comparison with fMRI. The truth is that EEG offers less detailed information about spatial location, but its temporal resolution is far greater. If the processes you are studying are lightning-fast and the theories you are testing make strong claims about the timing of specific computations, fMRI is not ideal. I think this is why fMRI has had less impact on the study of language than it has in some other areas.

For instance, the ERP study I am working on looks at complex interactions between semantic and pragmatic processes that occur over a few hundred milliseconds. I have seen some very inventive fMRI work on the primary visual cortex that managed that kind of precision, but it is rare (and probably only succeeded because the layout of the visual areas of the brain, in contrast with the linguistic areas, is fairly well-established).

Talking in New Tongues -- How Easy is It?

Today's post is written by a guest, Kelly Kilpatrick.

It’s a diverse world we live in, where thousands of languages vie with each other to exist and flourish. Some are more widely spoken than others, and some are dying out even as I write this. We are born knowing only one language – that of tears and noises. And as we grow, we’re introduced to the language spoken by those who surround us, picking up bits and pieces as we pass year after year. Languages come easily to some of us, while others have to work harder than the rest to master a different tongue. But there are a few circumstances when picking up a new tongue is easy, and that’s:

• When you’re young: Children tend to learn new languages faster than adults, probably because their brains are still developing. The best time to learn a new language is when you’re a child, so if you want your kids to excel in a language besides their mother tongue, it’s best to get them started as early as possible.

• When two languages are spoken at home: When both parents speak different languages, children tend to pick up both tongues pretty fast, especially when both are spoken with the same degree of frequency.

• When you live in a foreign country: Speaking your mother tongue at home and a foreign language outside -- at school, college or work -- helps you become fluent in both. You pick up a new language quickly when everyone around you understands and speaks only that particular tongue, since gestures get you only so far.

• When you work with people of other cultures: Working in a multicultural environment means you get to interact with people of different races from various countries. If you hang around them long enough, you tend to pick up certain terms and slang expressions of their mother tongue. You may even be able to understand what they say even if you’re not able to talk as fluently as they do.

• When you’re forced to: Imagine having to go to another country to work or live amongst a different people; you must learn the language as fast as you can or you’re going to find things extremely difficult. Conditions like these are ideal in encouraging your brain to learn fast since your survival depends on your new ability.

While it’s easy to learn how to speak a new language, it’s much harder to master the written form of many scripts, especially calligraphic ones like those of Chinese, Arabic and many other Asian languages. Most Western tongues use the Latin alphabet, so if you know the pronunciations and spellings, you’re good to go. But it takes years and years of practice to get the hang of calligraphic scripts. Learning a new language can be an exercise that’s both fun and useful; go ahead, try taking on a new tongue today.

----
This post was contributed by Kelly Kilpatrick, who writes on the subject of online colleges. She invites your feedback at kellykilpatrick24 at gmail dot com.

For previous posts on the subject of language-learning, click here, here, here, here or here.

Galileo -- Smarter than you thought

It is often said of cognitive scientists that we have, as a group, a memory that stretches back only about 10 years. This is for reasons both good and bad. Methods change and improve constantly, rendering much of the older literature irrelevant. Then there is the fact that there is so much new work, it's hard to find time to read the old.

This is a shame, because some of the really old work is impressive for its prescience. A recent issue of Trends in Neurosciences carried an article on Galileo's work on perception. Most people then -- and probably most people now -- conceived of the senses as passing along an accurate representation of the world to your brain. We now know the senses are plagued by illusions (many of them actually adaptive).

Galileo was on to this fact. His study of the moon proved that perceptions of brightness are constantly subject to illusion. More generally, he noted -- contrary to the popular view -- that much of what we sense about the world is in a real sense an illusion. Objects exist, but colors and tastes in an important sense do not. It's worth presenting a few of the quotes from the article:

I say that, as soon as I conceive of a piece of matter, or a corporeal substance,...I do not feel my mind forced to conceive it as necessarily accompanied by such states as being white or red, bitter or sweet, noisy or quiet, or having a nice or nasty smell. On the contrary, if we were not guided by our senses, thinking or imagining would probably never arrive at them by themselves. This is why I think that, as far as concerns the object in which these tastes, smells, colours, etc., appear to reside, they are nothing other than mere names, and they have their location only in the sentient body. Consequently, if the living being were removed, all these qualities would disappear and be annihilated.

see also:

A wine's good taste does not belong to the objective determinations of the wine and hence of an object, even of an object considered as appearance, but belongs to the special character of the sense in the subject who is enjoying this taste. Colours are not properties of bodies to the intuition of which they attach, but are also only modifications of the sense of sight, which is affected in a certain manner by light.


Marco Piccolino & Nicholas J. Wade (2008). Galileo Galilei's vision of the senses. Trends in Neurosciences, 31 (11).

Another language blog

My favorite language blog remains Language Log. However, I was informed of a very interesting blog on language. Like Language Log, its focus is not empirical research (as is the focus here). But the group of authors do regularly hit on interesting phenomena in language and have insightful things to say about them. I recommend that you check it out.

Do Bullies like Bullying?

Although Slate is my favorite magazine, and usually the first website I check each day, I've been known to complain about its science coverage, which typically lacks the insight of its other features. A much-too-rare exception is the occasional article by Daniel Engber (full disclosure: I have attempted, unsuccessfully, to convince Engber, a Slate editor, to run articles by me in the past).

Yesterday, he wrote an excellent piece about a recent bit of cognitive neuroscience looking at bullies and how they relate to bullying. Researchers scanned the brains of "bullies" while they viewed videos of bullying and reported that pleasure centers in the brain activated.

In a cheeky fashion typical of Slate, Engber questions the novelty of these findings:

Bullies like bullying? I just felt a shiver run up my spine. Next we'll find out that alcoholics like alcohol. Or that overeaters like to overeat. Hey, I've got an idea for a brain-imaging study of child-molesters that'll just make your skin crawl!
Obviously, I was a sympathetic reader. But Engber does not stop there:

OK, OK: Why am I wasting time on a study so lame that it got a write-up in the Onion? Hasn't this whole fMRI backlash routine gotten a bit passé?
Engber goes on to detail a number of limitations to the study, including how the kids were defined as "bullies" (some appear to be rapists, for instance) and also how "pleasure center" was defined (the area in question is also related to anxiety, so one could reasonably argue bullies find bullying worrisome, not pleasurable).

The second half of the article is a plea for better science reporting, one that I hope is widely read. Read it yourself here.

How the Presidential Campaign Changed the English Language

Languages change over time, which is why you shouldn't take seriously any claims about this language being older than the other, or vice versa. A language is only old in the same sense that a farmer can say, "I've had this axe for years. I've only changed the handle twice and the head three times."

Language change is probably slowed these days by stasis-inducing factors like books. However, rapid communication means that new phrases or ways of speaking can be disseminated with lightning speed. Here is an interesting article about the effect McCain and Palin's "drill, baby, drill" has had on the English language.

A Bush-administration flunky's unfortunate statement that reporters -- but not members of the Bush administration -- are members of "what we call the reality-based community" led to an interesting shift in the way Progressives speak. The compound adjective "reality-based" has become part inside joke and part simply a new word. I suspect "real America" will similarly entrench itself in the English language.

Don't blink, you'll lose the election!

Sarah Palin has been clear on one subject: You can't blink. While people argue about whether this is a good administrative philosophy, there is now actual scientific evidence bearing on whether it is good campaign strategy.

The International Journal of Psychophysiology recently published an abstract that claims that from 1960-2004, the US presidential candidate who blinked most during the debates got fewer votes than his opponent in every election. For those counting, that is every election which has featured televised debates.

The point of the abstract, interestingly, is not to predict campaign outcomes. The point was to study eyeblinks. Specifically, there are hypotheses about what elevated rates of blinking might suggest, such as a lack of focus or a negative mental state. The question the researchers were asking was whether observers pick up on eyeblink rates and make judgments or predictions based on them. This *might* suggest that they do.

It's important to note that this is a published abstract, not a full paper, so it is difficult to evaluate the methods used, though presumably they involved counting eyeblinks.

On more psychologists in Congress


Dennis Shulman is an ordained rabbi with a clinical psychology Ph.D. from Harvard. He is the New York Times's choice for New Jersey's 5th district. In the interest of greater representation of psychologists in Congress, he's mine, too.

Still, I wouldn't mind if a psycholinguist ran for Congress.

Physics is for wimps

Matt Springer may not have been throwing down the gauntlet in his Oct. 21 post, but I'm picking it up. In a well-written and well-reasoned short essay, he lays out just what is so difficult about the study of consciousness:

PZ Myers, as is his wont, recently wrote here that after his death he will have ceased to be. In other words, his experience of consciousness will have ended forever. Can we test this?
He goes on to describe some possible ways you might test the hypothesis. It turns out it is very difficult.

[PZ Myers] could die and then make the observation as to whether or not he still existed. If he still did he'd be surprised, but at least he'd be able to observe that he was still somehow existing. If he didn't still exist, he's not around to make the observation of his nonexistence. So personal experimentation can't verify his prediction.
Springer goes through some possible ways one might use neuroscience to test the hypothesis. None of them are very good either. In the end, he concludes:

Where am I going with this? Nowhere, that's the point. Clean experimental testability is why I like physics.

Now, I like physics, too. I almost majored in it. But I like cognitive science more for precisely this reason: developing the right experiment doesn't just take knowing the literature or being able to build precision machinery, though both help. What distinguishes the geniuses in our field is their ability to design an experiment to test something nobody ever thought was testable. (After that, the engineering skill comes in.)

Hands thrown up.

Many people threw up their hands at answering basic questions -- how many types of light receptors do we have in our eyes? how fast does a signal travel down a nerve cell? ("instantaneously" was one popular hypothesis) -- until Hermann von Helmholtz designed ingenious behavioral experiments long before the technology was available to answer those questions (and likely before anyone knew such technology would be available).

However, while Helmholtz pioneered brilliant methods for understanding the way the adult mind works, he declared it impossible to ever know what a baby was thinking. His methods wouldn't work with babies, and he couldn't think of any others. A hundred years later, however, researchers like Susan Carey, Liz Spelke and others pioneered new techniques to probe the minds of babes. Spelke managed to prove that babies only a few months old have basic aspects of object perception in place. But Spelke herself despaired of ever testing certain aspects of object perception in newborns, until a different set of researchers (Valenza, Leo, Gava & Simion, 2006) devised an ingenious experiment that ultimately proved we are born with the ability to perceive objects (not just a blooming, buzzing confusion).

"I study dead people, everywhere."

I'm not saying I know how to test whether dead people are conscious. I'm still stumped by much easier puzzles. But a difficult question is a challenge, not a reason to avoid the subject.

Nature Magazine endorses Obama (but not because of science policy)

Nature Magazine's latest issue, just published online, endorses Obama. Interestingly, this is not because of "any specific pledge to fund some particular agency or initiative at a certain level." Instead, the editorial emphasizes the contrast in the ways the two candidates reach decisions:

On a range of topics, science included, Obama has surrounded himself with a wider and more able cadre of advisers than McCain. This is not a panacea. Some of the policies Obama supports -- continued subsidies for corn ethanol, for example -- seem misguided. The advice of experts is all the more valuable when it is diverse: 'groupthink' is a problem in any job. Obama seems to understands [sic] this. He tends to seek a range of opinions and analyses to ensure that his opinion, when reached, has been well considered and exposed to alternatives. He also exhibits pragmatism -- for example in his proposals for health-care reform -- that suggests a keen sense for the tests reality can bring to bear on policy.

Some will find strengths in McCain that they value more highly than the commitment to reasoned assessments that appeals in Obama. But all the signs are that the former seeks a narrower range of advice. Equally worrying is that he fails to educate himself on crucial matters; the attitude he has taken to economic policy over many years is at issue here. Either as a result of poor advice, or of advice inadequately considered, he frequently makes decisions that seem capricious or erratic.

The power of because

To ask for a dime just outside a telephone booth is less than to ask for a dime for no apparent reason in the middle of the street.
-Penelope Brown & Stephen Levinson, Politeness

The opening quote seems to be true. It raises the question of why, though. An economist might say a gift of 10 cents is a gift of 10 cents: you are out 10 cents no matter what the requester's reason. So why should it matter?

The power of because?
Empirically, in a well-known experiment, Ellen Langer and colleagues showed that 95% of people standing in line to use a copy machine were willing to let another cut in line as long as the cutter offered a reason, even if that reason was inane (e.g. "because I have to make copies.")

The explanation given by Langer and colleagues was that people are primed to defer to somebody who provides a reason. Thus, the word "because" can, essentially in and of itself, manipulate others. This not only causes us to give money to people who need it to make a phone call, but to simply give money to anybody who gives a reason.

I haven't been able to find the original research paper -- it seems to have been reported in a book rather than in a published article -- so I don't know for sure exactly what conditions were used. However, none of the media reports I have read (such as this one) mention perhaps the most important control: a condition in which the cutter gives no excuse and does not use the word "because."

What are other possible explanations?
Other possible explanations are that people are simply reluctant to say 'no,' especially if the request is made in earnest.

There are a couple reasons this could be true. People might be pushovers. They might also simply have been taught to be very polite.

What strikes me as more likely is that most people avoid unnecessary confrontation. Confrontation is always risky. It can escalate into a situation where somebody gets hurt. Certainly, violent confrontations have been started over less than conflicting desires to use the same copier.

Speculation

None of these speculations, however, explain the opening quote. Perhaps there is an answer out there, and if anybody has come across it, please comment away.

A vote for McCain is a vote against science

Readers of this blog know that I have been skeptical of John McCain's support for science. Although he has said he supports increasing science funding, he appears to consider recent science funding budgets that have not kept pace with inflation to be "increases." He has also since called for a discretionary spending freeze.

In recent years vocally anti-science elements have hijacked the science policies of the Republican party -- a party that actually has a strong history of supporting science -- so the question has been where McCain stands, or at least which votes he cares about most. The jury is still out on McCain, but Palin just publicly blasted basic science research as wasteful government spending.

The project that she singled out, incidentally, appears to be research that could eventually lead to new treatments for autism. Ironically, Palin brought up this "wasteful" research as a program that could be cut in order to fully fund the Individuals with Disabilities Education Act.

Become a Phrase Detective: A new, massive Internet-based language project

A typical speech or text does not consist of a random set of unrelated sentences. Generally, the author (or speaker) starts talking about one thing and continues talking about it for a while. While this tends to be true, there is typically nothing in the text that guarantees it:

This is my brother John. He is very tall. He graduated from high school last year.
We usually assume this is a story about a single person, who is tall, a recent high school graduate, named John, and the speaker's brother. But it could very well have been about three different people. Although humans are very good at telling which part of a story relates to which other part, it turns out to be very difficult to explain how we know. We just do.

This is a challenge both to psychologists like myself, as well as to people who try to design computer programs that can analyze text (whether for the purposes of machine translation, text summarization, or any other application).

The materials for research

A group at the University of Essex put together an entertaining new Web game called Phrase Detectives to help develop new materials for cutting-edge research into this basic problem of language. Their project is similar to my ongoing Dax Study, except that theirs is not so much an experiment as a method for developing the stimuli.

Phrase Detectives is set up as a competition between users, and the result is an entertaining game that you can participate in more or less as you choose. Other than its origins, it looks a great deal like many other Web games. The game speaks for itself and I recommend that you check it out.

What's the point?

Their Wiki provides some useful details as to the purpose of this project, but as it is geared more towards researchers than the general public, it could probably use some translation of its own. Here's my attempt:
The ability to make progress in Computational Linguistics depends on the availability of large annotated corpora...
Basically, the goal of Computational Linguistics (and the related field, Natural Language Processing) is to come up with computer algorithms that can "parse" text -- break it up into its component parts and explain how those parts relate to one another. This is like a very sophisticated version of the sentence diagramming you probably did in middle school.

Developing and testing new algorithms requires a lot of practice materials ("corpora"). Most importantly, you need to know what the correct parse (sentence diagram) is for each of your practice sentences. In other words, you need "annotated corpora."
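To make "annotated corpora" concrete, here is a toy sketch of what an anaphora annotation might look like for the "brother John" example above. The markup scheme (markable IDs, chain lists) and the function `same_entity` are invented here for illustration; real projects such as AnaWiki use much richer standoff annotation formats.

```python
# Toy illustration of an anaphora-annotated corpus entry.
# The format here is invented for illustration, not AnaWiki's actual format.

text = "This is my brother John. He is very tall."

# Each "markable" is a phrase that might refer to an entity.
markables = {
    "m1": {"span": "my brother John", "type": "noun phrase"},
    "m2": {"span": "He", "type": "pronoun"},
}

# The annotation an algorithm must learn to produce: which markables
# refer to the same entity (a "coreference chain").
coreference_chains = [["m1", "m2"]]

def same_entity(a, b, chains):
    """Check whether two markables were annotated as co-referring."""
    return any(a in chain and b in chain for chain in chains)

print(same_entity("m1", "m2", coreference_chains))  # True
```

A parser's output can then be scored by comparing its chains against the human-annotated ones, which is exactly why large annotated corpora are so valuable.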

...but creating such corpora by hand annotation is very expensive and time consuming; in practice, it is unfeasible to think of annotating more than one million words.
One million words may seem like a lot, but it isn't really. One of the complaints about one of the most famous word frequency corpora (the venerable Francis & Kucera) is that many important words never even appear in it. If you take a random set of 1,000,000 words, very common words like a, and, and the take up a fair chunk of that set.
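The point about common words can be checked in a few lines. This is just an illustrative count over a made-up snippet, not a real corpus, but any longer passage of ordinary English shows the same skew:

```python
from collections import Counter

# Count how much of a small English sample is taken up by a handful
# of function words. The sample text is made up for this illustration.
sample = (
    "The cat sat on the mat and the dog sat on the rug. "
    "A bird and a cat watched the dog and the bird sang."
).lower().replace(".", "").split()

counts = Counter(sample)
total = len(sample)
function_words = ["the", "a", "and", "on"]
share = sum(counts[w] for w in function_words) / total
print(f"{share:.0%} of tokens are 'the', 'a', 'and', or 'on'")  # 52%
```

With half the token budget going to function words, a million-word corpus leaves surprisingly little room for the rarer content words that researchers actually need examples of.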

Also, consider that when a child learns a language, that child hears or reads many, many millions of words. If it takes so many for a human who is genetically programmed to learn language, how long should it take a computer algorithm? (Computers are more advanced than humans in many areas, but in the basic areas of human competency -- vision, language, etc. -- they are still shockingly primitive.)

However, the success of Wikipedia and other projects shows that another approach might be possible: take advantage of the willingness of Web users to collaborate in resource creation. AnaWiki is a recently started project that will develop tools to allow and encourage large numbers of volunteers over the Web to collaborate in the creation of semantically annotated corpora (in the first instance, of a corpus annotated with information about anaphora).
This is, of course, what makes the Web so exciting. It took a number of years for it to become clear that the Web was not just a method of doing the same things we always did but faster and more cheaply, but actually a platform for doing things that had never even been considered before. It has had a deep impact in many areas of life -- cognitive science research being just one.

Human behavior on display in the subway

Riding Boston's T through Cambridge yesterday, I was reminded of why I love this town. You can learn a lot about a city riding its public transportation (and if the city doesn't have public transportation, then you have learned something, too).

In Russia, for instance, people stare coldly off into space. The blank look can appear hostile to those not accustomed to it, but it's really more representative of how Russians carry themselves in public than representative of what Russians are like more generally (some of the warmest people I know are Russian. They just don't display it on the train). To the extent that people do anything while on the train, they mostly do crossword puzzles (at least in St. Petersburg, where I've spent most of my time).

In Taiwan, reading is rampant. You can see this outside of the subway as well, since there are bookstores everywhere, and they are very popular. This made me feel more at home (I almost always read on the train) than in business-minded Hong Kong, where reading was much less common. Hong Kong is one of my favorite cities, but it's decidedly short on bookstores.

This brings me back to my T ride through Cambridge yesterday. The person sitting next to me was reading what was clearly a language textbook, but I couldn't recognize the writing system. It looked vaguely Asian, but I know enough of Japanese, Chinese and Korean to know it wasn't one of those. Eventually, he closed the book and I saw it was an Akkadian textbook. Akkadian, incidentally, hasn't been spoken in about two thousand years.

That is Cambridge -- and Boston more generally. Many of the people on the train are grading papers, reading scientific articles or studying a language. It's very much a town of academics. (A large percentage of the metro riders also wear Red Sox gear. The two populations are not mutually exclusive.)

Singapore's Science Complex


Among developing countries that are investing heavily in science, Singapore (is Singapore still a developing country?) stands out. A recent article in Nature profiles a massive new public/private science complex called "Fusionopolis." This is a physical-sciences counterpart to the existing "Biopolis."

Although Singapore's overall spending on science is still small compared with that of larger countries ($880 million/year), it is impressive on a per-capita basis. Currently, Singapore is spending 2.6% of its gross domestic product on science, and plans to increase that to 3% by the end of the decade, which would put it ahead of Britain and the United States.

What struck me in the article was that Singapore is very explicit about its goal, and that goal isn't knowledge for knowledge's sake. According to Chuan Poh Lim, chairman of A*STAR, Singapore's central agency for encouraging science and technology, Singapore recognizes it can't compete with China or India as a low-cost manufacturer. "In terms of 'cheaper and faster', we will lose out. We need a framework for innovation."

The ultimate goal is to build an economy with a stronger base in intellectual property, generating both new products and also patent royalties.

Iranian politician moonlights as scientific plagiarist

It appears that one of the plagiarists caught by Harold Garner's Deja Vu web database -- the "author" of a paper 85% of which was stitched together from five papers by other researchers -- is Massoumeh Ebtekar: former spokeswoman for the militant students who held 52 Americans hostage in the US Embassy in Tehran during the Carter administration, former vice-president under Mohammad Khatami, and current member of the Tehran City Council.

Nature, my source for this news, reports that she has blamed this on the student who helped her with the manuscript. This would seem to indicate that the student wrote most or all of the paper despite not being listed as an author...which is a different kind of plagiarism, if one more widely accepted in academia.

As ice melts, oceanography freezes

Nature reports that the US academic oceanographic fleet is scaling back operations due to a combination of budget freezes and rising fuel costs. This means that at least one of its 23 ships will sit out 2009, and two others will take extended holidays.

Even so, more cuts will probably be necessary.

This is, of course, on top of the budgetary crisis at one of the USA's premier physics facilities, Fermilab.

A rash of scientific plagiarism?

Nature reports that Harold Garner of the University of Texas Southwestern Medical Center in Dallas has been scouring the medical literature using an automated text-matching software package to catch plagiarized articles.

A surprising number have been found: 181 papers have been classified as duplicates, sharing 85% of their text, on average, with a previous paper. One quarter of these are nearly 100% identical to a previous publication.

While it is troubling that anybody would be so brazen, the fact that they have gotten away with it so far says something: there are a lot of journals, and a lot of papers. For a plagiarist to be successful, neither the editor nor any of the referees can have read the original article -- this despite the fact that referees are typically chosen because they are experts in the field the article addresses.
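The news brief doesn't describe Garner's actual algorithm, but the core idea of automated text matching can be sketched with a standard baseline: compare two documents by the overlap of their word trigrams ("shingles"), scored with Jaccard similarity. Everything below -- the example sentences, the choice of n=3 -- is my own illustration, not a description of the real tool:

```python
# A minimal sketch of near-duplicate detection via word-trigram overlap.
# This is a textbook baseline, not Garner's actual method.

def shingles(text, n=3):
    """Return the set of n-word sequences ("shingles") in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(doc_a, doc_b, n=3):
    """Jaccard similarity of the two documents' shingle sets."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "we report a novel method for measuring working memory in infants"
copied = "we report a novel method for measuring working memory in adults"
unrelated = "the elephant wore a collar that sends text messages to rangers"

print(similarity(original, copied))     # 0.8 -- most trigrams shared
print(similarity(original, unrelated))  # 0.0 -- no trigrams shared
```

Run over pairs of abstracts, a score near 1.0 flags a likely duplicate for human review, which matches the workflow described in the article: software flags candidates, people confirm.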

That, I think, is the big news: that it is possible to plagiarize so blatantly.

Incidentally, the Nature news brief suggests that the confirmed plagiarism is usually carried out in obscure journals. This means that the plagiarists are gaining relatively little for their effort, and the original authors are losing little.

That said

Garner's project has apparently identified 75,000 abstracts that seem highly similar. It's hard to tell what that means, so we'll have to wait for the full report.

An abstract is about 200 words long. PsychInfo currently lists 10,098 abstracts containing the phrase "working memory." One would assume that, even if all of them are examples of independent work, many are highly similar just by chance. So I hope to find out more about how "highly similar" is being operationalized in this project.

While I suspect that plagiarism is not a huge problem, I still think it is fantastic that people are attacking it with these modern tools, and I think we will be seeing a lot more of this type of work. (Come to think of it, a professor I had in 2002 used automated plagiarism-detection software to screen student homework, so the approach has been around for a while.)

Learning verbs is hard

I am getting ready to write up some of my recent research on verb learning -- a project for which the Dax Study and the Word Sense experiments are both follow-ups. This means a spate of reading. As I come across interesting papers, I'll be sharing them here.

A PsychInfo search turned up a very intriguing paper by Kerstin Meints, Kim Plunkett and Paul Harris published earlier this year in Language and Cognitive Processes on verb learning.

What is tricky about verbs?

Verbs are very difficult words -- both for linguists to describe and for babies to learn. In this post, I'll focus on the second part.

First, unlike nouns, verbs refer to something you can't see. You can point to a ball, but it is much harder to point to thinking. Even action verbs (break, jump) are typically used when the action has already been completed and is no longer visible.

To make matters worse, Lera Boroditsky and Dedre Gentner have noted that verbs are more variable across languages than nouns are, which suggests that innate constraints play a smaller role in verb acquisition -- either because such constraints are weaker or because there simply are fewer of them.

The tricky aspect of verbs that Meints, Plunkett and Harris focus on, though, is the way in which verbs generalize. For instance, to use the verb eat correctly, you have to use it to describe the actions of many different eaters (horse, cow, Paul, Sally, George) as well as many different objects which are eaten (sandwich, apple, old boot).

Nouns, of course, have to be generalized. Not all apples look the same. But then, neither do all acts of eating (politely, messily, with a fork).

Using a verb to its fullest

Meints, Plunkett & Harris noted that for any given verb, there are stereotypical direct objects (John ate the cookie) and unusual direct objects (John ate the bush).

One might imagine that children start off expecting verbs to apply only to stereotypical events, because that's what they actually hear their parents talk about (Don't eat the cookie!). Only later do they learn that the verb extends much more broadly to events which they have never witnessed or discussed (Don't eat the bush!).

This seems like a very plausible learning story, very similar to Tomasello's Verb Islands hypothesis, though I'm not sure if it's one he explicitly endorses (hopefully, I'll be reading more Tomasello shortly).

Alternatively, children might start by assuming a verb can apply across the board. That is, children treat verbs categorically, more in line with the algebraic theories that most linguists seem to endorse but which a number of psychologists find implausible.

The data

The researchers used a tried-and-true method to test the language abilities of young kids: present the kids with two videos side by side (for instance, of John eating a cookie and of Alfred sweeping a floor) and say "look at the eating." If the child knows what "eating" means, they should look at John and not Alfred.

The key manipulation was that the eating event was either stereotypical (John eating a cookie) or unusual (John eating a bush). Note, of course, that a number of verbs were tested, not just eating.

15-month-olds failed at the task; they didn't appear to know any of the verbs. 18-month-olds, however, looked at the correct video regardless of the typicality of the event -- they understood verbs to apply across the board. 24-month-olds, just slightly older, looked at the correct video for the typical event but not for the atypical one. By 3 years old, though, the kids were back to looking at the correct event regardless of typicality (though they reportedly giggled at the atypical events).

What does that mean?

The difficulty with cognitive science is not so much in creating experiments, but in interpreting them. This one is difficult to interpret, though potentially very important, which is why I called it "intriguing."

The results minimally mean that by the time kids can perform this task -- look at an event described by a verb when that verb is mentioned -- they are not immediately sensitive to the typicality of the event. Later, they become sensitive.

At least two issues constrain interpretation. First, we don't really know that the 18-month-old infants were not sensitive to typicality, only that they didn't show it in this experiment. Second, we don't know whether the 24-month-olds thought the event of John eating a bush could not be described by the verb eat (which would be wild!) or if they simply found the video such an implausible instance of the verb eat that they paid equal attention to the other video, just in case a more typical example of eating showed up there.

In conclusion

So which theory of language learning does this result support? I honestly am not sure. If it had shown a growing, expanding interpretation of verb meaning, I might have said it supported something like the Verb Island hypothesis. If children started out with an expansive understanding of the verb and stuck with it, I might say it endorsed a more classic linguistic point of view.

The actual results are some combination of the two, and very hard to understand. (I don't want to try characterizing the authors' interpretation, because I'm still not sure I completely understand it. I recommend reading the paper instead.)


-------
Meints, K., Plunkett, K., & Harris, P. (2008). Eating apples and houseplants: Typicality constraints on thematic roles in early verb learning. Language and Cognitive Processes, 23(3), 434-463. DOI: 10.1080/01690960701726232

Text messages for elephants


It has been widely noted that even in areas too remote or poor to have regular telephone service, cell phones and text messaging are ubiquitous. Now, even elephants send text messages.

Modern conservation
As reported by the New York Times, a protected elephant has been fitted with a collar that sends a text message whenever the elephant nears local farms. This was done after several elephants on a local reservation had to be shot to protect the area farmers. Now, when the elephant wanders from its range, rangers arrive to scare it back.

The article is worth reading in its entirety. It's great that a smart method has been found to help wild animals and human civilization coexist.

What struck me, though, was the note that elephants learn from one another, and deterring one elephant from raiding farms can help stop other elephants as well. If anybody knows more about this, I'd be very interested in hearing what is known about elephant social learning.

Psychic word learning. No, seriously.

When I started to read chapter books, I frequently ran across words I didn't know. Being too lazy to look them up in a dictionary, I just made up definitions for the words and continued along.

Children learn tens of thousands of words, and since they don't generally carry around a pocket dictionary to look up every new word, they are frequently forced to do what I did: make a good guess and run with it. (Contrary to popular belief, adults rarely define words for children, and children don't necessarily ask for definitions, either.)

While we tend to focus on children's acquisition of language, the truth is that even adults must deal with new vocabulary. For one thing, the English language contains many more words than any one person can know. Estimates vary considerably, but you typically see estimates of about 500,000 words in English and only 60,000 in a typical adult's vocabulary. So the chances of coming across a new word are pretty good, especially if you read the New Yorker.

Similarly, new words pop into the language all the time, such as to Bork somebody or to fax a document. (Those are the classic and now somewhat rusty examples. A more modern one is to Swift-Boat a candidate.)

While we can sometimes look up definitions, we often trust our instincts to define these new words for us. How exactly this happens is still not completely understood. Some new uses of old words are probably adopted as a type of metaphor (not that we know how metaphor works, either). Others may be related to derivational morphology (the method of creating a new word from an old word by adding an affix; e.g., happy -> happiness, employ -> employee).

I recently posted a new experiment, the results of which will hopefully help us better understand this process. If you have 5 minutes, please participate (click here). As always, when this study is done, I will post the results here and on the main website.

Science's Call to Arms

In case anyone was wondering, I am far from alone in my call for a new science policy in the coming administration. It is the topic of the editorial in the latest issue of Science Magazine, America's premier scientific journal:

For the past 7 years, the United States has had a presidential administration where science has had little place at the table. We have had a president opposed to embryonic stem cell research and in favor of teaching intelligent design. We have had an administration that at times has suppressed, rewritten, ignored, or abused scientific research. At a time when scientific opportunity has never been greater, we have had five straight years of inadequate increases for U.S. research agencies, which for some like the National Institutes of Health (NIH) means decreases after inflation.

All of this has been devastating for the scientific community; has undermined the future of our economy, which depends on innovation; and has slowed progress toward better health and greater longevity for people around the world.
Dr. Porter, the editorialist, goes on to ask

So if you are a U.S. scientist, what should you do now?

He offers a number of ideas, most of which are probably not practical for a graduate student like myself ("volunteer to advise ... candidates on science matters and issues.").

The one that is most practical, and which anybody can do, is to promote ScienceDebate2008.com. He acknowledges that the program's goal -- a presidential debate dedicated to science -- will not be accomplished in 2008, but the hope is to signal to the media and to politicians that people care about science and science policy.

And who knows? Maybe there will be a science debate in 2012?

Word Sense: A new experiment from the Cognition & Language Lab

The last several months have been spent analyzing data and writing papers. Now that one paper has been published, two are well into the review process, and another is mostly written, it is time at the Cognition and Language lab to start actively collecting data again.

I just posted our first major new experiment since last winter. It is called "Word Sense," and it takes about 5 minutes to complete. It can be taken by anybody of any age and of any language background.

As always, you can view a description of the study at the end. You also will get a summary of your own results.

I'll be writing more about this research project in future posts.

Does literacy still matter?

In an intriguing recent article in Science Magazine (subscription required), Douglas Oard of the University of Maryland asks what the cultural consequences of better speech recognition software will be.

He notes that the reason literacy is so important is the "ephemeral nature of speech." Even after audio recording became cheap, print was still necessary because it is easier to store, skim, and search.

However, new technology is rapidly shifting the balance, as hardware space becomes cheap and computerized searching of audio material becomes effective. Perhaps the costs of learning to read will soon cease to be justified.

Really?

Oard recognizes there will be resistance to the idea (note that he doesn't actually endorse eliminating reading and writing), but he cautions that we should think with our heads, not our biases:

Our parents complained that our generation relied on calculators rather than learning arithmetic... In Plato's Phaedrus, the Pharaoh Thamus says of writing, "If men learn this, it will implant forgetfulness in their souls: They will cease to exercise memory because they rely on that which is written." ... Our generation will unlock the full potential of the spoken word, but it may fall to our children, and to their children, to learn how best to use that gift.


---------
Oard, D. W. (2008). Social science: Unlocking the potential of the spoken word. Science, 321(5897), 1787-1788. DOI: 10.1126/science.1157353

Dead languages

Latin is dead, as dead as dead can be.
First it killed the Romans, and now it's killing me.

But not kids in Westchester County, it would seem.

The linked article also notes that the number of students taking the National Latin Exam has risen steadily in the last few years. As somebody who studied Latin in high school and who loves languages generally, that seems like a good thing. But I do have to wonder: why Latin?

The science of flirting and teasing


Flirting appears to be a universal -- and I would venture, innate -- human behavior. It is so universal that the degree to which many aspects of it are downright odd often goes unnoticed.

One of the more bewildering aspects of flirting is the degree to which it involves -- on the surface, at least -- insulting one another. This is summed up rather unironically on a dating-tips website (check out this article as well):

"From the outside, teasing seems to be a twisted pleasure: affectionate and sort of insulting all at once. Teasing is a very articulate way of winning a person's attraction. It actually helps bring people closer."

Huh?

Something about this analysis seems right, but the "why" seems very unconvincing. Teasing works because it draws attention and brings people closer together. Teasing also leads to bar fights and school shootings.

What gives?

Teasing to reduce social space
Part of an answer appears in Penelope Brown & Stephen Levinson's classic 1978 Politeness: Some universals in language use. On page 72 of the second edition, they note in passing that "a criticism, with the assertion of mutual friendship, may lose much of its sting -- indeed ... it often becomes a game and possibly even a compliment."

I'm not completely sure where they were going with that, but one possible interpretation is that there are things that can be said between friends but not between strangers (criticism, for instance). So when you criticize somebody, you are either being offensive or asserting friendship. If the criticism is done in the right tone under the right circumstances, it comes across as an assertion of friendship.

Of course, the balance can be hard to maintain and it's easy to foul up.

I don't think this is a complete explanation by any means, but there seems to be something right about it. I'm just beginning to read more in this area of pragmatics, so hopefully I'll have more to add in the future. If anybody is more familiar with this line of research and has something to add, comment away!

What Babies Pay Attention To


The world is a busy place, and there is much for an infant to learn. The typical infant, when carried around town by her parent and not asleep, fixates curiously and intently on the world around her.

Of course, there are a lot of things in the world that are relevant and worth knowing about, and many which are probably not worth paying too much attention to. Since the baby has so much to learn, it would be ideal if children had some mechanism for knowing what they should learn and what they can safely ignore.

The Pedagogical Learning Stance

Gergely Csibra and Gyorgy Gergely have argued that specific social cues are used by caregivers to direct infants to those things most worth learning about. They refer to this as a form of "pedagogy," but my sense is that they don't mean anything much like formal education -- these cues can be exchanged without the adult necessarily being aware of them.

Their theory has drawn more attention to the ways in which adults and infants communicate, and what they communicate about. In a recent paper published in Science Magazine, Csibra, Gergely and colleagues suggest that they have found a partial solution to an old riddle in human development.

Perseveration

Perseverance -- sticking to your goals -- is often an admirable quality. Perseveration -- fixating on the same thing long after it ceases to be relevant or useful -- is not.

Babies are known to perseverate. In one particularly classic experiment, Piaget found a strange perseveration in infant behavior. The experiment is easy to replicate at home and works like this:

Put a ball in a bucket such that the infant cannot see into the bucket. Let the infant retrieve the ball from the bucket. Repeat this several times.

Now, in full view of the child, put the ball in a different bucket. Despite the fact that the infant just saw the ball go into the second bucket -- and despite the fact that infants are very good at tracking hidden objects and remembering where they are (contrary to earlier belief, infants have no problems with basic object permanence) -- they will typically look for the ball in the first bucket (81% of the time in the current study).

Are babies just stupid?

This failure on the part of the infants to carry out this simple task has often been described as a failure of inhibition. The babies remember that a particular action (searching in the first bucket) has typically led to a positive reward (a ball to play with). Even though they have information suggesting that this won't work this time around, they have difficulty inhibiting what is now an almost instinctual behavior.

What the new study shows is that a fair portion of this is due to the way the experimenter behaves during the experiment. If the experimenter actively engages the baby's attention during the task, the babies show the typical effect of continuing to search in Bucket 1 even when they saw the ball go into Bucket 2.

However, if the experimenter does not directly engage the baby (looking off to the side, not speaking to the baby, etc.), the baby actually does much better (fewer than 50% look in the wrong bucket, down from about 80%). The authors argue that this shows the perseveration effect can't be due simply to motor priming.

They suggest, instead, that by socially interacting with the baby, the experimenter is suggesting to the baby that there is something to be learned here: namely, that a ball can always be found in Bucket 1. When the experimenter does not socially engage the baby, the baby has no reason to make that inference.

Limitations

One might suggest that the babies in the non-social condition were less likely to perseverate because they were less interested in the game and just didn't learn the connection between the ball and Bucket 1 as well.

This is to some extent what the authors are also suggesting. It's worth pointing out, though, that attention does not appear to be essential to priming: you can be primed by something you aren't even aware of (subliminal priming). Still, one could imagine some mechanism other than the infants believing the adult in the social condition wanted them to learn an association between the ball and Bucket 1.

Also, it is important to note that even in the non-social condition, nearly half of the infants perseverated anyway.

-------
Topal, J., Gergely, G., Miklosi, A., Erdohegyi, A., & Csibra, G. (2008). Infants' perseverative search errors are induced by pragmatic misinterpretation. Science, 321(5897), 1831-1834. DOI: 10.1126/science.1161437

Why the Insanity Defense is Un-scientific

In his classic book Emotional Intelligence, Daniel Goleman presents a very compelling case for studying individual differences in social and emotional skills. Since people who are less empathic -- less aware of others' thoughts and feelings -- are apparently more likely to commit crime, Goleman argues that this raises the issue of what to do about criminals who are biologically limited in their empathic abilities:

If there are biological patterns at play in some kinds of criminality -- such as a neural defect in empathy -- that does not argue that all criminals are biologically flawed, or that there is some biological marker for crime ... Even if there is a biological basis for a lack of empathy in some cases, that does not mean all who have it will drift to crime; most will not. A lack of empathy should be factored in with all other psychological, economic, and social forces that contribute to a vector toward criminality.
This may seem like a reasonable, middle-of-the road take on the issue, but I would argue that it is actually an extremely radical, non-scientific statement. (Since it is such a common sentiment, I realize this means I may be calling most of the public crazy radicals. So be it. Sometimes, that's the case.)

The Fundamental Axiom of Cognitive Science

It is the scientific consensus that all human behavior is the result of activity in the brain. Like gravity and the laws of thermodynamics, this cannot be proven beyond a doubt (in fact, there's a good argument that nothing can be proven beyond a doubt). However, it is the foundation upon which modern psychology and neuroscience are built, and there is no good reason to doubt it.

In fact, many clinical phenomena cannot be understood otherwise. Classic examples are people who, as a result of brain injury, are unaware that they can see, deny that the left half of the world exists, or believe their own leg is a foreign object that is not part of their body. The oddest fact about such syndromes is that patients are sometimes completely unaware of their problem and cannot understand it even when it is explained to them (Oliver Sacks is a great source of such case histories).

The Problem for the Insanity Defense

Back to Goleman. He writes "Even if there is a biological basis for a lack of empathy in some cases..."

I hope that the fundamental problem with the quote is now clear. The consensus of the scientific community is that for any behavior, personality trait, or disposition, there is always a biological basis. There are fundamental brain differences between Red Sox fans and Yankees fans, or between those attracted to Tom Cruise and those attracted to Nicole Kidman. There is some brain state that constitutes being a Red Sox fan.

So whenever somebody says, "My brain made me do it," they are telling the truth.

The Soft Bigotry of Medical Evidence

As matters currently stand, however, certain brain differences are privileged over others. Say Jane and Sally are accused of similar crimes, but Jane has a known brain "abnormality" that correlates with criminal behavior, while Sally has no such brain data to point to. They may well face different sentences.

However, the difference between Jane and Sally may have more to do with the state of our scientific understanding than anything else. If Sally is predisposed to crime in some way, then it must be because of some difference in her brain. At the very least, if you were able to take a snapshot of her brain during the moments leading up to the crime, there would be some difference between her brain and the brain of Rebecca, who had the opportunity to commit the crime but chose not to, because the act of choosing is itself a brain state.

The effect is to discriminate between people based not on their actions or their persons, but on the current state of medical knowledge.

A problem without an easy solution

I think that most people prefer that the legal system only punish those who are responsible for wrongdoing. If we exclude from responsibility everybody whose actions are caused by their brains, we must exclude everybody. If we include even those who clearly have little understanding or control of their own actions, that seems grossly unfair.

I don't have any insight into how to solve the problem, but I don't think the current standard is workable. It is an exceptionally complex problem, and many very smart people have thought about it very hard. I hope they come up with something good.

New research on understanding metaphors

Metaphors present a problem for anybody trying to explain language, or anybody trying to teach a computer to understand language. It is clear that nobody is supposed to take the statement, "Sarah Palin is a barracuda" literally.


However, we can imagine that such phrases are memorized like any other idiom or, for that matter, any word. Granted, we aren't sure how word-learning works, but at least metaphor doesn't present any new problems.

Clever Speech

At least, not as long as it's a well-known metaphor. The problem is that the most entertaining and inventive language often involves novel metaphors.

So suppose someone says "Sarah Palin is the new Harriet Miers." It's pretty clear what this means, but it seems to require some very complicated processing. Sarah Palin and Harriet Miers have many things in common. They are white. They are female. They are Republican. They are American. They were born in the 20th Century. What are the common characteristics that matter?

This is especially difficult, since in a typical metaphor, the common characteristics are often abstract and only metaphorically common.
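The difficulty can be made concrete with a toy sketch. Naive feature intersection recovers everything the two figures share, but says nothing about which shared feature carries the metaphor. All of the attribute lists below are illustrative assumptions, not a real model of metaphor comprehension.

```python
# Toy illustration: why "X is the new Y" is hard to interpret.
# Set intersection finds all common attributes, but cannot rank
# which shared feature the metaphor is actually about.
# (All attributes here are illustrative, not a real knowledge base.)

palin = {"white", "female", "Republican", "American",
         "governor", "nominated amid doubts about qualifications"}
miers = {"white", "female", "Republican", "American",
         "lawyer", "nominated amid doubts about qualifications"}

shared = palin & miers  # set intersection: all common attributes
for attribute in sorted(shared):
    print(attribute)
# Every shared attribute comes out with equal weight; picking the
# relevant one requires world knowledge the intersection lacks.
```

The sketch only makes the problem vivid: a human reader zeroes in on the relevant shared property instantly, while nothing in the computation itself distinguishes it from "white" or "American."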

Alzheimer's and Metaphor

Some clever new research just published in Brain and Language looked at comprehension of novel metaphors in Alzheimer's Disease patients.

It is already known that AD patients do reasonably well at comprehending well-known metaphors. But what about new metaphors?

Before I get to the data, a note about why anybody would bother troubling AD patients with novel metaphors: neurological patients can often help discriminate between theories that are otherwise difficult to distinguish. In this case, one theory is that something called executive function is important in interpreting new metaphors.

Executive function is hard to explain and much about it is poorly understood, but what is important here is that AD patients are impaired in terms of executive function. So they provide a natural test case for the theory that executive function is necessary to understand novel metaphors.

The results

In this study, AD patients were as good as controls at understanding popular metaphors. While control participants were also very good at novel metaphors, AD patients had a marked difficulty. This may suggest that executive function is important in understanding novel metaphors and gives some credence to theories based around that notion.

This still leaves us a long way from understanding how humans so easily draw abstract connections between largely unrelated objects to produce and understand metaphorical language. But it's another step in that direction.


-----
M. Amanzio, G. Geminiani, D. Leotta, S. Cappa (2008). Metaphor comprehension in Alzheimer's disease: Novelty matters. Brain and Language, 107(1), 1–10. DOI: 10.1016/j.bandl.2007.08.003

Who advises McCain and Obama on science issues?

I mentioned recently that Obama's statements on science policy convinced me that he had actually talked to some scientists and understood what it's like on the ground. McCain has yet to convince me.

I wasn't surprised, then, to see in this week's Science a report that Obama has been very active in soliciting advice from scientists, whereas McCain's advisory committee was described as "two guys and a dog."

The article (subscription required) details interactions between scientists and the two campaigns. The primary additional piece of analysis that struck me was the following statement:

For many U.S. academic researchers, presidential politics comes down to two big issues: getting more money for science and having a seat at the table. The first requires agreement between the president and Congress, however, and any promise to increase research spending could easily be derailed by the Iraq war, an ailing economy, and rising health care and energy costs. That puts a premium on the second issue, namely, the appointment of people who will make the key decisions in the next Administration.
This makes the open nature of the Obama campaign a good sign.

The article also reports that Obama's science advisors weren't even asked whether they supported his candidacy. After an administration that excluded anyone with a contrary opinion -- or contrary facts -- that is also encouraging.

Do you have the memory of a crow?

It appears that humans aren't the only ones with exceptionally good long-term memory. Crows not only remember individual human faces over long periods of time but even seem to be able to communicate information about the people in question to other crows.

That animals, especially birds, have good memories is not all that surprising. That they remember human faces so well is striking.

There is an ongoing debate in the literature about whether humans are so good at processing faces because we have specialized neural circuitry for human faces. Given that humans are an intensely social species, it would make sense for us to develop special face-recognition systems. It remains to be seen just how good crow memory for human faces is (the study in question is limited in some ways), but if their human face perception is very good, that would call for a very interesting explanation.

Who does Web-based experiments?

Obviously, I would prefer that people do my Web-based experiments. Once you've done those, though, the best place to find other Web-based experiments is the list maintained at the University of Hanover.

Who is posting experiments?

One interesting question that can be answered by this list is who exactly does experiments online. I checked the list of recent experiments posted under the category of Cognition.

From June 1st through September 12th, experiments were posted by:

Brown University: 2
UCLA: 1
University College London: 2
University of Cologne: 1
Duke University: 2
University of London: 1
Harvard University: 1
University of Saskatchewan: 3
University of Leeds: 2
University of Minnesota: 1

US science funding stagnates (China charges ahead)

When the New York Times talks about the US falling behind in science -- or when I do -- it's worth looking at what we mean.

The US has long been the world leader in science and technology. In 2003, the US accounted for 30% of all scientific publications, and in 2005 it accounted for 30% of all research expenditures. However, that first number has fallen precipitously (it was 38% in 1992), probably because the second number is also falling.

Adding up the numbers
Probably the two most significant sources of public science funding in the US are the National Science Foundation (NSF), which covers most types of foundational research, and the National Institutes of Health (NIH), which funds medicine- and health-related research (broadly interpreted -- I've done research using NIH funds).

The following chart shows the levels of NIH funding during the Bush administration.

[Chart: NIH funding levels during the Bush administration]
As can be seen, the numbers are pretty flat from 2003 on. In fact, funding hasn't even kept up with inflation.

Here are the numbers for NSF, which are similarly flat in recent years:

[Chart: NSF funding levels]
To compare with the previous administration, NIH's budget increased 44% in the Bush years -- mostly during the first two. In contrast, it grew over 70% during the Clinton years. (I couldn't track down NSF funding levels in 1992.)
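Those headline percentages are easier to compare as compound annual rates. A minimal sketch, using the 44% and 70% figures above and assuming each administration spans eight fiscal years:

```python
# Convert total budget growth over an administration into a compound
# annual growth rate, assuming an eight-year span for each.

def annualized_growth(total_growth: float, years: int) -> float:
    """Total fractional growth over `years` -> compound annual rate."""
    return (1 + total_growth) ** (1 / years) - 1

bush_rate = annualized_growth(0.44, 8)     # NIH, Bush years: +44%
clinton_rate = annualized_growth(0.70, 8)  # NIH, Clinton years: +70%

print(f"Bush era:    {bush_rate:.1%} per year")
print(f"Clinton era: {clinton_rate:.1%} per year")
```

By this rough measure, the Bush-era rate works out to about 4.7% per year against roughly 6.9% for the Clinton era, before any adjustment for inflation.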

Well, at least funding isn't decreasing. That's good, right?

Steady levels of funding are better than falling levels of funding, but only barely. First, research has driven the US economy for a long time, but its importance grows with each year. This means it requires more investment.

Second, research becomes more expensive with time. The Clinton and Bush years witnessed the incredible explosion in neuroimaging, which has revolutionized neuroscience. Neuroimaging is also incredibly expensive. (My off-hand recollection is that it costs about $500/hour to use an fMRI machine.) The number of neuroimaging projects has grown exponentially in the last two decades. That money must come from somewhere.
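To see how quickly scanner fees add up, here is a back-of-the-envelope sketch using the ~$500/hour figure above; the sample size and session length are hypothetical, chosen purely for illustration.

```python
# Rough scanner-time cost for a hypothetical fMRI study.
# Only the hourly rate comes from the text; subject count and
# session length are illustrative assumptions.

HOURLY_RATE = 500        # dollars per scanner hour (quoted estimate)
subjects = 20            # hypothetical sample size
hours_each = 1.5         # hypothetical scan time per subject

scanner_cost = HOURLY_RATE * subjects * hours_each
print(f"Scanner time alone: ${scanner_cost:,.0f}")
```

That's $15,000 for scanner time alone, before salaries, analysis, or subject payments -- for one modest study in an exponentially growing field.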

Also, regarding the US's position relative to the rest of the world, it's important to point out that other countries are emphatically not dropping the ball. These are China's government science and technology expenditures from 2001 to 2006:

[Chart: Chinese government science and technology expenditures, 2001–2006]
That is much more like it. Chinese research expenditures have been increasing rapidly for the last couple decades, but I graphed only the Bush-era data I could find in order to make it comparable to the charts above.

China is of course not alone. The EU, like China, currently lags far behind the US in terms of research expenditures. However, the EU parliament adopted a plan to increase expenditures (this includes private-sector spending) to 3% of GDP, which would put it slightly ahead of the US in terms of percentage of GDP and well ahead of the US in terms of total expenditures (the EU's GDP is larger than that of the US).

The Road Ahead

Although nobody should be a one-issue voter, I firmly believe that investment in science funding is crucial to America's future. As I pointed out recently, Barack Obama has repeatedly called for substantial increases in US science funding. If John McCain is interested in increasing science funding, I can't find evidence of it.