Field of Science

New research on understanding metaphors

Metaphors present a problem for anybody trying to explain language, or anybody trying to teach a computer to understand language. It is clear that nobody is supposed to take the statement "Sarah Palin is a barracuda" literally.

However, we can imagine that such phrases are memorized like any other idiom or, for that matter, any word. Granted, we aren't sure how word-learning works, but at least metaphor doesn't present any new problems.

Clever Speech

At least, not as long as it's a well-known metaphor. The problem is that the most entertaining and inventive language often involves novel metaphors.

So suppose someone says "Sarah Palin is the new Harriet Miers." It's pretty clear what this means, but it seems to require some very complicated processing. Sarah Palin and Harriet Miers have many things in common. They are white. They are female. They are Republican. They are American. They were born in the 20th Century. What are the common characteristics that matter?

This is especially difficult, since in a typical metaphor, the common characteristics are often abstract and only metaphorically common.

Alzheimer's and Metaphor

Some clever new research just published in Brain and Language looked at comprehension of novel metaphors in Alzheimer's Disease patients.

It is already known that AD patients do reasonably well on comprehending well-known metaphors. But what about new metaphors?

Before I get to the data, a note about why anybody would bother troubling AD patients with novel metaphors: neurological patients can often help discriminate between theories that are otherwise difficult to distinguish. In this case, one theory is that something called executive function is important in interpreting new metaphors.

Executive function is hard to explain and much about it is poorly understood, but what is important here is that AD patients are impaired in terms of executive function. So they provide a natural test case for the theory that executive function is necessary to understand novel metaphors.

The results

In this study, AD patients were as good as controls at understanding popular metaphors. While control participants were also very good at novel metaphors, AD patients had marked difficulty with them. This may suggest that executive function is important in understanding novel metaphors and gives some credence to theories built around that notion.

This still leaves us a long way from understanding how humans so easily draw abstract connections between largely unrelated objects to produce and understand metaphorical language. But it's another step in that direction.

M. Amanzio, G. Geminiani, D. Leotta, S. Cappa (2008). Metaphor comprehension in Alzheimer's disease: Novelty matters. Brain and Language, 107(1), 1-10. DOI: 10.1016/j.bandl.2007.08.003

Who advises McCain and Obama on science issues?

I mentioned recently that Obama's statements on science policy convinced me that he had actually talked to some scientists and understood what it's like on the ground. McCain has yet to convince me.

I wasn't surprised, then, to see in this week's Science a report that Obama has been very active in soliciting advice from scientists, whereas McCain's advisory committee was described as "two guys and a dog."

The article (subscription required) details interactions between scientists and the two campaigns. The primary additional piece of analysis that struck me was the following statement:

For many U.S. academic researchers, presidential politics comes down to two big issues: getting more money for science and having a seat at the table. The first requires agreement between the president and Congress, however, and any promise to increase research spending could easily be derailed by the Iraq war, an ailing economy, and rising health care and energy costs. That puts a premium on the second issue, namely, the appointment of people who will make the key decisions in the next Administration.
This makes the open nature of the Obama campaign a good sign.

The article also reports that Obama's science advisors weren't even asked whether they supported his candidacy. After an administration that excluded anyone with a contrary opinion -- or contrary facts -- that is also encouraging.

Do you have the memory of a crow?

It appears that humans aren't the only ones with exceptionally good long-term memory. Crows not only remember individual human faces over long periods of time but even seem to be able to communicate information about the people in question to other crows.

That animals, especially birds, have good memories is not all that surprising. That they remember human faces so well is striking.

There is an ongoing debate in the literature about whether humans are so good at processing faces because we have specialized neural circuitry for human faces. Given that humans are an intensely social species, it would make sense for us to develop special face-recognition systems. It remains to be seen just how good crow memory for human faces is (the study in question is limited in some ways), but if their human face perception is very good, that would call for a very interesting explanation.

Who does Web-based experiments?

Obviously, I would prefer that people do my Web-based experiments. Once you've done those, though, the best place to find more Web-based experiments is the list maintained at the University of Hanover.

Who is posting experiments?

One interesting question that can be answered by this list is who exactly does experiments online. I checked the list of recent experiments posted under the category of Cognition.

From June 1st through September 12, experiments were posted by:

Brown University: 2
University College London: 2
University of Cologne: 1
Duke University: 2
University of London: 1
Harvard University: 1
University of Saskatchewan: 3
University of Leeds: 2
University of Minnesota: 1

US science funding stagnates (China charges ahead)

When the New York Times talks about the US falling behind in science -- or when I do -- it's worth looking at what we mean.

The US has long been the world leader in science and technology. In 2003, the US accounted for 30% of all scientific publications, and in 2005 it accounted for 30% of all research expenditures. However, that first number has fallen precipitously (it was 38% in 1992), probably because the second number is also falling.

Adding up the numbers
Probably the two most significant sources of public science funding in the US are the National Science Foundation (NSF), which covers most types of basic research, and the National Institutes of Health (NIH), which funds medicine- and health-related research (broadly interpreted -- I've done research using NIH funds).

The following chart shows the levels of NIH funding during the Bush administration.

As can be seen, the numbers are pretty flat from 2003 on. In fact, funding hasn't even kept up with inflation.

Here are the numbers for NSF, which are similarly flat in recent years:

To compare with the previous administration, NIH's budget increased 44% in the Bush years -- mostly during the first two. In contrast, it grew over 70% during the Clinton years. (I couldn't track down NSF funding levels in 1992.)

Well, at least funding isn't decreasing. That's good, right?

Steady levels of funding are better than falling levels of funding, but only barely. First, research has driven the US economy for a long time, but its importance grows with each year. This means it requires more investment.

Second, research becomes more expensive with time. The Clinton and Bush years witnessed the incredible explosion in neuroimaging, which has revolutionized neuroscience. Neuroimaging is also incredibly expensive. (My off-hand recollection is that it costs about $500/hour to use an fMRI machine.) The number of neuroimaging projects has grown exponentially in the last two decades. That money must come from somewhere.

Also, in terms of the US's relative position with the rest of the world, it's important to point out that other countries are emphatically not dropping the ball. These are China's government science and technology expenditures from 2001 to 2006:

That is much more like it. Chinese research expenditures have been increasing rapidly for the last couple decades, but I graphed only the Bush-era data I could find in order to make it comparable to the charts above.

China is of course not alone. The EU, like China, currently lags far behind the US in terms of research expenditures. However, the EU parliament adopted a plan to increase expenditures (this includes private-sector spending) to 3% of GDP, which would put it slightly ahead of the US in terms of percentage of GDP and well ahead of the US in terms of total expenditures (the EU's GDP is larger than that of the US).

The Road Ahead

Although nobody should be a one-issue voter, I firmly believe that investment in science funding is crucial to America's future. As I pointed out recently, Barack Obama has repeatedly called for substantial increases in US science funding. If John McCain is interested in increasing science funding, I can't find evidence of it.

Where else to read the Cognition and Language Lab blog

Dear readers.

As many of you know, I have what amounts to a mirror of this blog running at another site. I maintain this one because it has some great features, like good archiving, better formatting, and the ability to save drafts of posts.

However, almost all the traffic is over at the other site. This means that while those posts get a number of comments, the ones here do not. So if you like reading comments or enjoy the give-and-take of a conversation in comments, you might prefer to read the other site.

This has gotten even easier now that the other site has added an RSS feed for my posts, which you can find here. Its purveyor promises to add some of that extra functionality eventually, but until he does, I will continue posting here as well, so if you prefer this site, you should be able to continue reading it for some time.

Which do you answer: Mail or Email?

One of the most difficult problems in research on humans is response bias. If you want to study rabbits, it's relatively easy to get a random sample of rabbits to test. Rabbits have very little say in the matter.

Humans, on the other hand, can choose to participate or not. Certain types of people tend not to participate in experiments (low-income, rural people, for instance), while other groups provide the bulk of test subjects (college psychology majors, for instance). If you are studying something thought to be fairly independent of geography, SES or education (low-level vision, for instance), this may not matter so much. If you are studying social attitudes, it does.

Web-based Response Bias

One potential concern about Web-based research is that it changes the dynamics of the response bias. Surveying college undergraduates may involve considerable sample bias, but at least we know what the bias is. Some researchers are concerned that Web-based studies must suffer from some response bias of their own (suburban teenagers are probably over-represented on the Web), but it's one we don't understand as well and have less control over.

At least one study suggests that, at least in some circumstances, we may not need to worry about this. This is according to Sabina Gesell, Maxwell Drain and Michael Sullivan, writing in the latest issue of the International Journal of Internet Science. They ran an employee-satisfaction survey of hospital employees. Half were sent surveys over the Web, and half by regular mail. The response rate (the proportion of surveys completed) was equivalent for the two methods, and the reported job satisfaction of both groups was identical.

Interestingly, the respondents were also asked whether they preferred a Web-based or paper-based survey. The most common response was that they did not care, but of those who expressed an opinion, the majority preferred the survey type they had actually received (sounds like cognitive dissonance).


Sabina B. Gesell, Maxwell Drain, Michael P. Sullivan (2007). Test of a Web and paper employee satisfaction survey: comparison of respondents and non-respondents. International Journal of Internet Science, 2(1), 45-58.

How to win at baseball (Do managers really matter?)

It's a standard observation that when a team does poorly, the coach -- or in the case of baseball, the manager -- is fired, even though it wasn't the manager dropping balls, throwing the wrong direction or striking out.

Of course, there are purported examples of team leaders who seem to produce teams better than the sum of their parts. Bill Belichick seems to be one, even modulo the cheating scandals. Cito Gaston is credited with transforming the Blue Jays from a sub-.500 team into a powerhouse not once but twice; his best claim to excellence is this season, in which he took over halfway through the year.

But what is it they do that matters?

Even if one accepts that managers matter, the question remains: how do they matter? They don't actually play the game. Perhaps some give very good pep talks, but one would hope that the world's best players would already be trying their hardest pep talk or no.

In baseball, one thing the manager controls is the lineup: who plays, and the order in which they bat. While managers have their own different strategies, most lineups follow a basic pattern, the core of which is to put one's best players first.

There are two reasons I can think of for doing this. First, players at the top of the lineup tend to bat more times during a game, so it makes sense to have your best players there. The other reason is to string hits together.

The downside of this strategy is that innings in which the bottom of the lineup bats tend to be very boring. Wouldn't it make sense to spread out the best hitters so that in any given inning, there was a decent chance of getting some hits?

How can we answer this question?

To answer this question, I put together a simple model. I created a team of four .300 hitters and five .250 hitters. At every at-bat, a player's chance of reaching base was exactly their batting average (a .300 hitter reached base 30% of the time). All hits were singles. Base-runners always moved up two bases on a hit.

I tested two lineups: one with the best players at the top, and one with them alternating between the poorer hitters.

This model ignores many issues, such as base-stealing, double-plays, walks, etc. It also ignores the obvious fact that you'd rather have your best power-hitting bat behind people who get on base, making those home-runs count for more. But I think if batting order has a strong effect on team performance, it would still show up in the model.
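The whole model fits in a few lines of Python. Here is a minimal sketch of the rules described above (the function and lineup names are my own, and this is a reconstruction, not the original code): every at-bat is an independent trial, all hits are singles, and runners advance two bases on a hit.

```python
import random

def simulate_season(lineup, games=162, innings=9):
    """Simulate one season of the simplified model and return total runs.

    Each at-bat is a hit (always a single) with probability equal to the
    player's batting average; runners advance two bases on every hit.
    """
    runs = 0
    spot = 0  # batting-order position carries over between innings
    for _ in range(games):
        for _ in range(innings):
            outs = 0
            bases = [False, False, False]  # runners on 1st, 2nd, 3rd
            while outs < 3:
                average = lineup[spot % len(lineup)]
                spot += 1
                if random.random() < average:
                    # Runners on 2nd and 3rd score; the runner on 1st
                    # takes 3rd; the batter takes 1st.
                    runs += bases[1] + bases[2]
                    bases = [True, False, bases[0]]
                else:
                    outs += 1
    return runs

good, poor = 0.300, 0.250
stacked = [good] * 4 + [poor] * 5  # best hitters at the top of the order
spread = [good, poor, good, poor, good, poor, good, poor, poor]

random.seed(1)  # make the simulation reproducible
seasons = 20
stacked_avg = sum(simulate_season(stacked) for _ in range(seasons)) / seasons
spread_avg = sum(simulate_season(spread) for _ in range(seasons)) / seasons
print(stacked_avg, spread_avg)
```

Run it a few times with different seeds and the two lineups land within a handful of runs per season of each other, which is the pattern reported below.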

Question Answered

I ran the model on each of the lineups for twenty full 162-game seasons. The results surprised me. The lineup with the best players interspersed scored nearly as many runs in the average season (302 1/4) as the lineup with the best players stacked at the top of the order (309 1/2). Some may note that the traditional lineup did score on average 7 more runs per season, but the difference was not actually statistically significant, meaning that the two lineups were in a statistical tie.

Thus, it doesn't appear that stringing hits together is any better than spacing them out.

One prediction did come true, however. Putting your best hitters at the front of the lineup is better than putting them at the end (291 1/2 runs per season), presumably because the front end of the lineup bats more times in a season. Although the difference was statistically significant, it still amounted to only 1 run every 9 games, which is less than I would have guessed.

Thus, the decisions a manager makes about the lineup do matter, but perhaps not very much.

Parting thoughts

This was a rather simple model. I'm considering putting together one that does incorporate walks, steals and extra-base hits in time for the World Series in order to pick the best lineup for the Red Sox (still not sure how to handle sacrifice flies or double-plays, though). This brings up an obvious question: do real managers rely on instinct, or do they hire consultants to program models like the one I used here?

In the pre-Billy Beane/Bill James world, I would have said "no chance." But these days management is getting much more sophisticated.

Science Funding and Presidential Politics

John McCain has just answered a questionnaire from Scientists and Engineers for America (Obama did so several weeks ago).

You should read the answers yourself, but as someone who expects to be involved in American science for decades to come, I found McCain's disappointing.

Obama begins his answer to the first question about American innovation this way:

Ensuring that the U.S. continues to lead the world in science and technology will be a central priority for my administration. Our talent for innovation is still the envy of the world, but we face unprecedented challenges that demand new approaches. For example, the U.S. annually imports $53 billion more in advanced technology products than we export. China is now the world’s number one high technology exporter. This competitive situation may only worsen over time because the number of U.S. students pursuing technical careers is declining. The U.S. ranks 17th among developed nations in the proportion of college students receiving degrees in science or engineering; we were in third place thirty years ago.
This reassures me that he understands the problem. McCain, on the other hand, merely says "I have a broad and cohesive vision for the future of American innovation. My policies will provide broad pools of capital, low taxes and incentives for research in America, a commitment to a skilled and educated workforce, and a dedication to opening markets around the world."

Solutions to our Problems

OK. Maybe McCain isn't as good at setting up the problem as Obama is. What does his broad and cohesive vision look like? Most of his proposals sound nice, but it's hard to tell what exactly he means. For instance, he wants to "utilize the nation's science and technology infrastructure to develop a framework for economic growth both domestically and globally."

Sounds good. How? (One way might be to make the R&D tax credit permanent, something which Obama supports but which McCain strangely neglects to mention.)

Other parts of the boilerplate sound like he is merely suggesting we do what we are already doing. I am referring to points like "Fund basic and applied research in new and emerging fields such as nanotechnology and biotechnology..."

Hmmm. We already do that. Maybe he intends to increase funding for such projects, but he doesn't say.

McCain also says he has supported and will continue to support increasing federal funding for research. However, he doesn't say how much. Federal funding has increased over the last few years. It's just not keeping up with inflation. So hazy talk about "increasing funding" may well be meaningless.

One of the few concrete proposals McCain makes is to "eliminate wasteful earmarks in order to allocate funds for science and technology investments." Sounds good. There are $16 billion of earmarks in the 2008 federal budget. The budget for the National Institutes of Health alone is $28 billion. So even if all the "earmarks savings" were spent on science -- and he has other things he wants to do with that money -- we couldn't even get close to doubling science funding, as Obama has proposed.

Does Obama do any better?

If I find McCain's answers discouraging, Obama's are heartening. Although he uses fewer words than McCain, those words are packed with specific proposals, such as doubling federal funding for basic research over ten years and making the R&D tax credit permanent.

Within the general increase in science funding, Obama shows again that he, or at least his advisors, actually know something about the state of American science, in that he singles out the need to "increase research grants for early-career researchers to keep young scientists entering these fields."

The problem that he is referring to is that the average age at which a scientist now receives their first NIH grant is over 40. While the truism that scientists do their best work before the age of 40 is less true for the biomedical researchers NIH funds than it is for mathematicians and physicists, this funding trend is still worrisome.

While on that topic, of all the 2007 applications to NIH for a new R01 grant -- the bread-and-butter grant that funds many or most large labs -- only 19% were funded. While we certainly want money to go to the best projects, it's important to remember that when a scientist doesn't get a grant, she doesn't just go back to her day job. Science is her day job. So she has to apply again. With 81% of scientists applying to the NIH failing to get funding each year, that means many, many burdensome reapplications -- taking time and money away from doing actual science.

All that is just another reason that significantly increasing federal funding for research is crucial.

I won't go into detail about Obama's other policies, but I found them similarly encouraging and, frankly, a breath of fresh air.

The Candidate with Vision and Expertise

When Obama first began campaigning, some people wondered what the substance behind his vision was. As I read through his responses to this questionnaire, I was struck time and time again that (1) he seemed to really understand and appreciate what challenges I face in my daily attempts to do science, and (2) he had concrete plans to address those issues.

McCain's answers, on the other hand, rang hollow and out of touch.

How many memories fit in your brain? More than we thought

One of the most obvious facts about memory is that it is not nearly so good as we would like. This definitely seems true in day-to-day life, and one focus of my research during the last couple years has been why our ability to remember what we see over even very short time periods is so very limited.

So memory is crap, right?

It may be hard to remember a visual scene over a very short time period, but new evidence suggests that it is remarkably easy to remember a visual image over a longer period of time (several hours).

Researchers at MIT showed participants nearly 3,000 visual images (see some of them here) over the course of 5 1/2 hours. Afterwards, their memory was tested. They were able to discriminate the pictures they actually saw from slightly altered versions of the same picture nearly 90% of the time.

This is frankly incredible. When I show participants much, much simpler images and then ask them to recognize the same images just 1 second later, accuracy is closer to 80%!

These results are going to necessitate some re-thinking of the literature. They suggest that our brains are storing a lot more information than many of us thought just a little while ago. They also suggest a very strange interaction between time and memory strength that will need to be better understood.

So this is a surprise?

Yes, and no. The results are surprising, but their publication last week in the Proceedings of the National Academy of Sciences was not, at least not for me. I had the opportunity to first be stunned by these data nearly a year ago, when the third author gave a talk at Harvard. It came up again when he gave another talk during the winter. (Oh, and I've known the first author since we sat next to each other during a graduate school application event at MIT, and we still regularly talk about visual memory).

So I and many others have had the opportunity to think through the implications for a long time now, which means it is very possible that there are labs which have already completed follow-up studies.

While this has nothing to do with the big story itself -- the sheer massiveness of visual memory elicited in this study -- I bring it up as an example of my point from last week: the fact that America is (for now) the center of the scientific world gives us tremendous institutional advantages, not the least of which is that it is much easier to stay fully up-to-date. If that mantle passes to another country, we will be the ones reading about old news only when it finally appears in print.

Parting Thoughts

If you yourself have done research like this, the first thing you probably wondered was where they got 3,000 carefully controlled visual images, not to mention all the test images.

Google Images, baby, Google Images. It still took a great deal of time, but as I hear the story told, the ability to download huge numbers of web images via Google was immensely helpful. This is just one more example of Web search as a tool for science.

Politicians who blame universities for their own failings

There has been some flak in Congress lately about the cost of tuition at the famous private universities. The primary senator behind the movement is Charles Grassley, a Republican of Iowa.

Tuition, room, board and fees at a university like Harvard are now hovering around $50,000. In a recent press release, Grassley writes that:

"The Congressional Research Service reports that, on the basis of mean household income of a household in the bottom fifth of the population, the price of college in 2005 was over 70 percent of the household's income."
Is there an easy solution that won't cost me anything?

Luckily, according to Grassley, there is: make schools spend more of their endowments. He notes that endowments are an awful lot like the funds of private foundations:

"In the 1960s, Congressman Wright Patman issued a series of reports, one of which included recommendations to limit foundation life to 25 years..."

So, colleges should only last for 25 years?

I'll give Grassley the benefit of the doubt and assume that's not where he was going with that quote (though there is no evidence to the contrary in the original document). In any case, he has a bigger problem: most schools don't have large endowments. Just 75 universities control 71% of all endowment assets. So that means the other 29% must cover the remaining 2,543 accredited four-year institutions in the US.

It's not clear just how well that will work.

Grassley also has a big conceptual problem. Universities invest their endowments and use the interest for operating costs. Costs go up each year. Inflation is one reason. Increasing numbers of students is another. Libraries must expand to incorporate new books. New departments (microbiology, ecological studies, Arabic, etc.) must be founded, and rarely are old ones (history, physics, literature) abandoned. A stagnant endowment is death to such a university.

Again, if your university has an endowment. So what about the rest?

What about the University of Iowa?

The flagship public university in Grassley's state is the University of Iowa. As a relatively wealthy public university, it has an endowment of almost $1,000,000,000. That seems like a lot, but it also has over 30,000 students, which gives the university an endowment of just over $32,000 per student.

That would only be a lot if the university spent it all this year, after which there wouldn't be any money for next year. Grassley wants universities to spend at least 5% of their endowment each year -- even in bad years in which they earn less than 5% interest on the endowment.

That comes out to $1,615 per student per year. The in-state tuition this year (not counting room and board) appears to be $6,554. So that kind of spending will make a dent.

Except that schools already rely heavily on their endowments. That's why they have them. I couldn't find numbers for the University of Iowa, but I have heard that many schools already spend over 4% of their endowments yearly. So let's assume the University of Iowa spends 4%. That means Grassley is calling for his home-state university to spend an extra $323 per student per year from its endowment -- or about the cost of 3-4 college textbooks.
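As a sanity check, the arithmetic above can be redone in a few lines of Python. The enrollment figure and the 4% current payout are assumptions (as in the text), so all the results are approximate:

```python
# Back-of-the-envelope check of the endowment arithmetic.
endowment = 1_000_000_000   # approximate University of Iowa endowment, in dollars
students = 31_000           # assumed enrollment ("over 30,000 students")

per_student = endowment / students       # endowment per student, roughly $32,000
mandated_5pct = 0.05 * per_student       # Grassley's proposed 5% annual floor
assumed_current = 0.04 * per_student     # assumed current 4% payout
extra = mandated_5pct - assumed_current  # the marginal effect per student per year

print(round(per_student), round(mandated_5pct), round(extra))
```

The marginal effect of the proposal -- the difference between the 5% floor and what's likely already being spent -- is only about one percent of the per-student endowment, a few hundred dollars a year.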

And remember, this is one of the country's richest universities. The vast majority of schools have far, far smaller endowments, if they have one at all.

Right on the fact. Wrong on the reason.

Governments have been steadily cutting funding for publicly-funded universities. I'm fortunate to be at an endowment-rich university, but my father is at a public institution in Michigan. The budget crisis there has caused steep cutbacks in funding ... and thus steep increases in tuition.

This is a story I've heard repeated at many universities across the country.

It's nice to be able to go to Harvard. But only about 2,000 students each year are accepted. I'm sure they'd appreciate the help with tuition, but it's not going to affect most Americans. The university tuition crisis is in the public universities, which have small endowments, if any.

But I suppose Grassley doesn't want to talk about that.

What if someone is watching?

I have heard a number of concerns about Web-based research. One of the most reasonable -- at least, the one that I find most compelling -- is that participants may be more motivated when an experimenter is physically present. This makes some sense: people might take a test more seriously if it is handed to them by a living person than if they run across it on Google. The presence of someone watching you provides a certain amount of social pressure.

The implications of this question go beyond Web-based experimenting to any type of work conducted "virtually."

There are some empirical reasons to suspect this might be a real concern. In the famous Milgram experiments, participants' behavior in a simulated torture scenario varied considerably depending on the appearance and actions of the experimenter.

That said, my Web-based experiments do not involve torture. Some are actually quite fun. So it is not at all clear whether the lack of a physical experimenter actually affects how the participant behaves.

Luckily, Heike Ollesch, Edgar Heineken, and Frank Schulte of the Universität Duisburg-Essen have looked into just this question. They ran a couple of simple experiments. In one, participants compared visual images. In another, they memorized lists of words. Some participants were recruited as part of a standard Web-based experiment. In control conditions, the same experiment was run either in the lab, with an experimenter present, or outside in public areas on campus, again with an experimenter present.

Participants did no better in the lab-based condition than in the Web-based condition, though in the word-memorization experiment, participants performed more poorly in the public areas (as might be expected), than in either the Web-based or lab-based conditions.

In a final, third experiment, participants described short videos. They actually produced more comprehensive descriptions in the Web-based condition than in the lab-based or public-area conditions.

Stanley Milgram (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371-378. DOI: 10.1037/h0040525

Heike Ollesch, Edgar Heineken, Frank P. Schulte (2006). Physical or virtual presence of the experimenter: Psychological online-experiments in different settings. International Journal of Internet Science, 1(1), 71-81.

Why it doesn't matter if America falls behind in Science

Earlier this year, an article in the New York Times argued that it doesn't matter that the US is losing its edge in science and research. In fact, the country can save considerable money by letting other countries do the hard work. The article went on to explain how this can be viewed as outsourcing: let other, cheaper countries do the basic research, and try to turn that research into products in the US.


The article quoted research published by the National Academy of Sciences, and while I fully recognize that they know more about this topic, have thought more about this topic, and are no doubt considerably smarter, I'm skeptical.

There are two problems I see. The first is that just because other countries are picking up the slack doesn't mean there isn't slack. The second is that I'm not convinced that, in the long term, allowing all the best, most cutting-edge research to take place in other countries is really economically sound.

Being a Good Citizen

The article in question seems to imply that there is some amount of research, X, that needs to be done. If other countries are willing to do it, then no more is needed.

To make this concrete: as long as one new disease is cured, say, every five years, there's simply no reason to invest any additional energy into curing diseases. That's enough. People who have some other disease that hasn't been cured yet can simply wait their turn.

The concept is clearest when it comes to disease, but I think the same argument applies everywhere else. Basic science is what gives us new technology, and technology has been humanity's method of improving our quality of life for at least a few million years. Perhaps some people think quality of life is improving fast enough -- or too fast, thank you -- but I, at least, would like my Internet connection to be a bit faster now rather than later.

The fact that China, Taiwan, Singapore & co. are stepping up to the plate is not a reason for us to go on vacation.

Can We Really be Competitive as a Backwater?

The article casts "outsourcing" science as good business by noting that America is still the best at turning science into products. So let other countries do the expensive investment into research -- we'll just do the lucrative part that comes later.

Do they think other countries won't catch on?

I have to imagine that Singapore and similar countries are investing in research because they want to make money. Which means they will want their share of the lucrative research-to-product business. So America's business plan, then, would have to be to try to keep our advantage on that front while losing our advantage on basic research.

This may well be possible. But it has some challenges. It's no accident that the neighborhood around MIT is packed with tech start-ups. I'm not a sociologist, but I can speculate on why that is. First, many of those tech start-ups are founded by MIT graduates. They aren't necessarily Boston natives, but having been drawn to one of the world's great research universities, they end up settling there.

Second, Flat World or not, there are advantages to being close to the action. Many non-scientists don't realize that by the time "cutting-edge" research is published, it is often a year or even several years old. The way to stay truly current is to chat with the researchers over coffee about what they are doing right now, not about what they are writing right now.

Third, science benefits from community. Harvard's biggest advantage, as far as I can tell, is the existence of MIT two miles down the road, and vice versa. Waxing poetic about the free exchange of ideas may sound a bit abstract, but it has a real impact. I have multiple opportunities each week to discuss my current projects with some of the best minds in the field, and I do better work for it.

In short, I think any country that maintains the world's premier scientific community is going to have impressive structural advantages when it comes to converting ideas into money.

That Said...

That said, I think there are two really useful ideas that come out of that article. The first is the challenge against the orthodoxy that strong science = strong economy. Without challenges like these, we can't home in on what exactly is important about funding basic research (not saying I've been successful here, but it is a start, at least). The second is that even if the US maintains its lead in science, that lead is going to shrink no matter what we do, so it's important to think about how to capitalize on discoveries coming in from overseas.

Political Note

Those who are concerned about basic research in the US should note that while John McCain does not list science funding as a priority on his website -- unless you count non-specific support of NASA -- and did not mention it in his convention speech, Barack Obama has done both (he supports doubling basic science funding).

Folks in Eastern Washington may be interested to know that a clinical psychologist is running for Congress against an incumbent. Though Mark Mays has been professionally more involved in treatment than in research, research is among his top priorities.

Research by one of the Progenitors of Web-based Research

As science goes, Web-based research is still fairly new. However, it is now at least a decade old, as the publication record of Ulf-Dietrich Reips shows. Reips was one of the first to turn to the Web and has been at the forefront of the field since then.

It appears that Reips and Uwe Matzat edit a yearly journal devoted to Web-based experiments: the International Journal of Internet Science. This journal is, fittingly, open access. Although the journal describes itself as "a peer reviewed open access journal for empirical findings, methodology, and theory of social and behavioral science concerning the Internet and its implications for individuals, social groups, organizations, and society," much of the research so far appears to concentrate on that middle goal: evaluation of Web-based experiments as a methodology (a project in which Reips has been involved in the past).

This is obviously important, since a number of people remain skeptical of Web-based research. A few of the papers that have come out already are worth mentioning and will appear in future posts.