Field of Science

Help! I need data!

Data collection keeps plugging along at GamesWithWords.org. Unfortunately, as usual, it's not the experiments for which I most need data that get the most traffic. Puntastic had around 200 participants in the last month. I'd like to get more than that, and I'd like to get more than that in all my experiments. But if I had to choose one to get 200 participants, it would be The Video Test, which only got 17.

The Video Test is the final experiment in a series that goes back to 2006. We submitted a paper in 2007, which was rejected. We did some follow-up experiments and resubmitted. More than once. Personally, I think we've simply had bad luck with reviewers, since the data are pretty compelling. Anyway, we're running one last monster experiment, replicating all our previous conditions every which way. It needs about 400 participants, though for really beautiful data I'd like about 800. We've got 140.

As I said, recruitment has been slow for this experiment.

So... if you have never done this experiment before (it involves watching a video and taking a memory test), please do. I'd love to get this project off my plate.

I liked "Salt," but...

What's with movies in which fMRI can be done remotely? In an early scene, the CIA does a remote brain scan of someone sitting in a room. And it's fully analyzed, too, with ROIs shown. I want that technology -- it would make my work so much easier!

UPDATE: I'm not the only one with this complaint. Though Popular Mechanics goes a bit easy on the movie by saying fMRI is "not quite at the level Salt portrays." That's a bit like saying space travel is not quite at the level Star Trek portrays. There may someday be a remote brain scanner, but it won't be based on anything remotely like existing fMRI technology, which requires incredibly powerful, supercooled and loud magnets. Even if you solved the noise problems, there's nothing to be done about the fact that the knife embedded in the Russian spy's shoe (yes -- it is that kind of movie) would have gone flying to the center of the magnetic field, along with many of the other metal objects in the room.

What are the best cognitive science blogs?

If you look to your right, you'll see I've been doing some long-needed maintenance to my blog roll. As before, I'm limiting it to blogs that I actually read (though not all the blogs I read), and I have it organized by subject matter. As I did this, I noticed that the selection of cognitive science and language blogs is rather paltry. Most of the science blogs I read -- including many not included in the blog rolls -- are written by physical scientists.

Sure there are more of them than us, but even so it seems there should be more good cognitive science and language blogs. So I'm going to crowd-source this and ask you, dear readers, who should I be reading that I'm not?

Language Games

Translation Party

Idea: type in a sentence in English. The site then queries Google Translate, translating into Japanese and then back again until it reaches "equilibrium," where the sentence you get out is the sentence you put in. Some sentences just never converge. Ten points to whoever finds the most interesting non-convergence.

Sounds of Silence

My lament that, with regard to discussions of education reform, any trace of small liberal arts colleges has disappeared into the ether appears to have, itself, disappeared into the ether. Seriously, readers, I expected some response to that one. There are parts of my post even I disagree with.

No tenure, no way!

The New York Times is carrying an interesting but misguided discussion of tenure today. As usual, the first commentator warns that without tenure, academic freedom will die:
As at-will employees, adjunct faculty members can face dismissal or nonrenewal when students, parents, community members, administrators, or politicians are offended at what they say. If you can be fired tomorrow, you do not really have academic freedom. Self-censorship often results. 
Mark Taylor of Columbia replies, essentially, "oh yah?"
To those who say the abolition of tenure will make faculty reluctant to be demanding with students or express controversial views, I respond that in almost 40 years of teaching, I have not known a single person who has been more willing to speak out after tenure than before.
Instead, tenure induces stasis, a point with which Richard Vedder, an economist at Ohio University, agrees:
The fact is that tenured faculty members often use their power to stifle innovation and change.
Money

You might, reading through these discussions, almost think that universities have been slowly weakening the tenure system because they want to increase diversity, promote a flexible workforce, and reduce the power of crabby old professors. Maybe some administrators do feel that way. But lurking behind all of this discussion is money. Here's Taylor:
If you take the current average salary of an associate professor and assume this tenured faculty member remains an associate professor for five years and then becomes a full professor for 30 years, the total cost of salary and benefits alone is $12,198,578 at a private institution and $9,992,888 at a public institution.
I'm not sure where he's getting these numbers. The number at Harvard for the same period is $6,320,500 for salary alone. Assuming benefits cost as much as the salary alone gets us up to our $12,000,000, but that's for Harvard, not the average university. Perhaps Taylor is assuming the professor starts today and includes inflation in future salaries, but 35 years of inflation is a lot. I'm using present-day numbers and assuming real salaries remain constant.
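For what it's worth, the arithmetic itself is simple. The sketch below uses made-up salary figures -- not Harvard's or anyone's actual numbers -- just to show the shape of Taylor's calculation in constant present-day dollars.

```python
# 5 years as an associate professor plus 30 years as a full professor,
# with no inflation adjustment. Both salary figures are hypothetical.
assoc_salary = 120_000   # hypothetical annual associate-professor salary
full_salary = 190_000    # hypothetical annual full-professor salary

salary_total = 5 * assoc_salary + 30 * full_salary
# Assuming benefits cost roughly as much as salary doubles the figure:
with_benefits = 2 * salary_total
```

On these (invented) numbers, salary alone comes to $6.3 million and salary plus benefits to $12.6 million -- the point being that benefits assumptions alone can double the headline figure.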

In any case, money seems to be the real factor, mentioned by more or less all the contributors. Here's Vedder:
My academic department recently granted tenure to a young assistant professor. In so doing, it created a financial liability of over two million dollars, because it committed the institution to providing the individual lifetime employment. With nearly double digit unemployment and universities furloughing and laying off personnel, is tenure a luxury we can still afford?
Adrianna Kezar of USC notes that non-tenured faculty are often not given offices or supplies, which presumably also saves the university money.

Professors make choices, too.

So universities save a lot of money by eliminating tenure. And certainly universities need to find savings where they can. What none of the contributors to the discussion acknowledge, beyond an oblique aside by Vedder, is that tenure has a financial value to professors as well as universities. Removing tenure in a sense is a pay cut, and both present and potential academics will respond to that pay cut.

Becoming a professor is not a wise financial decision. The starting salary of a lawyer leaving a top law school is greater than what most PhDs from the same schools will make at the height of their careers should they stay in academia. And lawyers' salaries, as I'm often reminded, can be similarly dwarfed by people with no graduate education who go straight into finance.

Most of us who nonetheless go into academia do so because we love it. The point is that we have options. Making the university system less attractive will mean fewer people will want to go into it. It's really that simple.

Garbage in, Garbage out

While watching television, have you ever had a fatal heart attack?

If you answered "yes" to this question, you would have been marked as a "bad participant" in Experimental Turk's recent study. The charitable assumption would be that you weren't paying attention. Importantly for those interested in using Amazon Mechanical Turk for research, participants recruited through AMT were no more likely to answer "yes" than participants tested in a traditional lab-based setting (neither group was likely to say "yes").

It's a nice post, though I think that Experimental Turk's analysis is over-optimistic, for reasons that I'll explain below. More interesting, though, is that Experimental Turk apparently does not always include such catch trials in their experiments. In fact, they found the idea so novel that they cited a 2009 paper from the Journal of Experimental Social Psychology that "introduces" the technique -- which means the editors and reviewers at this journal were similarly impressed with the idea.

That's surprising.

Always include catch trials

Including catch trials is often taught as basic experimental method, and for good reason. As Experimental Turk points out, you never know if your participants are paying attention. Inevitably, some aren't -- participants are usually paid or given course credit for participation, so they aren't always very motivated. Identifying and excluding the apathetic participants can clean up your results. But that's not the most important reason to include catch trials.

Even the best participant may not understand the instructions. I have certainly run experiments in which the majority of the participants interpreted the instructions differently from how I intended. A good catch trial is designed such that the correct answer can only be arrived at if you understand the instructions. It is also a good way of making sure you're analyzing your data correctly -- you'd be surprised how often a stray negative sign worms its way into analysis scripts.
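In practice, the exclusion step is trivial to implement. Here's a minimal sketch, with an invented catch question and invented data (the field names are made up for illustration):

```python
# Exclude participants who fail an obvious-answer catch trial.
participants = [
    {"id": 1, "had_fatal_heart_attack": "no", "score": 0.84},
    {"id": 2, "had_fatal_heart_attack": "yes", "score": 0.51},  # failed catch
    {"id": 3, "had_fatal_heart_attack": "no", "score": 0.77},
]

def passed_catch(p):
    # The only sane answer to "have you ever had a fatal heart attack?"
    return p["had_fatal_heart_attack"] == "no"

kept = [p for p in participants if passed_catch(p)]
excluded = [p for p in participants if not passed_catch(p)]
```

Reporting how many participants landed in `excluded`, and why, is the part that often goes missing from write-ups.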

Sometimes participants also forget instructions. In a recent study, I wasn't finding a difference between the control and experimental groups. I discovered in debriefing that most of the participants in the experimental group had forgotten the key instruction that made the experimental group the experimental group. No wonder there wasn't a difference! And good thing I asked. 

The catch trial -- the question with the obvious answer -- is just one tool in a whole kit of tricks used to validate one's results. There are other options, too. In reading studies, researchers often ask comprehension questions -- not because the answers themselves matter (the real interest is in what the participants do while reading), but simply to prove that the participants in fact did read and understand the material. 

Similar is the embedded experiment -- a mini experiment embedded into your larger experiment, the only purpose of which is to replicate a well-established result. For instance, in a recent experiment I included a vocabulary test (which you can also find in this experiment I'm running with Laura Germine at TestMyBrain.org). I also asked the participants for their SAT scores (these were undergraduates), not because I cared about their scores per se, but so that I could show that their Verbal SAT scores correlated nicely with performance on the vocabulary test (Math SAT scores less so), helping to validate our vocab test.
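The validation step amounts to computing a correlation between the embedded measure and the external one. A sketch with invented numbers (not my actual data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

verbal_sat = [620, 700, 540, 760, 680]        # hypothetical self-reports
vocab_score = [0.71, 0.83, 0.60, 0.92, 0.78]  # hypothetical test scores

r = pearson_r(verbal_sat, vocab_score)
# A strong positive r would help validate the vocab test; a weak or
# negative one would suggest something is wrong with the instrument.
```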


Beyond Surveys

Although I described catch trials mostly in terms of survey-format studies, the same techniques can be embedded into nearly any experiment. I've used them for reading-time, eye-tracking and ERP experiments as well. The practice isn't even specific to psychology/cognitive science. During my brief sojourn in a wet lab in high school, my job was to help genotype knock-out mice to make sure that the genes in question really were missing from the relevant mice and not from the control mice. It wouldn't have occurred to the PIs in that lab to just assume the knock-out manipulation worked. Fail that, and none of the rest of the experiment is interpretable.

A version of the catch trial is even seen in debugging software, where the programmer inserts code that isn't relevant to the function of the program per se, but the output of which helps determine whether the code is doing what it's supposed to.
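For instance -- a hypothetical illustration, not from any particular program -- a sanity check with a known answer can be dropped straight into an analysis script:

```python
def normalize(scores):
    """Rescale scores so they sum to 1."""
    total = sum(scores)
    return [s / total for s in scores]

# The software analogue of a catch trial: inputs with obvious correct
# outputs, inserted purely to check that the routine behaves as expected.
# If these fail, every downstream result is suspect.
assert normalize([2, 2]) == [0.5, 0.5]
assert abs(sum(normalize([3, 1, 4])) - 1.0) < 1e-9
```

Like the catch trial, these checks contribute nothing to the analysis itself; their only job is to catch the stray negative sign before it contaminates the results.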

It is true that some experiments resist checks of this sort. I have certainly run experiments where by design I couldn't easily confirm that the participants understood the experiment, were paying attention, etc. But that is better avoided if possible -- which is why when I don't see such checks in an experimental write-up, I assume either (a) the checks were performed but deemed too unimportant/obvious to mention, or (b) no such checks were run, in which case I take the results with a grain of salt.

An Odd Omission

If catch trials are a basic aspect of good experimental design, how is it that Experimental Turk and the Journal of Experimental Social Psychology didn't know about them? I'm not sure. Part of it may be due to how experimental design is taught. It's not something you look up in an almanac, and though there are classes on technique (at least in psychology departments), they aren't necessarily that helpful since there are hundreds of types of experiments out there, each of which has its own quirks, and a class can only cover a few.

At least in my experience, experimental design is learned through a combination of the apprenticeship method (working with professors -- or, more often, more experienced graduate students) and figuring it out for yourself. The authors at Experimental Turk, it turns out, come from fields relatively new to experimental design (business, management, and political science), so it's possible they had less access to such institutional knowledge. 

As for the Journal of Experimental Social Psychology... I'm not a social psychologist, and I hesitate to generalize about the field. A lot of social psychology uses questionnaires as instruments. Researchers go to a great deal of trouble to validate the questionnaires -- showing that they are predictive of results on other tests or questionnaires, showing that the questionnaires have good test-retest reliability, etc. Many of the techniques they use are ones I would like to learn better. But I haven't ever run across one (again, in my limited experience) that actually includes catch trials. Which in itself is interesting.

A clever idea 

I should add that while Experimental Turk cites said journal article for suggesting using questions with obvious answers, that's not actually what the paper suggests. Rather, it suggests using instructions telling participants to ignore certain questions. For instance: 
Sports Participation
Most modern theories of decision making recognize the fact that decisions do not take place in a vacuum. Individual preferences and knowledge, along with situational variables can greatly impact the decision process. In order to facilitate our research on decision making we are interested in knowing certain factors about you, the decision maker. Specifically, we are interested in whether you actually take the time to read the directions; if not, then some of our manipulations that rely on changes in the instructions will be ineffective. So, in order to demonstrate that you have read the instructions, please ignore the sports item below, as well as the continue button. Instead, simply click on the title at the top of this screen (i.e., "sports participation") to proceed to the next screen. Thank you very much.
That's a clever idea. One of my elementary school teachers actually wrote a whole test with instructions like that to teach the class a lesson about reading instructions carefully (and it worked -- I still do!). So it's a good idea I've never seen used in an experimental setting before, but that doesn't mean it hasn't been used. In any case, the discussion in the paper doesn't mention catch trials or other methods of validating data, so it's hard to know whether they did a thorough literature search.

More training

A bad movie can still make entertaining watching. A bad experiment is irredeemable. If the participants didn't understand the instructions, nothing can be gleaned from the data. And there are so many ways to run bad experiments -- I know, because I've employed many of them myself. There are a lot of datasets out there in psychology that have proven, shall we say, resistant to replication. Some of this has to be due to the fact that experimental design is not as good as it could and should be. 

Addendum

As I mentioned higher up, I think Experimental Turk is overly optimistic about the quality of data from AMT. I've run a couple dozen experiments on AMT now, and the percentage of participants that fail the catch trials varies a great deal, from as few as 0% to as many as 20-30%. I haven't made a systematic study of it, but there seem to be a number of contributing factors, some of which are general to all experimental venues (length of the experiment, how interesting it is, how complicated the instructions are) and some of which are specific to AMT (the more related HITs you post, the more attractive a target the experiment is to spammers).

All the more reason to always include catch trials.


-----------
Oppenheimer, D. M., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45, 867-872.

Sell off Harvard Medical School!

Writing in the Chronicle of Higher Education, Andrew Hacker and Claudia Dreifus contend
Colleges are taking on too many roles and doing none of them well. They are staffed by casts of thousands and dedicated to everything from esoteric research to vocational training—and have lost track of their basic mission to challenge the minds of young people... Spin off medical schools, research centers, and institutes... For people who want to do research, plenty of other places exist—the Brookings Institution, the Rand Corporation, the Howard Hughes Medical Institute—all of which do excellent work without university ties.
Never mind that Howard Hughes is intimately tied to the present university system; let's say we're in favor: sell off Harvard Medical School, Harvard Law School, Harvard Kennedy School, etc., until all that's left is the College. That'd make it what? -- Wellesley + men? (This question is meant to be snarky, but not anti-Wellesley, for which I have the utmost respect, as will be clear in the rest of the post.)


It's the money, stupid.


The blogosphere has been rising to the defense of the research university, with posters and commenters focusing on the (alleged) claim that universities use research dollars to fund the loss-leading undergraduate programs. Here's Mike the Mad Biologist:
[O]n a federal grant, usually somewhere between 30-40% of the total grant award doesn't go to the researcher for research costs (salaries, supplies, etc.), but to the institution. Now, some of that money is spent on actual administrative costs, but the rest goes to the university*. So if the university spins off $50 million, or $100, or, in the case of the University of Iowa, $169,175,021 of NIH funding alone (never mind other government sources), that's tens of millions of dollars that have to be recovered. Since I've called for more of a research institute model, I'm not opposed to spinning off research institutes. But I have no idea how universities that receive a lot of research dollars will make up the revenue shortfall.
There's an easy way of answering the question: write to any of the numerous, high-caliber, exclusively-undergraduate institutions that make the American education system so interesting: Wellesley, Swarthmore, Amherst, Grinnell, Oberlin, etc. For the last 150-200 years, such schools have focused on teaching, and teaching caliber is weighted heavily in tenure decisions. I had phenomenal professors. To name a few, Arlene Forman could have taught a turnip to speak Russian, and Jim Walsh delivered spellbinding lectures despite unpromising subject material (e.g., linear algebra). People who had never even attended Ron DiCenzo's classes nonetheless raved about the vicarious experience.

Research University vs. Liberal Arts College

I loved the small liberal arts college experience and wouldn't have traded it for anything. But I have friends who feel the same way about the large research university: the inspirational presence of movers and shakers in the research world, they feel, is irreplaceable. I'm skeptical, but the great thing about the American education system is that it provides both options, something that many (all?) other countries lack. The only distressing thing is that so many students -- along with the Chronicle of Higher Education and essentially every blogger I read and all their commenters -- seem completely unaware that an alternative to the research university exists.

America has research-only institutes. It has undergraduate-only schools. And it has that fabulous hybrid institution: the research university. Arguing that we need to start founding undergraduate-only schools is like saying America really needs subways. Maybe we need more subways (I think we do!), but claiming they don't exist is just ignant, and it's an insult to the ones that exist and the people who made them possible.

Confusing verbs

The first post on universal grammar generated several good questions. Here's an extended response to one of them:
You said that a 1990's theory was dead wrong because sometimes emotion verbs CAN be prefixed with -un. Then you give examples of adjectives, not verbs, that have been prefixed: unfeared, unliked, unloved. I know these words are also sometimes used as verbs, but in the prefixed versions they are clearly adjectives. 
The theory I'm discussing wanted to distinguish between emotion verbs that have experiencers as subjects (fear, like) and those that have experiencers as objects (frighten, confuse). The claim was that the latter set of verbs are "weird" in an important way, one effect of which is that they can't have past participles.

This brings up the obvious problem that "frighten" and "confuse" appear to have past participles: "frightened" and "confused". The author then argued that these are not actually past participles -- they're adjectives. The crucial test is that you can add "un" to an adjective but not a participle (or so it's claimed). Thus, it was relevant that you can say "unfrightened" and "unconfused", suggesting that these are adjectives, but you can't say "unfeared" or "unliked", suggesting that these are participles, not adjectives.

The problem mentioned in the previous post was that there are also subject-experiencer verbs that have participles which can take the "un" prefix, such as "unloved". There are also object-experiencer verbs that have participles which can't be "un" prefixed, like "un-angered" (at least, it sounds bad to me; try also "ungrudged", "unapplauded", or "unmourned"). So the "un" prefixation test doesn't reliably distinguish between the classes of verbs. This becomes apparent once you look through a large number of both types of verbs (here are complete lists of subject-experiencer and object-experiencer verbs in English).

There is a bigger problem, which is that the theory assumes a lack of homophones. That is, there could be two words pronounced like "frightened" -- one is a past participle and one is an adjective. The one that can be unprefixed is the adjective. So the fact that "unfrightened" exists as a word doesn't rule out the possibility that "frighten" has a past participle.

To be completely fair to the theory, the claim that object-experiencer verbs are "weird" (more specifically, that they require syntactic movement) could still be right (though I don't think it is). The point here was that the specific test proposed ("un" prefixation) turned out to provide different results. It actually took some time for people to realize this, and you still see the theory cited. The point is that getting the right analysis of a language is very difficult, and typically many mistakes are made along the way.

Universal meaning

My earlier discussion of Evans and Levinson's critique of universal grammar was vague on details. Today I wanted to look at one specific argument.

Funny words

Evans and Levinson briefly touch on universal semantics (variously called "the language of thought" or "mentalese"). The basic idea is that language is a way of encoding our underlying thoughts. The basic structure of those thoughts is the same from person to person, regardless of what language they speak. Quoting Pinker, "knowing a language, then, is knowing how to translate mentalese into strings of words and vice versa. People without a language would still have mentalese, and babies and many nonhuman animals presumably have simpler dialects."

Evans and Levinson argue that this must be wrong, since other languages have words for things that English has no word for, and similarly English has words that don't appear in other languages. This is evidence against a simplistic theory on which all languages have the same underlying vocabulary and differ only on pronunciation, but that's not the true language of thought hypothesis. Many of the authors cited by Evans and Levinson -- particularly Pinker and Gleitman -- have been very clear about the fact that languages pick and choose in terms of what they happen to encode into individual words.

The Big Problems of Semantics

This oversight was doubly disappointing because the authors didn't discuss the big issues in language meaning. One classic problem, which I've discussed before on this blog, is the gavagai problem. Suppose you are visiting another country where you don't speak a word. Your host takes you on a hike, and as you are walking, a rabbit bounds across the field in front of you. Your host shouts "gavagai!" What should you think gavagai means?

There are literally an infinite number of possibilities, most of which you probably won't consider. Gavagai could mean "white thing moving," or "potential dinner," or "rabbit" on Tuesdays but "North Star" any other day of the week. Most likely, you would guess it means "rabbit" or "running rabbit" or maybe "Look!" This is a problem to solve, though -- given the infinite number of possible meanings, how do people narrow down on the right one?

Just saying, "I'll ask my host to define the word" won't work, since you don't know any words yet. This is the problem children have, since before explicit definition of words can help them learn anything, they must already have learned a good number of words.

One solution to this problem is to assume that humans are built to expect words of certain sorts and not others. We don't have to learn that gavagai's meaning doesn't change based on the day of the week, because we assume that it doesn't.

More problems

That's one problem in semantics that is potentially solved by universal grammar, but not the only one. Another famous one is the linking problem. Suppose you hear the sentence "the horse pilked the bear". You don't know what pilk means, but you probably think the sentence describes the horse doing something to the bear. If instead you find out it describes a situation in which the bear knocked the horse flat on its back, you'd probably be surprised.

That's for a good reason. In English, transitive verbs describe the subject doing something to the object. That's not just true of English, it's true of almost every language. However, there are some languages where this might not be true. Part of the confusion is that defining "subject" and "object" is not always straightforward from language to language. Also, languages allow things like passivization -- for instance, you can say John broke the window or The window was broken by John. When you run into a possible exception to the subject-is-the-doer rule, you want to make sure you aren't just looking at a passive verb.

Once again, this is an example where we have very good evidence of a generalization across all languages, but there are a few possible exceptions. Whether those exceptions are true exceptions or just misunderstood phenomena is an important open question.

-------
Evans, N., & Levinson, S. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5). DOI: 10.1017/S0140525X0999094X


Do Language Universals Exist?

Is there an underlying structure common to all languages? There are at least two arguments in favor of that position. One is an in principle argument, and one is based on observed data.

Since Chomsky, many researchers have noted that language would be impossible to learn if one approached it without preconceptions. It's like solving for 4 variables with only 3 equations -- for those of you who have forgotten your math, the equations don't pin down a unique answer. Quine pointed out the problem for semantics, but the problem extends to syntax.
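For the mathematically inclined, here is a minimal instance of the analogy: three equations in four unknowns.

```latex
% Three equations, four unknowns: the system is underdetermined.
\[
\begin{aligned}
x + y &= 3 \\
y + z &= 5 \\
z + w &= 4
\end{aligned}
\]
% For any choice of w, setting z = 4 - w, y = 1 + w, x = 2 - w
% satisfies all three equations, so the data alone cannot single
% out one solution -- just as the input alone cannot single out
% one grammar without prior constraints.
```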

The data-driven argument is based on the observation that diverse languages share many properties. All languages, it is claimed, have nouns and verbs. All languages have consonants and vowels. All languages put agents (the do-ers; Jane in Jane broke the window) in subject position and patients (the do-ees; the window in Jane broke the window) in object position. And so on. (Here's an extensive list.)

Though many researchers subscribe to this universal grammar hypothesis, it has always been controversial. Last year, Evans and Levinson published an extensive refutation of the hypothesis in Behavioral and Brain Sciences. They don't tackle the in principle argument (it's actually tough to argue against, since it turns out to be logically necessary), but they do take issue with the data-based argument.

Rare languages

Evans and Levinson point out that at best 10% of the world's 7,000 or so languages have been studied in any great detail, and that the bulk of all work on language has focused on English. They claim that researchers only believe in linguistic universals because they've only looked at a relatively small number of often closely-related languages, and they bring up counter-examples to proposed universals found in obscure languages.

This argument cuts both ways. The correct characterization of a language is very, very hard. Much of the work I have been doing lately has been an attempt to correctly characterize the semantics of about 300 related verbs in English. Hundreds of papers have been written about these verbs over the last half-century. Many of them have turned out to be wrong -- not because the researchers were bad, but because the problem is hard.

That's 300 verbs in the most-studied language on the planet, and we still have work to do. Evans and Levinson are basing their arguments on broad-scale phenomena in extremely rare, poorly-studied languages.

A friend of a friend told me...

The rare languages that Evans and Levinson make use of are not -- as they readily acknowledge -- well-understood. In arguing against recursion as a linguistic universal, they bring up Piraha, a language spoken in a handful of villages deep in the Amazon. Without discussing recursion in detail, the basic claim is that there are sentences that are ungrammatical in Piraha, and these sentences are ungrammatical because they require recursion.

To my knowledge, there is one Spanish-Piraha bilingual speaker, in addition to two English-speaking missionaries who, as adults, learned Piraha. The claim that Piraha doesn't have recursion is based on the work of one of those missionaries. So the claim that sentences with recursion are ungrammatical in Piraha rests on a limited number of observations. It's not that I don't trust that particular researcher -- it's that I don't trust any single study (including my own), because it's easy to make mistakes.

Looking back at English, I study emotion verbs in which the subject of the verb experiences an emotion (e.g., fear, like, love). A crucial pillar of one well-known theory from the 1990s was that such verbs can't be prefixed with "un". That is, English doesn't have the words unfeared or unliked. While I agree that these words sound odd, a quick Google search shows that unfeared and unliked are actually pretty common. Even more problematic for the theory, unloved is a perfectly good English word. In fact, many of these verbs do allow "un" prefixation. The author, despite being an experienced researcher and a native speaker of English, was just wrong.

Even assuming that you are correct in claiming that a certain word or sentence doesn't appear in a given language, you could be wrong about why. Some years ago, Michael Tomasello (and others) noticed that certain constructions are rarer in child speech than one might naively expect. He assumed this was because the children didn't know those constructions were grammatical. For instance, in inflected languages such as Spanish or Italian, young children rarely use any single verb in all of its possible forms. A number of people (e.g., Charles Yang) have pointed out that this assumes that the children would want to say all those words. Take a look at this chart of all the forms of the Spanish verbs hablar, comer and vivir. The child might be excused for never using the form habríamos hablado ("we would have spoken") -- that doesn't mean she doesn't know what it is.

In short, even in well-studied languages spoken by many linguists, there can be a lot of confusion. This should give us pause when looking at evidence from a rare language, spoken by few and studied by fewer.

Miracles are unlikely, and rare

Some centuries ago, David Hume got annoyed at people claiming God must exist, otherwise how can you explain the miracles recorded in the Bible? Hume pointed out that by definition, a miracle is something that is essentially impossible. As a general rule, seas don't part, water doesn't turn into wine, and nobody turns into a pillar of salt. Then consider that any evidence you have that a miracle did in fact happen could be wrong. If a friend tells you they saw someone turn into a pillar of salt, they could be lying. If you saw it yourself, you could be hallucinating. Hume concluded that however strong your evidence that a miracle happened, it can never be as strong as the extreme unlikelihood of the miracle actually happening -- and, in any case, the chance that the Bible is wrong is far higher than the chance that Moses in fact did part the Sea of Reeds.

(For those of you who are worried, this isn't necessarily an argument against the existence of God, just an argument against gullibility.)

Back to the question of universals. Let's say you have a candidate linguistic universal, such as recursion, that has shown up in a large number of unrelated and well-studied languages. These facts have been verified by many, many researchers, and you yourself speak several of the languages in question. So the evidence that this is in fact a linguistic universal is very strong.

Then you come across a paper that claims said linguistic universal doesn't apply in some language X. Either the paper is right, and you have to toss out the linguistic universal, or it's wrong, and you don't. Evans and Levinson err on the side of tossing out the linguistic universal. Given the strength of evidence in favor of some of these universals, and the fact that the counter-examples involve relatively poorly-understood languages, I think one might rather err on the other side. As they say, extraordinary claims require extraordinary evidence.

The solution

Obviously, the solution is not to say something about "extraordinary claims" and wander on. Evans and Levinson's paper includes a plea to researchers to look beyond the usual suspects and start doing more research on distant languages. I couldn't agree more, particularly as many of the world's languages are dying and the opportunity to study them is quickly disappearing.

-------
Evans, N., & Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5). DOI: 10.1017/S0140525X0999094X

Friends of science in government

The Democratic Congress continues its support of American basic research. The House subcommittee recommended a 7.2% increase for NSF in the coming year, despite the general belt-tightening hysteria. It's not as much as is needed, but it's still a nice change from 2001-2008. Hopefully it'll survive the rest of the legislative process.