Field of Science

Predicting my h-index

A new article in Nature presents a model for predicting a neuroscientist's future h-index based on current output. For those who don't know, the h-index is the largest N such that you have N papers, each of which has at least N citations. The model makes its prediction based on your current h-index, your total number of published articles, the number of years since your first article, the total number of distinct journals you have published in, and the number of your papers in Nature, Science, Nature Neuroscience, PNAS, and Neuron.
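For concreteness, here's a minimal sketch (my own illustration, not taken from the article) of how an h-index can be computed from a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the largest N such that N papers each have at least N citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```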

I'm not a neuroscientist (though I *am* in Neurotree, upon which the data are based), so I figured I'd see what it predicts for me. I ran into a problem right away, though: how do we count papers? Does my book chapter count? How about two editorials that were recently published (that is, non-empirical papers appearing in empirical journals)? What about two papers that are in press but not yet published?

If we are conservative and count only empirical papers that are currently in print, my predicted h-index in 2022 is 12.

If we count everything I've published -- that is, including papers in press, book chapters, and non-empirical papers -- things improve somewhat (apparently it's a good thing that almost everything I publish appears in a different outlet).


Interestingly, if I pretend that I currently have an h-index of 9 (that is, that all my papers have been cited at least 9 times), it doesn't do as much good as you might expect: I've increased my current h-index by 6 but my predicted 2022 h-index by only 3.

I guess the model has discovered regression to the mean.

(BTW, I've noticed that neuroscientists really like the h-index, probably because they cite each other so much. H-indexes in other fields, such as -- to take a random example -- psychology, tend to be much lower.)


Strange language fact of the day

Apparently in some languages/cultures, it is common to call an infant "Mommy" -- even a boy infant. I am told by reliable sources that this is true of Bengali and of Tzez. Reportedly, Bangladeshi immigrants try to import this into English and get weird looks at the daycare.

I am told that this is actually relatively common and appears in many languages. And there are some phenomena in English that aren't so different. You can, for instance, say the pot is boiling when you in fact mean that the water in the pot is boiling, not the pot itself. You can ask for someone's hand in marriage, even though you probably want the entire person, not just the hand. So words can sometimes stand in for other words.

It still blows my mind, though. And I'd love to hear what a Whorfian would have to say about it.

Around the Internet: What you missed last week (9/17/2012 edition)

Chomsky
OK, not technically last week, but here's a longish post critiquing Chomsky and a much longer, heated discussion in the comments, from BishopBlog.

Replication
A nice editorial on the importance of replication as a way of dealing with fraud.

The New Yorker Still Hates Science (esp. Evolution)
When I first heard the claim that the New Yorker was fundamentally anti-science, it came as a surprise. Then I thought back through what they publish, and it became less surprising. Now, reading this out-of-touch, anti-evolution tirade isn't surprising at all (my favorite part is where Gottlieb writes that understanding evolution is superfluous and a waste of time).

Pricing conundrum

Before I went to Riva del Garda for this year's AMLaP, I picked up a travel guide on my Kindle. (If only such things had existed the years I backpacked in Eurasia. My strongest memories are of how much my backpack weighed. Too many books.)

Oddly, the Lonely Planet Italian Lakes Region guide is the exact same price as the whole Italy guide. These local guides tend to be glorified excerpts of the country book, and since they both weigh the same...

Estimating replication rates in psychology

The Open Science Collaboration's interim report, which will come out shortly in Perspectives on Psychological Science, is available. We nearly pulled off the physics trick of having a paper where the author list is longer than the paper itself. I think there are nearly 70 of us (if you scroll down, you'll find me in the H's).

The abstract says it all:
Reproducibility is a defining feature of science. However, because of strong incentives for innovation and weak incentives for confirmation, direct replication is rarely practiced or published. The Reproducibility Project is an open, large-scale, collaborative effort to systematically examine the rate and predictors of reproducibility in psychological science. So far, 72 volunteer researchers from 41 institutions have organized to openly and transparently replicate studies published in three prominent psychological journals from 2008. Multiple methods will be used to evaluate the findings, calculate an empirical rate of replication, and investigate factors that predict reproducibility. Whatever the result, a better understanding of reproducibility will ultimately improve confidence in scientific methodology and findings.
If you are interested in participating, there is still time. Go to the website for more information.