Field of Science

New Grad School Rankings Don't Pass the Smell Test

The more I look at the new graduate school rankings, the more deeply confused I am. Just after publishing my last post, it dawned on me that something was seriously wrong with the publications-per-faculty data. Looking again at the Harvard data, the spreadsheet claims 2.5 publications per faculty for the period 2000-2006. I think this is supposed to be per faculty per year, though that's not entirely clear. As I'll show below, there's no way that number can be correct.

First, though, here's what the report says about how the number was calculated:
Data from the Thomson Reuters (formerly Institute for Scientific Information) list of publications were used to construct this variable. It is the average over seven years, 2000-2006, of the number of articles for each allocated faculty member divided by the total number of faculty allocated to the program. Data were obtained by matching faculty lists supplied by the programs to Thomson Reuters and cover publications extending back to 1981. For multi-authored articles, a publication is awarded for each author on the paper who is also on a faculty list. 
For computer science, refereed papers from conferences were used as well as articles. Data from résumés submitted by the humanities faculty were also used to construct this variable. They are made up of two measures: the number of published books and the number of articles published during the period 1986 to 2006 that were listed on the résumé. The calculated measure was the sum of five times the number of books plus the number of articles for each allocated faculty member divided by the faculty allocated to the program. In computing the allocated faculty to the program, only the allocations of the faculty who submitted résumés were added to get the allocation.
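
If I'm reading that right, the core arithmetic is simple: credit each listed faculty member with their article count, sum, divide by the number of allocated faculty, and average over the seven-year window. Here's a minimal sketch of that formula -- my reading of the description, not the report's actual code; the numbers are made up for illustration and I'm ignoring fractional allocations across programs:

```python
# Minimal sketch of the publications-per-faculty measure as described above.
# Assumes each faculty member is fully allocated to one program; the example
# article counts are invented purely for illustration.

def pubs_per_faculty_per_year(article_counts, n_years=7):
    """Total articles credited to listed faculty, divided by the number of
    allocated faculty, averaged over the seven-year window (2000-2006)."""
    return sum(article_counts) / len(article_counts) / n_years

# Hypothetical 4-person program with 35, 14, 70, and 21 articles over 7 years:
print(pubs_per_faculty_per_year([35, 14, 70, 21]))  # -> 5.0
```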
The actual data

I took a quick look through the CVs of a reasonable subset of faculty who were at Harvard during that time period. Here are their approximate publications per year (modulo any counting errors on my part -- I was scanning quickly). I should note that some faculty list book chapters separately on their CVs, but some do not. Excluding book chapters would bring some of these numbers down, but only slightly.

Caramazza 10.8
*Hauser 13.6
Carey 4.7
Nakayama 5.9
Schacter 14.6
Kosslyn 10.3
Spelke 7.7
Snedeker 1.1
Wegner 2.3
Gilbert 4.0


One thing that pops out is that people doing work involving adult vision (Caramazza, Nakayama, Kosslyn) publish a lot more than the developmental folk (Carey, Spelke, Snedeker). The other is that publication rates are very high across the board -- the exceptions being my fabulous advisor, who was not a fast publisher in her early days but has been picking up speed since 2006, and Wegner, who for some reason didn't publish any papers in 2000-2002.

What on Earth is going on? I have a couple of hypotheses. First, I know the report used weights when calculating composite scores for the rankings, so perhaps 2.5 reflects a weighted number rather than an actual count of publications. That would make sense, except that nothing I've found in the spreadsheet itself, the description of variables, or the methodology PDF supports that view.

Another possibility is that the list above covers only about 1/4 to 1/3 of the faculty, and perhaps I'm over-counting power publishers. Perhaps. But unless the people I left off the list weren't publishing at all, it would be very hard to get an average of 2.5 publications per faculty per year. And I know I excluded some other power publishers (Cavanagh was around then, for instance).
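
Here's the back-of-the-envelope version, taking the ten rates above at face value and assuming -- this is my guess, not a number from the report -- roughly 30 allocated faculty during 2000-2006:

```python
# Back-of-envelope check of the report's 2.5 papers/faculty/year figure.
# The ten rates below come from the CVs listed above; the total faculty
# count of 30 is my guess, not a figure from the report.
listed = [10.8, 13.6, 4.7, 5.9, 14.6, 10.3, 7.7, 1.1, 2.3, 4.0]
n_total = 30
claimed_avg = 2.5

print(sum(listed) / len(listed))            # ~7.5 papers/faculty/year for these ten
budget = claimed_avg * n_total              # 75 papers/year for the whole program
leftover = budget - sum(listed)             # 0 papers/year left over
print(leftover / (n_total - len(listed)))   # the other 20 would each publish ~0.0/year
```

In other words, on those assumptions the ten faculty above already account for the program's entire publication "budget" under the report's figure; the remaining twenty would have to publish essentially nothing.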

A possible explanation?

The best explanation I can think of is that the report is actually including a bunch of faculty who didn't publish at all. This is further supported by the report's claim that only 78% of Harvard faculty had outside grants, whereas I'm pretty sure all the professors in our department -- except perhaps brand-new ones who are still on start-up funds -- have (multiple) outside grants.

But there are other faculty in our department who are not professors and do not (typically) do (much) research -- and thus do not publish or have outside grants. Right now our department lists 2 "lecturers" and 4 "college fellows." They're typically on short appointments (I think about 2 years). They're not tenure track, they don't have labs, they don't advise graduate students, and I'm not even sure they have offices. So in terms of ranking a graduate program, they're largely irrelevant. (Which isn't a slight against them -- I know two of the current fellows, and they're awesome folk.)

So of the 33 listed faculty this year, 6 are not professors with labs and thus publish at much lower rates (if at all) and don't have outside grants. That puts us in the ballpark of the grant data in the report (82%). I'm not sure it's enough to explain the discrepancy in publication rates, but it certainly gets us closer.
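
Quick arithmetic on that, assuming (my guess) that none of the 6 lecturers and fellows hold outside grants:

```python
# Fraction of currently listed faculty who run labs / hold outside grants,
# assuming (my guess) that the 2 lecturers and 4 college fellows do neither.
listed_faculty = 33
non_research = 2 + 4
print((listed_faculty - non_research) / listed_faculty)  # -> 0.818..., about 82%
```

That's 27 of 33, or about 82%, which lines up reasonably well with the report's 78% grant figure.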

Again, it is true that the lecturers and fellows are listed as faculty, and the report would be within its rights to count them ... but not if the report wants to measure the right thing. The report purports to measure the quality and quantity of the research put out by the department, so counting non-research faculty is misleading at best.

Conclusion

Between this post and the last, I've found some serious problems in the National Academies' graduate school rankings report. Several of the easiest-to-quantify measures they include simply don't pass the smell test. They are either measuring the wrong thing, or they're complete bullshit. Either way, it's a problem.

(Or, as I said before, the numbers given have been transformed in some peculiar, undocumented way. Which I suppose would mean at least they were measuring the right thing, though reporting misleading numbers is still a problem.)
------
*Before anyone makes any Hauser jokes, he was on the faculty, so he would have been included in the National Academies' report, which is what we're discussing here. In any case, removing him would not drastically change the publications-per-faculty rate.

1 comment:

John Hawks said...

I agree, there's something weird about it. The report says that they counted publications in ISI, but their survey (which I filled out) asked faculty to report those numbers. Were the survey data simply not used? And if so, why not?

If ISI, then how were books and book chapters counted versus articles? These aren't obscure topics; anybody who's had to quantify research records has to be able to answer these questions.