Field of Science


Science Mag studies science. Forgets to include control group.

Today's issue of Science carries the most meta sting operation I have ever seen. John Bohannon reports a study of open access journals, showing lax peer review standards. He sent 304 fake articles with obvious flaws to 304 open access journals; more than half of the articles were accepted.

The article is written as a stinging rebuke of open access journals. Here's the interesting thing: There's no comparison to traditional journals. For all we know, open access journals actually have *stricter* peer review standards than traditional journals. We all suspect not, but suspicion isn't supposed to count as evidence in science. Or in Science.

So this is where it gets meta: Science -- which is not open access -- published an obviously flawed article about open access journals publishing obviously flawed articles.

It would be even better if Bohannon's article had run in the "science" section of Science rather than in the news section, where it actually ran, but hopefully we can agree that Science can't absolve itself of checking its articles for factual accuracy and logical coherence just by labeling them "news".

International Journal of Lousy Research

Jeffrey Beall's blacklist of "predatory open-access journals" -- discussed in yesterday's New York Times -- provides evidence for my long-standing suspicion of any journal named "International Journal of ..." There probably are some good journals named "International Journal of...", but I don't know of any off-hand. And there seem to be an awful lot of bad ones, probably for good reason: An internationally-recognized journal doesn't have to say so. So almost by definition a journal that has to call itself "International Journal of" is probably not a well-known journal.

More generally, nearly every journal on the list has some location in its name, such as the South Asian Journal of Mathematics, which doubles down by referring to itself on its home page as an "international journal". Again, there are, of course, good journals with region-specific names. But there don't seem to be many. I'm less sure of the reason for this one.

[Future Post: Explaining why universities that market themselves as "The Harvard of" some region are frequently not even the most prestigious school in that region.]

Professor -- The Easiest Job in the World

There has been a small kerfuffle over Susan Adams's article at Forbes, titled "The least stressful jobs of 2013":
University professors have a lot less stress than most of us. Unless they teach summer school, they are off between May and September and they enjoy long breaks during the school year, including a month over Christmas and New Year's and another chunk of time in the spring. Even when school is in session they don't spend too many hours in the classroom ... Working conditions tend to be cozy and civilized and there are minimal travel demands...
She also mentions the great job prospects ("Universities are expected to add 305,700 adjunct and tenure-track professors by 2020").

To her credit, Adams has added a sizable addendum to her article, correcting -- but not apologizing for -- her mistakes. Unfortunately, this is far from the first time this kind of article has appeared in a major publication. Some time back, a columnist for the New York Times wrote an article suggesting that the solution to rising costs of higher education was to make professors work more than a few hours a week. An article in the New Yorker casually noted that the new head of a particular company was concerned that his employees worked "the hours of college professors" (I initially assumed they meant "way too hard" and that the boss wanted them to take a break!). What gives?

Scicurious suggests it's the curse of half-knowledge:
The vast majority of us aren't teachers or professors, but we've all been students, right? ... We thought that, because of what we saw of them in our classes, we knew what they did ... Because of this half-knowledge, people make assumptions about our jobs, assumptions that can really affect how we are perceived as people...
That is no doubt part of it, but it also requires that people not think very hard. If I heard that someone made a pretty good living working only a few hours a week, it would immediately set off my implausibility alarm. I mean, what are the chances? You'd only have to think for a moment to realize it can't be true.

Adams got hundreds of comments and letters pointing out that professors, in addition to giving a few lectures a week, also grade papers, advise students, write papers and books, go to conferences, give invited talks, etc. Adams presents this as if it came as a surprise, but that seems equally implausible. I'm going to assume she's read one or two articles about medicine or science, in which the people discussed are inevitably professors. In fact, articles about politics occasionally cite professors as well. If she went to college, she knows that professors have office hours and grade papers. Many of the books on science and politics in the bookstore are written by faculty, as are essentially all college textbooks.

Even if she had never attended college, never interacted with a professor, and didn't read articles about higher education, a few minutes of Googling prior to writing her article would have corrected that mistake. My guess is that she didn't really think about her article before writing it and didn't consult either her own memory or Google because she -- and the others who write similar articles -- wanted this crazy claim about the lazy professor to be true. The interesting question is why she wanted it to be true. Anti-intellectualism? A desire to believe that such cushy jobs really exist? Or is this just an example of one of those ideas that are crazy enough that they inspire belief (like one of those many apocryphal "weird facts")?

*I do realize that some professors do very little work. Some people in all professions do very little work.

Is Psychology a science?: Redux

The third-most read post on this blog is "Is Psychology a science?". I was a few years younger then and still had strong memories of one of my friends complaining, when we were both undergraduates, that he had to take a psychology course as part of his science distributional requirements. "Psychology isn't a science," he said, "because they don't do experiments." Since he was telling me this over AIM as I was sitting in my psychology laboratory, analyzing an experiment, it didn't go over well.

It's been a popular post, but I haven't written about the subject much since, in part because I started to suspect that the "psychology isn't a science" bias might actually be a thing of ignorant undergraduates and a few cranks. It's something I've rarely heard in the last few years, and there's no need to write diatribes against a non-existent prejudice.

In retrospect, maybe I haven't come across these opinions because I mostly hang out with other psychologists. A colleague recently forwarded me this blog post ("Keep Psychology out of the science club"), which links to a few other similar pieces on blogs and in newspapers. So it seems the issue is alive and well.

Some of the articles one comes across are of the "psychologists don't do experiments" variety; these are easily explained by ignorance and an inability to use Google. But some folks raise real concerns which, while I think they are misplaced, genuinely are worth thinking about.


Psychology is too hard

One common theme that I came across is that psychology is simply too difficult. We'll never understand human behavior very well, so maybe we shouldn't even try. For instance, Gary Gutting, writing in the Opinionator at the New York Times, said:
Social sciences may be surrounded by the "paraphernalia" of the natural sciences, such as technical terminology, mathematical equations, empirical data and even carefully designed experiments. But when it comes to generating reliable scientific knowledge, there is nothing more important than frequent and detailed predictions of future events ... while the physical sciences produce many detailed and precise predictions, the social sciences do not ... Because of the many interrelated causes at work in social systems, many questions are simply "impervious to experimentation" ... even when we can get reliable experimental results, the causal complexity restricts us...
In a Washington Post editorial, Charles Lane wrote:
The NSF shouldn't fund any social science. Federal funding for mathematics, engineering and other "hard" sciences is appropriate. In these fields, researchers can test their hypotheses under controlled conditions; then those experiments can be repeated by others. Though quantitative methods may rule economics, political science and psychology, these disciplines can never achieve the objectivity of the natural sciences. Those who study social behavior -- or fund studies of it -- are inevitably influenced by value judgments, left, right, and center. And unlike hypotheses in the hard sciences, hypotheses about society usually can't be proven or disproven by experimentation. Society is not a laboratory.
Alex Berezow at the Newton Blog agrees:
Making useful predictions is a vital part of the scientific process, but psychology has a dismal record in this regard.
Is that a fair critique?

These writers don't entirely miss the mark. It really is true that psychology does not make as precise or as accurate predictions as, say, physics. That is not the same thing as saying that we can't make any predictions. Berezow complains about happiness research:
Happiness research is a great example of why psychology isn't a science. How exactly should "happiness" be defined? The meaning of the word differs from person to person, and especially between cultures. What makes Americans happy doesn't necessarily make Chinese people happy. How does one measure happiness? Psychologists can't use a ruler or a microscope, so they invent an arbitrary scale. Today, personally, I'm feeling about a 3.7 out of 5. How about you? ...  How can an experiment be consistently reproducible or provide any useful predictions if the basic terms are vague and unquantifiable?
That's a great question! Let's start with the facts. It is true that we don't know exactly what it means to be a 3.7 on a scale of 1-5. But we do know a few interesting things.

People's predictions of how happy they will rate themselves in the future are systematically biased. People say that good things (like getting tenure) will make them very happy (a 5 out of 5) and that bad things (like not getting tenure) will make them very sad (a 1 out of 5), but when you then ask those same people to rate their happiness a little while after the event, they generally rate themselves as not nearly so happy or unhappy as they predicted. (Similarly, people who lose a limb usually rate themselves as about as happy afterwards as before, provided you give them a little time to adjust.) People who have children normally see a drop in how happy they rate themselves, and they only start to recover after their children leave the nest. There is also the "future anhedonia" effect: people think that good things (e.g., an ice cream sundae) will make them happier now (on our 1-5 scale) than those same good things will make them in the future, and conversely for bad things (e.g., doing my homework won't feel so bad if I do it tomorrow rather than today). And so on. (These and many other examples can be found in Dan Gilbert's Stumbling on Happiness.)

These and other findings are highly reliable, despite the fact that we don't have a direct, objective measurement of happiness. In fact, as Dan Gilbert has pointed out, we would only consider that "direct" measurement to be a measurement of happiness if it correlated really well with how happy people said they were. To the extent it diverged from how happy people claim to be, we would start to distrust the "direct" measurement.

I personally am glad that we know what we know about happiness, though I wish we knew more. I picked happiness to defend because I've noticed that even those who defend psychology in comments sections give up happiness research as a lost cause. I think it's pretty interesting, useful work. It would be even easier to defend, for instance, low-level vision research, which makes remarkably precise predictions, has clear theories of the relationship between the psychological phenomena and the neural implementations, etc. (See also this post for some psychology success stories.)

Just how good do you need your predictions to be?

Still, it is true that we can't always make the precise predictions that can be made in some other fields. Of course, other fields can't always make precise predictions, either. While physicists are great at telling you what will happen to rigid objects moving through a vacuum, predicting the motion of real objects in the real world has traditionally been a lot harder, and understanding fluid dynamics has been deeply problematic (though I understand this has been getting a lot better in recent years). And that's without pulling out the Heisenberg Uncertainty Principle, which should cause anyone who demands precise, deterministic predictions to declare physics a non-science.

Also, some parts of psychology are able to make much more precise predictions than others do. Anything amenable to psychophysics tends to be much more precise, and vision researchers, as already noted, have remarkably well worked-out theories of low- and mid-level vision.

This line of discussion also raises an interesting question: when exactly did physics become a science? Was it a science in Newton's day, when we still knew squat about electromagnetism -- much less elementary particles -- and couldn't make even rough predictions about turbulent air or fluid systems? And to people 350 years from now, will the physics of today seem like a "real" science? (My guess: no.)

Worries

Berezow ends his post with the following caution:
To claim [psychology] is a "science" is inaccurate. Actually, it's worse than that. It's an attempt to redefine science. Science, redefined, is no longer the empirical analysis of the natural world; instead, it is any topic that sprinkles a few numbers around. This is dangerous, because, under such a loose definition, anything can qualify as science. And when anything qualifies as science, science can no longer claim to have a unique grasp on secular truth.
I have a different worry. My worry is that someone gets ahold of a time machine, goes back to 1661, and convinces Newton to lay off that non-scientific "physics" crap. Pre-Newtonian physics was a hodgepodge of knowledge, little resembling what we think of as science today. Making precise predictions about the messy, physical world we live in no doubt seemed an impossible pipe dream to many. Luckily, folks like Newton kept plugging away, and three and a half centuries later, here we are.

We should keep in mind that the serious study of the mind only began in the mid-1800s; physics has a significant head-start. And, as the anti-psychology commentators are happy to point out, psychology is much, much harder than physics or chemistry. But the only reason I can see to pull the plug is if we are sure that (a) we have learned nothing in the last 150 years, and (b) we will never make any further progress. These are empirical claims and so subject to test (I think the first one has already been falsified). So here's a proposed experiment: psychologists keep on doing psychology, and people who don't want to don't have to. And we'll wait a few decades and see who knows more about the human mind.

Pricing conundrum

Before I went to Riva del Garda for this year's AMLaP, I picked up a travel guide on my Kindle. (If only such things had existed the years I backpacked in Eurasia. My strongest memories are of how much my backpack weighed. Too many books.)

Oddly, the Lonely Planet Italian Lakes Region guide is the exact same price as the whole Italy guide. These local guides tend to be glorified excerpts of the country book, and since they both weigh the same...

Another bunch of retractions

It appears that a series of papers, written by a German business professor, are being retracted. This particular scandal doesn't seem to involve data fabrication, though. Instead, he is accused of double-publishing (publishing the same work in multiple journals) and also of making errors in his analyses (this lengthy article -- already linked to above -- discusses the issues in detail).

It's possible that I was not paying attention before, but there seem to be more publication scandals lately than I remember. When working on my paper about replication, I actually had to look pretty hard to find examples of retracted papers in psychology. That wouldn't be so difficult at the moment, after Hauser, Smeesters, and Sanna.

If there is an increase, it's hopefully due not to an increase in fraud but an increase in vigilance, given the attention the issue has been getting lately.

Making up your data

Having finished reading the Simonsohn paper on detecting fraud, I have come to two conclusions:

1. Making up high-quality data is really hard. Part of the problem with making up data is that you have to introduce some randomness into it. If your study involves asking people how much they are willing to pay for a black t-shirt, you can't just write down that they all were willing to pay the average (say $12). You have to write down some variation ($12, $14, $7, $9, etc.).

The problem is that humans are notoriously bad at generating random number sequences. Simonsohn discusses this in terms of Tversky and Kahneman's famous, tongue-in-cheek paper "Belief in the law of small numbers." People think that random sequences should look roughly "average" even when the sample is small: flip a coin 4 times and you expect 2 heads and 2 tails, when in fact getting 4 heads isn't all that improbable (it happens 1 time in 16).
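To put a number on that intuition, here is a minimal simulation sketch (Python; the setup is my own illustration, not anything from Simonsohn's paper):

```python
import random

# Estimate how often 4 fair coin flips come up all heads.
# The exact probability is (1/2)**4 = 1/16, or 6.25% -- not rare at all.
trials = 100_000
all_heads = sum(
    all(random.random() < 0.5 for _ in range(4))  # one run of 4 flips
    for _ in range(trials)
)
print(f"all heads: {all_heads / trials:.4f} (exact: {1 / 16:.4f})")
```

The estimate hovers right around 0.06, which is why a run of four heads should not, by itself, look suspicious.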

So your best bet, if you are making up data, is to use a computer program to generate it from your favorite distribution (the normal distribution would be a good choice in most cases). The problem is that data can have funny idiosyncrasies. One of the problems with the string of numbers I suggested above ($12, $14, $7, $9, etc.) is that humans like round numbers. So when people say what they are willing to pay for a t-shirt, what you should see is a lot of $10s, $20s and maybe some $5s and $15s. The numbers in my list are relatively unlikely.
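The round-number tell is easy to demonstrate. Here is a toy sketch (Python; the data and the comparison are hypothetical illustrations of the idea, not Simonsohn's actual test), contrasting round-number clustering in human-looking responses with a naive simulation:

```python
import random

def round_number_share(prices):
    """Fraction of responses that land on a multiple of $5."""
    return sum(p % 5 == 0 for p in prices) / len(prices)

# Hypothetical human-looking responses: heavy on $5, $10, $15, $20.
human_like = [10, 20, 10, 15, 5, 10, 20, 12, 10, 25]

# Naive fabrication: round draws from a normal distribution around $12.
fabricated = [max(1, round(random.gauss(12, 4))) for _ in range(1000)]

print(f"human-like share of round numbers: {round_number_share(human_like):.2f}")
print(f"fabricated share of round numbers: {round_number_share(fabricated):.2f}")
```

The first line comes out around 0.9, the second around 0.2; a forensic check can ask which of the two a reported dataset more closely resembles.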

The paper goes on to describe other problems as well. What I get from this is that making up data in a way that is undetectable is a lot of work, and you might as well actually run the study. So even leaving aside other reasons you might want to not commit fraud (ethics, desire for / belief in importance of knowledge, etc.), it seems sheer laziness alone should steer you the other direction.

2. The Dark Knight Rises is awesome. Seriously. Technically there was nothing about that in the paper, but I was thinking about the movie while reading the paper. Since I saw the show this morning, it's been hard to think of anything else. The most negative thing I can say about it is that it wasn't better than the last one, which is grading on a pretty steep curve.

Detecting fraud

Uri Simonsohn has now posted a working paper describing how he detected those two recent cases of data fraud. Should my other writing projects progress fast enough, I'll write about it soon. I'll also post links to any interesting discussions I come across.

Graduate School Rankings

There have been a number of interesting posts in the last few days about getting tenure (1, 2, 3). One thing that popped out at me was the use of the National Research Council graduate school rankings in this post. I am surprised that these continue to be cited, given the deep flaws in the numbers. Notice I said "numbers", not "methodology". I actually kind of like their methodology. Unfortunately, the raw numbers that they use to determine rankings are so error-ridden as to make the rankings useless.

For those who didn't see my original posts on the subject cataloging the errors, they are here and here.

Annoyed about taxes

It's not that I mind paying taxes per se. In fact, I consider it everyone's patriotic duty to pay taxes. I just wish it wasn't so damn complicated.

The primary confusion I have to deal with every year is that Harvard provides a series of mini-grants for graduate students, which it issues as scholarships. Scholarships are taxable as income; scholarships used to pay for tuition or required course supplies, however, are not. I'm a graduate student, which means that the four courses I take every semester are "independent research," and obviously doing research is required. On the other hand, the IRS regulations specifically state that any scholarships used to pay for research are taxable. So if I use the mini-grant to pay for my research, is it taxable or not?

I actually asked an IRS representative a few years ago, and she replied that something counts as "required for coursework" only if everyone else taking that course has to buy it. If "everyone else" includes everyone else in the department doing independent research, then it's trivially the case that they are not required to do my research (though that would be really nice!), nor are they actually required to spend anything at all (some people's research costs more than others), so the mini-grant is taxable. If "everyone else" is only me -- this is independent research, after all -- then the mini-grant is not taxable. This all hinges, of course, on whether or not "independent research" is actually a class. My understanding is that the federal government periodically brings action against Harvard, claiming that independent research is not a class.

Some people occasionally deduct the mini-grant expenditures as business expenses. This is not correct. According to the IRS, graduate students are not employees and have no business, and thus we have no business expenses (this reasoning also helps prevent graduate student unions -- you can't form a union if you aren't employed). And in any case, as I mentioned, we are specifically forbidden to write off the cost of doing research.

It's not just that the rules are confusing; they don't make sense. Why does the government want to tax students for the right to do research? How is that a good idea? Research benefits the public at large, and it already comes at a high opportunity cost for the researcher (one could make more money doing just about anything else). Why make us pay for it?

(It probably should be pointed out that Harvard could cough up the taxes itself, or they could administer the mini-grants as grants rather than as scholarships, though that would cost them more in terms of administrative overhead. Instead, Harvard specifically forbids using any portion of the mini-grant to pay for the incurred taxes. Though since they don't ask for any accounting, it's quite possible nobody pays any attention to that rule.)

The Best Graduate Programs in Psychology

UPDATE: The report discussed below is even more problematic than I thought.

The National Academies just published an assessment of U.S. graduate research programs. Rather than compiling a single ranking, they rank programs in a number of different ways -- and also publish data on the variables used to calculate those different rankings -- so you can sort the data however you like. Another aspect to like is that the methodology recognizes uncertainty and measurement error, so they actually estimate an upper and lower bound on all of the rankings (what they call the 5th and 95th percentile rankings, respectively).

Ranked, Based on Research

So how do the data come out? Here are the top programs in terms of "research activity" (using the 5th percentile rankings):

1. University of Wisconsin-Madison, Psychology
2. Harvard University, Psychology
3. Princeton University, Psychology
4. San Diego State University-UC, Clinical Psychology
5. University of Rochester, Social-Personality Psychology
6. Stanford University, Psychology
7. University of Rochester, Brain & Cognitive Sciences
8. University of Pittsburgh-Pittsburgh Campus, Psychology
9. University of Colorado at Boulder, Psychology
10. Brown University, Cognitive and Linguistic Sciences: Cognitive Sciences

Yes, it's annoying that some schools have multiple psychology departments and thus each is ranked separately, leading to some apples v. oranges comparisons (e.g., vision researchers publish much faster than developmental researchers, partly because their data is orders of magnitude faster/easier to collect; a department with disproportionate numbers of vision researchers is going to have an advantage).

What is nice is that these numbers can be broken down in terms of the component variables. Here are rankings in terms of publications per faculty per year and citations per publication:

Publications per faculty per year


1. State University of New York at Albany, Biopsychology
2. University of Wisconsin-Madison, Psychology
3. Syracuse University Main Campus, Clinical Psychology
4. San Diego State University-UC, Clinical Psychology
5. Harvard University, Psychology
6. University of Pittsburgh-Pittsburgh Campus, Psychology
7. University of Rochester, Social-Personality Psychology
8. Florida State University, Psychology
9. University of Colorado-Boulder, Psychology
10. State University of New York-Albany, Clinical Psychology

Average Citations per Publication


1. University of Wisconsin-Madison, Psychology
2. Harvard University, Psychology
3. San Diego State University-UC, Clinical Psychology
4. Princeton University, Psychology
5. University of Rochester, Social-Personality Psychology
6. Johns Hopkins University, Psychological and Brain Sciences
7. University of Pittsburgh-Pittsburgh Campus, Psychology
8. University of Colorado-Boulder, Psychology
9. Yale University, Psychology
10. Duke University, Psychology

So what seems to be going on is that there are a lot of schools on the first list that publish large numbers of papers that nobody cites. If you combine the two lists -- multiplying publications per faculty per year by citations per publication -- you get the average number of citations per faculty per year, ranked below. I'm including the numbers this time so you can see the distance between the top few and the others. The #1 program accumulates citations at double the rate of the #10 program.
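In case the combination step isn't obvious, it really is just multiplication. A quick sketch (Python; the numbers here are hypothetical, not the NRC's):

```python
# citations / faculty / year =
#   (publications / faculty / year) * (citations / publication)
programs = {
    "Program A": (2.5, 5.4),  # (pubs per faculty per year, cites per pub)
    "Program B": (4.0, 1.7),  # publishes more, but each paper is cited less
}
for name, (pubs_rate, cites_per_pub) in programs.items():
    print(f"{name}: {pubs_rate * cites_per_pub:.1f} citations/faculty/year")
```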

Average Citations per Faculty per Year


1. University of Wisconsin-Madison, Psychology - 13.4
2. Harvard University, Psychology - 12.7
3. San Diego State University-UC, Clinical Psychology - 11.0
4. Princeton University, Psychology - 10.6
5. University of Rochester, Social-Personality Psychology - 10.6
6. Johns Hopkins University, Psychological and Brain Sciences - 8.8
7. University of Pittsburgh-Pittsburgh Campus, Psychology - 8.3
8. University of Colorado-Boulder, Psychology - 8.0
9. Yale University, Psychology - 7.5
10. Duke University, Psychology - 6.9

The biggest surprise for me on these lists is that the University of Pittsburgh is on them (it's not a program I hear about often) and that Stanford is not.

Student Support

Never mind the research: how do the students do? It's hard to say, partly because the variables measured aren't necessarily the ones I would measure, and partly because I don't believe their data. The student support & outcomes composite is built out of:

Percent of first year students with full financial support
Average completion rate within 6 years
Median time to degree
Percent of students with academic plans
Program collects data about post-graduation employment

That final variable is something that would only be included by the data-will-save-us-all crowd; it doesn't seem to have any direct relationship to student support or outcome. The fact that they focus on first year funding only is odd. I think it's huge that my program guarantees 5 years -- and for 3 of those we don't have to teach. Similarly, one might care whether funding is tied to faculty or given to the students directly. Or whether there are grants to attend conferences, mini-grants to do research not supported by your advisor, etc.

But leaving aside whether they measured the right things, did they even measure those things correctly? The number that concerns me is "percent of students with academic plans," which is defined as the percentage of students who have lined up either a faculty position or a post-doctoral fellowship by graduation, and which is probably the most important variable of those they list for measuring the success of a research program.

They find that no school has a rate over 55% (Princeton). Harvard is at 26%. Not to put too fine a point on it, but that's absurd. Occasionally our department sends out a list of who's graduating and what they are doing next. Unfortunately, I haven't saved any of them, but typically all but 1 or 2 people are continuing on to academic positions (there's often someone doing consulting instead, and occasionally someone who just doesn't have a job lined up yet). So the number should be closer to 90-95% -- not just at Harvard, but presumably at peer institutions.

This makes me worried about their other numbers. In any case, since the "student support" ranking is so heavily dependent on this particular variable, and that variable is clearly measured incorrectly, I don't think there's much point in looking at the "student support" ranking closely.

Slate's Report on Hauser Borders on Fraud

Love, turned sour, is every bit as fierce. I haven't written about the Hauser saga for a number of reasons. I know and like the guy, and I find nothing but sadness in the whole situation. Nonetheless, I've of course been following the reports, and I wondered why my once-favorite magazine had so long been silent.

Enjoying my fastest Wi-Fi connection in weeks here at the Heathrow Yotel, I finally found Slate's take on the scandal, subtitled "What went wrong with Marc Hauser's search for moral foundations." The article has a nice historical overview of Hauser's work, in context, and neatly describes several experiments. The article is cagey, but you could be excused for believing that (a) Hauser has done a lot of moral cognition research with monkeys, and (b) that work was fraudulent. The only problem is that nobody, to my knowledge, has called Hauser's moral cognition research into question -- in fact, most people have gone out of their way to say that that work (done nearly exclusively with humans) replicates very nicely. There was some concern about some work on intention-understanding in monkeys, which is probably a prerequisite for some types of moral cognition, but that's not the work one thinks of when talking about Hauser's Moral Grammar hypothesis.

I can't tell if this was deliberately misleading or just bad reporting, and I'm not sure which is more disturbing.

Slate's science reporting has always been weak (see here, here, here, and especially here), and the entire magazine has been on a steady decline for several years. Sigh. I need a new magazine.

I liked "Salt," but...

What's with movies in which fMRI can be done remotely? In an early scene, the CIA does a remote brain scan of someone sitting in a room. And it's fully analyzed, too, with ROIs shown. I want that technology -- it would make my work so much easier!

UPDATE I'm not the only one with this complaint. Though Popular Mechanics goes a bit easy on the movie by saying fMRI is "not quite at the level Salt portrays." That's a bit like saying space travel is not quite at the level Star Trek portrays. There may someday be a remote brain scanner, but it won't be based on anything remotely like existing fMRI technology, which requires incredibly powerful, supercooled and loud magnets. Even if you solved the noise problems, there's nothing to be done about the fact that the knife embedded in the Russian spy's shoe (yes -- it is that kind of movie) would have gone flying to the center of the magnetic field, along with many of the other metal objects in the room.

Lie detection: Part 2

I wrote recently about whether fMRI should be used for lie detection in court. US Magistrate Judge Tu Pham says "no". Science reports:
But while Judge Pham agreed that the technique had been subject to testing and peer review, it flunked on the other two points suggested by the Supreme Court to weigh cases like this one: the test of proven accuracy and general acceptance by scientists.
What I find interesting about this argument, as noted in my previous post, is that it's not clear that commonly-accepted "evidence" passes those tests: fingerprinting and eyewitness testimony are two good examples.

Vaccination and the Assault on Health

I had always thought that refusal to get a flu vaccination was relatively harmless masochism. Refusal to vaccinate one's own children, on the other hand, should probably be prosecuted as child abuse, but at least the negative consequences stay close to home.

Yesterday, however, I read two articles on vaccination. One, in Slate, looks at the risks the unvaccinated pose to people with compromised immunity (the author is unable to get childcare for her child, who is undergoing cancer treatment, because the risk of being around unvaccinated children is too high). If that seems like a parochial problem ("my kid doesn't have cancer; why should I worry about vaccination rates?"), the other article, a feature in Wired, focuses on the anti-vaccine movement and the dangers it poses to everyone's health.

Both note the rise in non-vaccination and the concomitant rise in outbreaks of the scourges of yesteryear. And they were scourges:
Just 60 years ago, polio paralyzed 16,000 Americans every year, while rubella caused birth defects and mental retardation in as many as 20,000 newborns. Measles infected 4 million children, killing 3,000 annually, and a bacterium called Haemophilus influenzae type b caused Hib meningitis in more than 15,000 children, leaving many with permanent brain damage...
But refusing to vaccinate is more than just a convenient way of decreasing the probability you'll have to pay for college (and that your neighbor's kid with leukemia will survive). This is because the un-vaccinated put the vaccinated at risk.

The Risk to Us All

As told in the Wired article, an unvaccinated 17-year-old Indiana girl picked up measles on a 2005 trip to Bucharest. When she returned, she went to a church gathering of 500 people. Of the 50 attendees who had not been vaccinated, 32 developed measles. Any adults who got measles had at least made the choice to take on that risk, but the children had not.

Even worse are the two people who had been vaccinated but nonetheless got sick. They had been responsible and protected themselves, but this reckless 17-year-old and her parents endangered their lives. First, though, three cheers for vaccines. Of the unvaccinated, 64% got sick. Of the vaccinated and those with natural immunity, only 0.8% got sick.

But still, vaccines don't always work. Sometimes they don't take. Sometimes your immune response may have weakened (for instance, through aging). Or you might just have bad luck. A 2002 study in The Journal of Infectious Diseases determined that you were safer as an unvaccinated person in a well-vaccinated country than as a vaccinated person in a largely un-vaccinated country.

People who refuse vaccines aren't just risking themselves, and parents who refuse vaccines for their children aren't just risking their children, they are risking you and me.

Baby-Killers

What makes this even worse is that every baby is initially unvaccinated. Children have to reach a certain age in order to get vaccines. What protects babies is that everyone older is healthy (i.e., vaccinated). So adult vaccine-refuseniks made it through infancy partly thanks to everyone else getting vaccinated. But they aren't willing to give other babies the same chance.

Do people have the right to choose for themselves whether they want vaccines? Sure -- as long as they live on top of a mountain or on a deserted island away from contact with anyone else. Mandatory vaccination**, and now!



(**With medical exceptions, of course)

A vote for McCain is a vote against science

Readers of this blog know that I have been skeptical of John McCain's support for science. Although he has said he supports increasing science funding, he appears to consider recent science funding budgets that have not kept pace with inflation to be "increases." He has also since called for a discretionary spending freeze.

In recent years vocally anti-science elements have hijacked the science policies of the Republican party -- a party that actually has a strong history of supporting science -- so the question has been where McCain stands, or at least which votes he cares about most. The jury is still out on McCain, but Palin just publicly blasted basic science research as wasteful government spending.

The project that she singled out, incidentally, appears to be research that could eventually lead to new treatments for autism. Ironically, Palin brought up this "wasteful" research as a program that could be cut in order to fully fund the Individuals with Disabilities Education Act.

As ice melts, oceanography freezes

Nature reports that the US academic oceanographic fleet is scaling back operations due to a combination of budget freezes and rising fuel costs. This means that at least one of its 23 ships will sit out 2009, and two others will take extended holidays.

Even so, more cuts will probably be necessary.

This is of course on top of the budgetary crisis at one of the USA's premier physics facilities, Fermilab.

Autism and Vaccines

Do vaccines cause autism? It is a truism that nothing can ever be disproven (in fact, one of the most solid philosophical proofs is that neither science -- nor any other extant method of human discovery -- can prove any empirical claims either).

That said, the evidence for vaccines causing autism is about as good as the evidence that potty-training causes autism. Symptoms of autism begin to appear some time after the 2-year-old vaccinations, which is also about when potty-training typically happens. Beyond that coincidence of timing, a number of studies have failed to find any link. Nonetheless, the believers in the vaccines-cause-autism theory have convinced some reasonably mainstream writers and even all three major presidential candidates that the evidence is, at worst, only "inconclusive."

My purpose here is not to debunk the vaccine myth. Others have done it better than I can. My purpose is to point out that, even if the myth were true, not vaccinating your children would be a poor solution.

It has been such a long time since we've had to deal with polio and smallpox that people have forgotten just how scary they were. In 1952, at the height of the polio epidemics, around 14 out of every 100,000 Americans had paralytic polio. 300-500 million people died of smallpox in the 20th century. Add in hepatitis A, hepatitis B, mumps, measles, rubella, diphtheria, pertussis, tetanus, Hib, chicken pox, rotavirus, meningococcal disease, pneumonia and the flu, and no wonder experts estimate that "fully vaccinating all U.S. children born in a given year from birth to adolescence saves an estimated 33,000 lives and prevents an estimated 14 million infections."

Thus, while current estimates are that 0.6% of American children develop autism, 0.8% would have died without vaccines -- and that's not counting blindness, paralysis, etc. It seems like a good trade, even if you assume that every single case of autism is due to vaccines.

You like video games, but does your brain?

According to CBC in Canada:
Men are more rewarded by video games than women on a neural level, which explains why they're more likely to become addicted to them.
In other words, men like video games more because their brains like them more. Since only one's brain can like or dislike something, this could be rewritten: Men like video games more because they like video games more.

It's hard to blame CBC entirely for this one. I haven't tracked down the article itself, but the abstract remarks:
Males showed greater activation and functional connectivity compared to females in the mesocorticolimbic system... These gender differences may help explain why males are more attracted to, and more likely to become "hooked" on video games than females.
This is hard to parse, and given that the authors work at Stanford Medical School, I'm inclined to give them the benefit of the doubt. However, the way this is phrased seems to have the natural order of investigation backwards. Men are more likely to become addicted to video games than women are. Given that they show these particular brain differences during video game play, we can make some intelligent guesses about what those parts of the brain do.

Once we understand those parts of the brain much, much better than we do today, we may actually have a good structural model that explains this gender difference. That may be what the authors of the study meant, and they may spell it out in the full article. However, CBC's statement that men are more likely to get addicted to video games because they are "more rewarded on the neural level" is both circular and obvious.

See the original CBC article here.

Anonymice run wild through science

I recently mentioned Jack Shafer's long-standing irritation at the over-use of anonymous sources in journalism. Sometimes the irritation is at using anonymous sources to report banalities. In my favorite column in that series (which has unfortunately been moribund for the last year or two), Shafer calls out anonymous sources whose identities are transparent. Why pretend to be anonymous when a simple Google search will identify you?

I had a similar question recently when reading the method section of a psychology research paper. Here is the first paragraph from the method section:
Sixteen 4-year-olds (age: M = 4,7; range = 4,1-4,11), and 25 college students (age: M = 18,10; range = 18,4-19,6) participated in this study. All participants were residents of a small midwestern city. Children were recruited from university-run preschools and adults were undergraduate students at a large public university.
Small midwestern city? Large public university? I could Google the two authors, but luckily the paper already states helpfully on the front page that both authors work at the University of Michigan, located in Ann Arbor (a small midwestern city). Maybe the subject recruitment and testing was done in some other university town, but that's unlikely.

This false anonymity is common -- though not universal -- in psychology papers. I'm picking on this one not because I have any particular beef with these authors (which is why I'm not naming names), but simply because I happened to be reading their paper today.

This brings up the larger issue of the code of ethics under which research is done (here are the regulations at Harvard). After some notable ethical lapses in the early days of human research (for instance, Dr. Waterhouse trying out the smallpox vaccine on children), it became clear that regulations were needed. As with any regulations, however, form often wins over substance. A lab I used to work at had a very short consent form that said something to the effect that in the experiment, you'll read words, speak out loud, and it won't hurt. This was later replaced with a multi-page consent form, probably at the request of our university ethics board, but I'm not sure. The effect was that our participants stopped reading the consent form before signing it. This was entirely predictable, and I think it is an example of valuing form (in particular, having participants sign a form) over substance (protecting research participants).

Since most of the research in a psychology department is less dangerous than filling out a Cosmo quiz, this doesn't really keep me up at night. However, I think it's worth periodically rethinking our regulations in light of their purpose.