In a thoughtful and provocative piece in The Wild Side blog at NYTimes.com, Stephen Quake takes up the issue of conflicts of interest in research. By "conflicts of interest," Quake means researchers who have a financial interest in the outcome of their research, something that is becoming increasingly common as academic researchers partner with businesses to develop new technology.
He raises many important and interesting questions, some of which I'll write about in the future. One point that deserves much more space is the fact that many researchers have conflicts of interest even when they aren't selling a product.
What may not be clear to the casual reader is that research agencies like the NIH prefer to give grants to researchers with a history of success. Last time they gave you a grant, did you publish a series of important papers, or did all your projects end in failure? In the latter case, you may be out of a job.
As Quake puts it: "Interestingly, it is not unusual for basic scientists with no commercial relationships to be dependent on grants for their salaries and therefore have a significant personal financial interest in preserving their grants. Although COI experts have assured me that this is not a conflict that needs to be managed, I must confess that I have some difficulty with the distinction they are trying to draw. Who is under greater temptation to bias the results of their research: the financially comfortable academic entrepreneur, or the ivory tower scientist who may not be able to pay his mortgage if his grant is not renewed?"
Nobody Sees Your Data But You
A baseball player is paid largely on his ability to perform on the field. If you are known as a power hitter, you had better produce home runs.
The major difference between a baseball star and a research star is that a baseball player's performance is public. Everybody at the game knows whether he hit a home run or struck out.
In contrast, the only person with access to a researcher's data is the researcher. It is as if a baseball player went to an automatic batting cage where nobody was looking, took a few swings, came out and told the team management how many home runs he hit and was paid accordingly.
A number of non-scientists I've talked to seem to be under the impression that during peer review, the reviewers check the data. They don't. They can't, really. In my experiments, I ask people questions and mark down whether they got it right or wrong. The reviewers weren't in the testing room with me, so they simply have to take my word for it.
Now let's say you are a researcher and it's time to renew your grant. You've been swinging and missing -- your data are uninterpretable or simply uninteresting. The easiest way to make the data more interesting is to "fix" them.
All Scientists Have Conflicts of Interest
Ultimately, all scientists have a conflict of interest, because our promotions and salaries are based, directly or indirectly, on the research we have produced. And there are many non-financial incentives. Nobody wants to be seen as a failure.
I don't believe many people are simply making up their data (though it happens). A larger concern is selectively reporting results. Suppose you have run four different experiments. Three of them support one conclusion, but the fourth supports the opposite conclusion.
You simply can't publish that; journals are loath to accept a paper with conflicting results. That leaves three options. You can give up on the project and admit to having wasted perhaps 2-3 years of your life. You can continue doing research to try to figure out why you are getting conflicting results, perhaps succeeding, perhaps not, but in any case spending time and money you may not have. Or you can write a paper about the first three experiments and forget about the "bad" experiment. Some people take the last route. This is best known in pharmaceutical research, but it happens everywhere.
Even worse, perhaps you run an experiment whose results challenge the theory for which you are known -- an experiment that challenges the validity of your life's work. Who really wants to publish that?
What to Do
Quake's point is that you can't eliminate conflicts of interest from science. A baseball player needs to win games. A researcher needs to publish. As long as this is true, baseball players will have an incentive to take steroids and researchers will have incentives to "improve" their work. Any proposed solution that ignores these basic facts is doomed to fail.
Are Scientists Really Cheating?
This article might seem gloomy. My point is simply that everybody has an incentive to cheat, not that everybody does. In the course of your life, there will be a number of instances in which it would be in your financial interest to murder somebody. Still, most people don't murder.
It is popular in some circles to assume most people are fundamentally bad and will do anything they can get away with, but my experience -- professionally and personally -- is otherwise.
Moreover, academic fraud is generally not in one's long-term interests. Even if it isn't exposed, others will fail to replicate your results and your theories will be disproved. The best thing for a researcher's career is ultimately to be right, and fraud won't help you with that.
(Image sourced from blog.springsource.com)