A few years ago, science blog posts started decorating themselves with a simple green logo. This logo, supplied by ResearchBlogging.org, was meant to credential the blog post as being about peer-reviewed research. As ResearchBlogging.org explains:
"ResearchBlogging.org is a system for identifying the best, most thoughtful blog posts about peer-reviewed research. Since many blogs combine serious posts with more personal or frivolous posts, our site offers a way to find only the most carefully-crafted posts about cutting-edge research, often written by experts in their respective fields."
That's a good goal, and one I support. If you read further down, you see that this primarily amounts to the following: if the post is about a peer-reviewed paper, it's admitted to the network. If it's not, it isn't. I guess the assumption is that the latter is not carefully-crafted or about cutting-edge research. And that's where I get off the bus.
Peer Review is Not Magic
One result of the culture wars is that scientists have needed a way of distinguishing real data from fantasy. If you look around the Internet, no doubt half or even more than half of what is written suggests there's no global warming, that vaccines cause autism, etc. Luckily, fanatics rarely publish in peer-reviewed journals, so once we restrict the debate to what is in peer-reviewed journals, pretty much all the evidence suggests there is global warming, no autism-vaccine link, etc. So pointing to peer review is a useful rhetorical strategy.
That, at least, is what I assume has motivated all the stink about peer review in recent years, as well as ResearchBlogging.org's methods. But it's out of place in the realm of science blogs. It's useful to think about what peer review actually is.
A reviewer for a paper reads the paper. The reviewer does not (usually) attempt to replicate the experiment. The reviewer does not have access to the data and can't check that the analyses were done correctly. At best, the reviewer evaluates the conclusions the authors draw, and maybe criticizes the experimental protocol or the statistical analyses used (assuming the reviewers understand statistics, which in my field is certainly not always the case). But the reviewer can't check that the data weren't made up, that the experimental protocol was actually followed, that there were no errors in data analysis, etc.
In other words, the reviewer can do only and exactly what a good science blogger does. So good science blogging is, at its essence, a kind of peer review.
Drawbacks
Now, you might worry about the fact that the blogger could be anyone. There's something to that. Of course, ResearchBlogging.org has the same problem. Just because someone is blogging about a peer-reviewed paper doesn't mean they understand it (or that they aren't lying about it, which happens surprisingly often with the fluoride fanatics).
So while peer review might be a useful way of vetting the paper, it won't help us vet the blog. We still have to do that ourselves (and science bloggers seem to do a good job of vetting).
A weakness
Ultimately, I think it's risky to put all our chips on peer review. It's a good system, but it's possible to circumvent. We know that some set of scientists read the paper and thought it was worth publishing (with the caveats mentioned above). Of course, those scientists could be anybody, too -- it's up to the editor. So there's nothing really stopping autism-vaccine fanatics from establishing their own peer-reviewed journal, with reviewers who are all themselves autism-vaccine fanatics.
To an extent, that already happens. As long as there's a critical mass of scientists who think a particular way, they can establish their own journal, submit largely to that journal and review each other's submissions. Thus, papers that couldn't have gotten published at a more mainstream journal can get a home. I think anyone who has done a literature search recently knows there are a lot of bad papers out there (in my field, anyway, though I imagine the same is true in others).
Peer review is a helpful vetting process, and it does make papers better. But it doesn't determine fact. That is something we still have to find for ourselves.
****
Observant readers will have noticed that I use ResearchBlogging.org myself for its citation system. What can I say? It's useful.
18 comments:
Yeah, the presumption that the icon must say something about quality always bugged me too. But then again, RB-tagged posts do tend to have something to do with science, so the system is nice for citing + finding other posts on academicky topics. They're not necessarily good or rigorous, although many times they are, but it's just another way to have a filtered blog feed. Otherwise, there's just too much stuff to follow.
Also, RB allows some lesser-known blogs to get noticed.
And peer review definitely isn't magic. And there's definitely some critical threshold beyond which a conviction, regardless of how ill-founded or previously contested, automatically becomes 'true' by virtue of being believed in by enough people.
A good example is the absolutely horrid mangling of eukaryotic phylogeny/evolution in cell/biomedical research papers, where they still use a model that's a good 15-20 years out of date. But because the reviewers don't know any better either, the errors propagate indefinitely and become fact in their field.
That's just a case I notice, and it's scary to think of all the other assumptions that go by unchallenged in various disciplines...
This post made me rethink a little about Research Blogging posts. I suppose that blogging about science slightly blurs the line between the science and the blogging, such that you have to be awake to the bias of the given blog. But the bias of a given blog is sometimes what makes reading blogs fun. It is generally a good thing that a critical mass of people [scientists] who think a particular way can establish their own community [journal]. If you disagree with the community you can just ignore them, but if you think they have harmful ideas then you may have to actively oppose them, simply because it is the right thing to do.
For me, finding a blog or group that is an expert in the field you can trust is vital. As Psi Wavefunction points out, there are a lot of errors out there that propagate. Trust certain blogs on their analysis when they are talking about their expertise, and take the rest of what they say as conversation and opinion.
I agree with what everyone has said. The RB icon can lend (unwarranted) authority to a blog, but as Psi notes, RB can be useful for new bloggers to be noticed.
I honestly wouldn't mind the RB icon somehow noting that I am not a professional scientist or a specialist. To compensate for this, I note that I am an undergrad interested in blogging cool research throughout my blog. Keeping things honest and transparent.
Perhaps RB could create a committee that evaluates submitted posts on some minimum requirements? I don't know if they have someone who checks posts or not.
The main idea of the system was to gather posts discussing and looking at published science papers. Peer review is not magic, and that point is well taken, but it's a useful first filter; the second filter is the editors and the post-flagging system.
In answer to Kele: we do check the posts (I'm one of the editors), but checking each and every post as it comes in would take more people-power and more time than volunteers have. Instead, each blog is checked as it comes in, and at the end of the week I take a scan through the posts just to check there are no very bad ones. Readers can also flag posts they feel are inappropriate.
I know I have a personal interest in this (being an editor), but I *have* to take issue with the title here. This post is titled "Research blogging, get over yourself," and the main message is, "Yeah, peer review isn't that perfect." That is not a problem with ResearchBlogging, which is just using the existing system as a primary filter for scientific content.
Lab Rat -- the title was to catch attention. But seriously, I find this focus on peer review annoying, even a little fetishistic. Whether or not the research being discussed has been peer reviewed is entirely orthogonal to how good the blog post is. If anything, it may be counter-productive.
In my field, papers usually take several years to be published and circulate in manuscript form prior to that, so writing about peer-reviewed research and writing about cutting-edge research (RB's stated aim) are at cross-purposes. Moreover, much of the interesting theoretical work comes out in book chapters or books, which are often not peer reviewed.
I agree, peer review isn't magic. However, as others have said in this comment thread, requiring discussion of peer-reviewed research turns out to be a good way to filter the signal out of an awful lot of noise.
We have over 1,300 member blogs at ResearchBlogging.org, so if each one of them averages one post per day, that would mean sifting through about 10,000 posts in a week. In fact, on average about 200 posts per week use our icon. That's a pretty good filter.
Yes, we miss some of the good stuff, and we'd love to have a way to catch more of that without picking up too much noise, but we haven't come up with a good way to do that yet.
Some not-so-good stuff does slip through the cracks, but as Lab Rat pointed out, our guidelines do ask that bloggers read and understand the paper in question, and our registered users can flag posts that don't meet our guidelines. In addition to our language-specific editors who try to keep an eye on all the posts, we also have discipline-specific content editors who read all the posts in their field each week and select the best posts on our news blog and the RBEditors Twitter feed.
Finally, once a week, I highlight the best of the best in my column on Seedmagazine.com.
Dave -- you say that your filter keeps most of the good and gets rid of most of the bad. I say the opposite is true. I guess we need some data.
Of course, if we did an empirical study of the issue, the discussion couldn't be reported on Research Blogging, since it wouldn't have been published in a peer-reviewed journal (I can't think of any journals that would carry such work)... which is not the same as saying it wouldn't be peer reviewed -- most blogs, in the end, get a good form of peer review in the comments section.
I'm not sure what's incongruous. ResearchBlogging identifies posts that are written about peer-reviewed research. The editors, each week, identify "the best" from within their respective fields-of-vision.
Those two facts alone would seem to hold up to the mission statement of ResearchBlogging.
Additionally, that readers and editors can flag posts as inappropriate adds to the filtering process.
I suppose, if the above is not enough, then your argument might be that the *mission statement* of RB.org is insufficient - but as it is currently written, I think the mission is fulfilled.
(full disclosure: I'm also an RB.org editor)
Speaking as a reader of, and poster to, RB.org, it wouldn't occur to me to treat the raw list of submitted posts as somehow authoritative. Maybe this is because I'm familiar with how little effort it takes to get onto that list. But I also think it doesn't take a lot of familiarity with popular science controversies (anti-vaxxing, climate denialism, creationism) to realize that the mere fact that a person cites a peer-reviewed paper doesn't mean that their position reflects (1) scientific consensus or even (2) the contents of the paper they cite.
At the same time, the "peer review" of the blogosphere takes place across dozens of different sites, which is probably daunting to someone who just wants to find good posts. Maybe RB.org could implement a separate feed for posts selected by the editors each week, or track links to submitted posts a la Technorati.
GamesWithWords,
It's not really about identifying "most" of the good posts. It's about identifying enough to be a worthwhile endeavor. This isn't the only filter; there are many. I use a variety of methods, including ResearchBlogging, Twitter, FriendFeed, RSS, blog networks, and Google, to find interesting posts.
As Psi Wavefunction points out, our site can be a great way to find out about blogs you wouldn't have noticed otherwise.
And actually, despite your skepticism, there have been papers published about ResearchBlogging.org.
GamesWithWords said: "In my field, papers usually take several years to be published and circulate in manuscript form prior to that, so writing about peer-reviewed research and writing about cutting-edge research (RB's stated aim) are at cross-purposes."
This is not necessarily true, because even if you are writing about cutting-edge unpublished research you are generally giving context for that research, which involves reference to already published literature. There are, of course, likely to be exceptions to this rule, but by and large discussion of unpublished work still benefits from published background.
I differ with much of what is said here, but as a comment would not have done it justice, I have written a blog post about it. Please do read it, and we can discuss further either here or there.
I think some of the disconnect here is that GWW wants (or expects) RB.org to be about good peer-reviewed science, which is decidedly not the point.
The point of RB.org is to highlight good science blogging about peer-reviewed science. And often, good science blogging is explicitly about BAD science.
Games with Words - I do consider blogging a form of peer review, and debunking bad peer reviewed research falls under the RB mandate. On the other hand if bad blog posts appear on the RB site, other members of the network are free to flag them.
Those who submit posts through Research Blogging are (in essence) unpaid contributors to Seed Media Group, although we all benefit from the increased exposure. I don't want to sound too snarky or anything, but what's with the peer review within "blogging on peer-reviewed research" (i.e., Editor's Selections)? I say this despite having received the occasional designation myself. Who selects the editors in the first place? It's not exactly like being on the editorial board of a journal. But then again I haven't tried to volunteer, it does seem like a time consuming task. Plus, the credentials of those of us with pseudonyms aren't readily apparent...
Jason -- I'm interested in both good science and good science blogging. I think the peer review process does improve papers, but it's not the arbiter of truth. It's not fundamental to the scientific method, either (viz. the arguments about whether to do away with peer review, whereas there are no arguments about whether to do away with replication). Seeing it used as such is a little disturbing.
You see this a lot in blog mission statements as well (e.g., "this is a blog about peer-reviewed research"). What the blogger presumably means is that the blog is about science and not baloney, but what the blogger is saying is that it is about some subset of science that happens to have gone through a particular process that we currently use to vet articles but that isn't in any way a fundamental part of the scientific method.
Calden -- I agree that context is good, and a lot of that will happen to be published. However, I had understood RB's rule to be that the blog post had to focus on a particular piece of peer-reviewed research, not merely that a piece of peer-reviewed research gets mentioned somewhere. If the rule is the latter, then some of my objections wouldn't apply.
@Games with words: The peer-reviewed study featured in the blog post has to be mentioned and understood. It may be that the blogger spends half the post discussing the peer-reviewed paper and then half discussing other (unpublished) research that stems from it, or their own opinion on the applications and future benefits of that research. It can be either the sole purpose of the post, or a platform to discuss related scientific issues.
Researchblogging.org does not set out to provide the ultimate truth of science across many disciplines. It provides an aggregation system for people to read scientists' views on science, which is filtered by peer review, as this is the method used for filtering scientific work.
An aggregator that focuses on all kinds of science, whether peer reviewed or not, might indeed be interesting, but that is not the function of ResearchBlogging.org.
(As a disclaimer: I think this was a very good post that raised some interesting points. But there has been a lot of "all science bloggers are so up themselves" stuff floating around after the Pepsi thing at Scienceblogs, so I'm possibly more willing to jump on things than I otherwise would be.)
Lab Rat -- the timing in that case was poor.
I think my frustration with the over-emphasis on peer-reviewed papers stems from two things. One is that my anecdotal evidence suggests that much of the public seems to fundamentally misunderstand peer review. My wife works in an area of law that involves a fair amount of science, and in law school, she quickly realized that most of her classmates thought what "peer review" meant was making sure the data are real, which of course is the one thing peer review doesn't do.
The downside is that when the public, which has been hearing people pontificate about peer review for so many decades, finally catches on that they've been "lied to" (and it will seem that way), it won't help science's credibility.
That, of course, assumes that I'm right that most people think peer review means double-checking the data, for which I don't have any hard proof.
That's one source of my frustration with the peer review thing. I've realized in the course of this conversation that there's an additional beef I have with ResearchBlogging in particular. As I said above, I don't really like reading posts that are largely about a single paper. Some are good, but a lot I find too narrow. My experience of science is not one that happens one paper at a time, but rather groups of papers at a time. When I'm interested in a single project, it's often precisely because it isn't yet published. It doesn't help that mainstream media tends to report on findings as if they are punctate and out-of-the-blue, which I think misrepresents science (and confuses a lot of readers, since often, a year later, another paper will say exactly the opposite, also out of the blue).
So ResearchBlogging.org aggregates exactly those posts I find less interesting. To an extent, that's an issue of style, and I'm willing to agree to disagree. But then ResearchBlogging.org is very clear in its mission statements that these are the best of the best, rather than "the kind of blog posts we happen to like." Them's fightin' words! So I thought it was time to throw a punch back and stand up for the kind of science writing I think is best.