
Around the Internet - 8/31


Publication
A warning about the perils of preprint repositories.

Statistical evidence that writing book chapters isn't worth the effort. (Though caveat: the author also doesn't find evidence of higher citation rates for review papers in journals, which I had thought was well-established.)

One person who finds things to like in the publication process (I know, I don't link to these often).

Neuroskeptic argues that we don't necessarily want to increase replication, just replicability. (Agreed, but how do we know if replicability rates are high enough without conducting replications?)

Language
Did Chris Christie really talk about himself too much in Tampa? 

Other Cognitive Science
Cognitive load disrupts implicit theory of mind processing. So maybe the reason young children succeed at implicit tasks isn't that those tasks don't require executive processing (whether they require less is still up for grabs).

Lying with statistics

One of the most concise explanations of why your units of measurement matter, courtesy of XKCD:


Revision, Revision, Revision

I have finally been going through the papers in the Frontiers Special Topic on publication and peer review in which my paper on replication came out. One of the arguments that appears in many of these papers (like this one)* -- and in many discussions of the review process -- is that when papers are published, they should be published along with the reviews.

My experience with the process -- which I admit is limited -- is that you submit a paper, reviewers raise concerns, and you only get published if you can revise the manuscript to address those concerns (which may include new analyses or even new experiments). At that stage, the reviews are a historical document, commenting on a paper that no longer exists. This may be useful to historians of science, but I don't understand how it helps the scientific process (other than that, I suppose, transparency is a good thing).

So these proposals only make sense to me if it is assumed that papers are *not* typically revised in any meaningful way based on review. That is, reviews are more like book reviews: comments on a finished product. Of my own published work, three papers were accepted more-or-less as is (and frankly I think the papers would have benefited from more substantial feedback from the reviewers). So there, the reviews are at least referring to a manuscript very similar to the one that appeared in print (though they did ask me to clarify a few things in the text, which I did).

Other papers went through more substantial revision. One remained pretty similar in content, though we added a whole slew of confirmatory analyses that were requested by reviewers. The most recent paper actually changed substantially, and in many ways is a different -- and much better! -- paper than what we originally submitted. Of the three papers currently under review, two have new experiments based on reviewer comments, and the third has an entirely new introduction and general discussion (the reviewers convinced me to re-think what I thought the paper was about). So the reviews would help you figure out which aspects of the paper we (the authors) thought of on our own and which are based on reviewer comments, but even then that's not quite right, since I usually get comments from a number of colleagues before I make the first submission. There are, of course, reviews from the second round, but those are often just from one or two of the original reviewers and mostly focus on whether we addressed their original concerns.

So that's my experience, but perhaps it's unusual. I've posted a poll (look top right). Let me know what your experience is. Since this may vary by field, feel free to leave a comment on this post saying what field you are in.

---
*To be fair, this author is describing a process that has actually been implemented for a couple of economics journals, so apparently it works to (at least some) people's satisfaction.

Have you seen me before?

I have been using PCA to correct blink artifact in an EEG study that I am presenting at AMLaP in a couple weeks. Generally, I think I've gotten pretty good at detecting blinks. I do see other things that look like artifact but which I don't understand as well. For instance, look at this channel plot:
(You should be able to increase the size of the picture by opening it in a new window.) So this looks a bit like a blink, but it's in the wrong place entirely. This is a 128-electrode EGI cap, with the electrodes listed sequentially (the top one is electrode 1 and the bottom is electrode 124 -- I don't use electrodes 125-128 because they tend not to have good contact with the skin).

The way EGI is laid out, the low-numbered and high-numbered electrodes are in the front, whereas the middle-numbered electrodes are in the back (check this picture). So basically what I'm seeing is being generated in the back of the head. Actually, the back left of the head, according to my PCA:
In this figure, the top left panel shows the localization of the signal, and the top right panel shows which trials the signal occurred in. The power spectrum (bottom panel) is also quite odd. I'm going ahead and removing this component, because it's clearly artifact (the amplitude looks way too large to be true EEG), and it affects so many trials that I can't just exclude them without excluding the participant. But I'd really like to know what this is. Because maybe I *should* be excluding the participant.
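For anyone curious what that kind of component check looks like in code, here is a rough sketch -- not my actual pipeline, and the array shapes, component count, and 250 Hz sampling rate are all placeholder assumptions -- of decomposing epoched data with PCA and pulling out one component's channel weights, per-trial loadings, and power spectrum:

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA

# Placeholder epoched data: trials x samples x channels (values assumed for illustration)
n_trials, n_samples, n_channels, sfreq = 100, 500, 124, 250.0
epochs = np.random.randn(n_trials, n_samples, n_channels)

# Stack trials so PCA sees (observations x channels)
X = epochs.reshape(-1, n_channels)
pca = PCA(n_components=20)
scores = pca.fit_transform(X)          # component time courses
weights = pca.components_              # channel weights: where each component "lives" on the scalp

comp = 0                               # the suspicious component
comp_tc = scores[:, comp].reshape(n_trials, n_samples)

trial_loading = comp_tc.std(axis=1)    # which trials the component shows up in
freqs, psd = welch(comp_tc.ravel(), fs=sfreq)  # its power spectrum
```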

So...has anyone seen something like this before?

For those wondering...

Using PCA, I was able to get rid of this artifact fairly cleanly. Here is an image before removal, with the 124 electrodes stacked on one another:
You can see that strange artifact -- which looks like a blink but isn't quite as smooth as your typical blink -- very easily in these four trials.

Here are the same four trials after I subtracted that component, plus another component that is probably blink-related (there were two classic-looking blinks in my data; the component above captured the odd artifact *and* those two blinks; the other component captured only the two classic blinks):
You can see that the odd artifact is gone from all four trials, but otherwise things look very similar.
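In case it's useful, here is roughly how that subtraction works, as a hedged sketch rather than my actual script (the data, shapes, and component indices are made up): zero out the scores for the bad components and back-project to channel space.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder epoched data: trials x samples x channels
n_trials, n_samples, n_channels = 100, 500, 124
epochs = np.random.randn(n_trials, n_samples, n_channels)
X = epochs.reshape(-1, n_channels)

pca = PCA(n_components=20).fit(X)
scores = pca.transform(X)

bad = [0, 1]                     # e.g. the odd artifact component plus the blink component
scores[:, bad] = 0.0             # drop their contribution

X_clean = scores @ pca.components_ + pca.mean_   # back-project without those components
epochs_clean = X_clean.reshape(n_trials, n_samples, n_channels)
```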

Around the Internet - 7/30/2012


Citations

There have been a bunch of posts lately on citations and the Impact Factor. I started with these two posts by DrugMonkey. These posts have links to others in the chain, which you can follow. Here's a slightly older post (from late July) on reasons to self-cite.

Next topic

So I didn't actually see anything else interesting this week. Possibly because I've been trying to streamline a bootstrapping analysis (which I may blog about when I finally get it done). Early in the process, I tried to estimate how long the script would take to run and realized it would be about a week for each analysis, of which I have several to do. So I started hurriedly looking for ways to speed it up...
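For what it's worth, the sort of thing I've been looking at is vectorizing the resampling. Here is a toy sketch -- placeholder data and statistic, nothing like my actual analysis -- that draws all the bootstrap samples in one go with numpy instead of looping over them:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1000)        # placeholder sample
n_boot = 10_000

# Draw every resample at once as an index matrix, then compute the statistic in one shot
idx = rng.integers(0, data.size, size=(n_boot, data.size))
boot_means = data[idx].mean(axis=1)

ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
```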

New source of post-doctoral funding

NSF has just announced what appears to be a new post-doctoral fellowship. The document linked to lists two different tracks: Broadening Participation and Interdisciplinary Research in Behavioral and Social Sciences. It is the second one that seems to be new. Here's the heart of the description:
Track 2. Interdisciplinary Research in Behavioral and Social Sciences (SPRF-IBSS): The SPRF-IBSS track aims to support interdisciplinary training where at least one of the disciplinary components is an SBE science ... The proposal must be motivated by a compelling research question (within the fields of social, behavioral and economic sciences) that requires an interdisciplinary approach for successful investigation. As a result, applicants should demonstrate the need for new or additional skills and expertise beyond his or her core doctoral experience to achieve advances in the proposed research. To acquire the requisite skills and competencies (which may or may not be within SBE sciences), a mentor in the designated field must be selected so that the postdoctoral research fellow and his or her mentor will complement, not reinforce, each other's expertise. 

What I get from this is that the fellowship will be particularly useful for someone with training in one field who wants to get cross-trained in another. Thinking close to home, this might be a psycholinguist who wants training in linguistics or computer science. This makes me think of the legendary IGERT program at UPenn, which trained a string of linguists to use psychological research methods, many of whom are now among my favorite researchers. Which is to say that this cross-training can be very productive.