Friday, February 27, 2009

What to do when you review?

Peer review is one of those exercises that is vitally important to the promulgation of scientific knowledge. It is also one for which we get absolutely no training. And if that is not bad enough, it is cloaked in unnecessary secrecy and anonymity. The way it works is this:

In between irate meetings, technical emergencies, slipping deadlines, and normal workload, you get an unsolicited email from a total stranger. Assuming it doesn’t go directly to the spam bin, opening it reveals that it is from Professor Joe Blogger, associate editor of the Journal of Acceptable Results, and he’d like you to review a paper.

Hopefully the paper is on a topic with which you are actually familiar.

Generally there is a multiple-choice card or sheet, in addition to the actual brain-required part of the job: namely, writing intelligent and useful commentary on somebody else’s research.

As far as I can tell, there are two basic approaches. These are ideally similar but in practice can be opposite. The first is imitation of reviews that one has received. The second is the golden rule.

For me, the first option would involve some combination of non-specific pleasantries, attacks based on a misunderstanding of the literature or techniques used, or multi-page lists of specific minor points which are not related to each other or to the using-whole-sentences part of the review. So when asked to review, I tried the second method, but without any real sense of guidance or idea of what I was working towards. My inability to find a type specimen of a quality review is probably due to the lack of transparency that surrounds peer review. So in order to illuminate this mysterious process, here is my reviewing experience.

I reviewed my first paper a bit less than two years ago. The finished paper was finally published last southern winter while I was in the field, so I was unaware that it had come out until this week. My main aims were:

1. Pay close attention to the methods. In this case, the methodology used in the paper included techniques that I had used myself, albeit long ago and with technical support. I asked: Are they appropriate? Will they overcome noise and interference problems? Do the authors demonstrate their awareness of what the pitfalls are, and show that they were avoided?

2. Look at the data. If it’s good, proceed. If not, double-check it. If it’s better than the described methods are capable of producing, get very suspicious.

3. Is the result important to the study? If so, try to convey this to the editor, who is probably not an expert in both the experimental method AND the topic to which it is being applied. (He may not know much about either.) I did not google the handling editor, but in hindsight I probably should have, in order to ascertain my audience.

4. Is it worth publishing? It is possible for a study to be correct, but unimportant. This point generally forms the central thesis of the review. All of the above points are used to justify it.

I generally don’t pay a huge amount of attention to conclusions or interpretations, as long as they are consistent with the results and with previous work. When reading papers professionally, I am generally data-mining or method-hunting, so it is hard to care about part of a paper that I never read. If the introduction is adequate, then the meaning of the results should be self-evident once the data is presented. Also, the way I see it, if a person does the hard yards and produces a good data set, he or she earns the right to arm-wave about it a little bit.

Note that what I paid attention to when evaluating the paper was not the same as what I wrote in the review. In particular, although I spent a long time looking at the methodology, I assumed the editor wasn't interested in knowing what the nitty-gritty details were. I assumed he just wanted to know if they were appropriate. As a result, I put most of my effort into describing why the results they present were significant. Unfortunately, I was not able to incorporate all suggestions into a coherent essay, and reverted to bullet point suggestions towards the end. If anyone has any tips or suggestions as to how to review effectively, please let me know in comments.

In this particular case, I did check the references, but I probably won’t in the future. That doesn’t require specialist knowledge, so anyone can do it.
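Since checking that in-text citations and the reference list match requires no specialist knowledge, it is also the kind of chore that can be roughed out in software. Here is a minimal sketch of that idea; the `citation_check` helper and its regexes are my own assumptions, handling only simple "Author (year)" style citations, not a real journal tool:

```python
import re

def citation_check(body_text, reference_list):
    """Cross-check in-text citations against a reference list.

    Hypothetical helper: assumes 'Author (year)' or 'Author et al. (year)'
    citations, and reference entries that begin with the first author's
    surname and contain the year. Returns two sets:
    (cited in text but missing from the list, listed but never cited).
    """
    # Find citations like "Garai et al. (2006)" or "Walker (1979)"
    cited = set(re.findall(r"([A-Z][a-z]+)(?: et al\.)? \((\d{4})\)", body_text))

    # Take the leading surname and first year from each reference entry
    listed = set()
    for entry in reference_list:
        m = re.match(r"([A-Z][a-z]+).*?(\d{4})", entry)
        if m:
            listed.add((m.group(1), m.group(2)))

    return cited - listed, listed - cited

# Toy example using citations from this post
body = "Garai et al. (2006) claimed an answer; see also Walker (1979)."
refs = ["Garai, J. et al., 2006. ...", "Kagi, H., Fukura, S., 2008. ..."]
missing, uncited = citation_check(body, refs)
# missing: Walker 1979 is cited but not listed
# uncited: Kagi & Fukura 2008 is listed but never cited
```

A script like this only catches the mechanical mismatches, of course; it says nothing about whether the right papers were cited in the first place.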

So what does this all boil down to? Well, here's the final copy. It is posted with the surviving author’s knowledge and permission:

Dear Handling Editor,

The paper “Infrared and Raman spectroscopic observations on Central African carbonado and the implication to its origin” presents new data that constitutes a significant step forward in the study of this material.

While there have been several previous Raman studies of carbonado, none of them have looked at the possibility of annealing caused by polishing the stone, so that part of the paper is important. But the main value of this paper is the FTIR data.

Science has been trying to determine the IR spectra and nitrogen aggregation of carbonado diamond for almost 20 years. There have been numerous studies that have shown the important 1000-1400 cm-1 region is cluttered with interfering inclusions, and Garai et al. (2006) recently claimed to have an answer by simply interpreting what appear to be inclusions as diamond signal.

However this is the first paper to demonstrate the successful removal of interfering silicates, and to show that the remaining diamond spectra are consistent. The determination of N aggregation state for carbonado and the demonstration that fluid inclusions are intrinsic to the diamond are both key developments towards understanding the early history of this type of diamond.

There are a few minor improvements that should be made to the typescript before publication, however.

Firstly, the abstract and introduction should be read and edited by a native English speaker. There are several awkward sentence constructions that make the paper more difficult to read than it ought to be. The Raman conclusions are occasionally hard to understand as well.

The carbonado spectra in figures 4, 5, and 6 should be labeled so that we know which of the samples in table 1 the spectra correspond to. Figures 5 and 6 should show more than one carbonado spectrum, and figures 4 and 5 might be combinable into one figure.

A data table containing the Raman peak positions and widths observed should be included in the paper.

It would be nice if data files for the spectra were included in the supplementary material.

The Walker 1979 reference is not cited.

Overall, it is a good paper, and an important achievement in the field of carbonado study.

Dr. Charles Magee

You can find the whole paper here, access permitting.

Hiroyuki Kagi, Satoshi Fukura (2008). Infrared and Raman spectroscopic observations of Central African carbonado and implications for its origin. European Journal of Mineralogy, 20(3), 387–393. DOI: 10.1127/0935-1221/2008/0020-1817


Anonymous said...

A couple of processes I follow in a review are:
*** Immediately check that the authors have included data tables, *including* derived data for graphs. It's astounding how many scientists - and presumably editors - think that actual data isn't necessary for peer review. I have summarily rejected, and will continue to reject, papers without data. There was a big meeting of journal editors at the 2008 Goldschmidt conference that agreed that data must be presented with a paper, and even endorsed a formal 'template' for geochemistry data (quelle horreur!). Coming to a journal near you soon...
*** I skim through the paper first, checking that the citations in the paper and the references at the back match. With tools like EndNote there is no excuse for sloppiness anymore. I also think it gives a good indication of the authors' own opinion of the paper. If they've tidied up details like the references, they have really put in the effort, and so should you. I've had numerous papers where it was clear the authors expected the reviewers to do their proof-reading and final editing for them too. That sort of apathy doesn't help the reviewer see the manuscript in a good light.

C W Magee said...

The trouble with data tables is that high-resolution spectra generate a data table that is (for example) 2 columns wide and 3000 rows long. And they aren't that useful in print, since only a lunatic would type them in by hand. That's why I suggested a data repository item. Dunno if it was included in the final version, though; I can only check journals from work.

Wayfarer Scientista said...

I always keep a close eye on the stats.