You’re spending way too much time doing peer reviews. Unless you’re the guy who sends the one-sentence assessment: “looks good.” I’m not talking to you. You need to spend more time. No, I’m talking to you, the ones who spend all weekend correcting the grammar and writing careful Track Changes notes on a barely-translated student effort from Turkey. You’re conscientious and thorough, and you're spending your time on all the wrong things.
Here’s tip number one: no matter how bad it is, don’t correct grammar. It takes far too much time, and the journal has paid staff to do that. Furthermore, limiting your assessment to the authors’ command of written English (unless the English is so poor that the paper is unreadable) doesn’t help anyone. The editor wants to know about the science. We can judge the English for ourselves.
Your task as a reviewer has two parts: reading the paper, and writing your assessment for the editors. The first is the more important, and should take the most time. Read the paper, analyze the argument, and assess the novelty and impact of the work, the elegance of the experiments, and the thoroughness of the literature review. When necessary, check references, check your own mental library to assess the authors' grasp of the field, or do a Google search for potential plagiarism. Draw as completely as you can on your training and your expertise. That’s why the editors selected you.
However, don’t neglect the second part of reviewing, or the effort you spent on the first won’t matter. Make your assessment count.
Good reviews answer the big question:
“Interesting paper, presenting a good deal of new data. Technically sound, needs only some editorial changes.”
This provides a concise summary of the paper's merit (it's interesting and presents new data; it's also technically sound) and what it will take to make the paper acceptable.
“The paper as a meeting paper is generally well written and is suitable for publication.”
That’s what the editor wants to know: are this paper’s conclusions interesting enough, and is the science reported on good enough, to warrant publication?
Better reviews provide some examples, especially if the paper has potential but isn’t ready to publish.
“Paper appears to be entirely based on just three experiments, without any replication. Results are inconclusive, and do not provide any information that is not already available in the literature. There is insufficient information about how the experiments were actually done.”
Or
“In my opinion, this paper was poorly written and the results did not support the conclusions to the extent that I think the authors hoped. The objective was clear, but the methodology they used is questionable.”
It's important that the reviewer assess the significance of the topic. It’s even more helpful to know how well the paper contributes to that topic – and if it is inadequate, why it is inadequate.
The best reviews offer a mix of big picture assessments and detailed feedback (identifying information removed or changed):
“This is a paper representing very important field, i.e., effects of water on flotation process. Unfortunately, the experimental techniques used in this study provided little relevant information and interpretation of the result has been insufficient in some parts of the study. These are my questions and comments:
- As xxy flotation is normally carried out with activation, why this study is based on non-activated xxy? The relevance of this study cannot be verified in ore flotation where activation is used. Of course mineral processors would like to see the effects of water recycling on real ore flotation.
- As the samples used in the experimental studies are quite heavily oxidized in grinding and storing stages it brings up a question of the relevance of the results of all techniques used. It is likely that similar surface products do not exist in some industrial beneficiation processes.
- Interpretation of different spectra is not clear. If I understood correctly, no change in xxy surface means a completely flat line. Nearly flat lines can only be seen in the case of aa and bb. There is only one case where negative peaks occur, indicating the result of the specific treatment. In all other cases, the treatment has produced more oxidation products. Perhaps the first spectrum in Fig C is a failure. To me identifying these peaks in the IR spectra is speculative and requires imagination. The role of the IR study has to be reconsidered by the authors.
“XPS studies suffer from similar basic problems as IR spectroscopy studies do: interpretations lie on an uncertain base. In the case of aa adsorption, lines should have been used in the interpretation based on current literature. Table Y presents too much numerical information that does not make understanding the interpretations much easier. I would rather see more figures to illustrate XPS interpretations.”
Remember that as a peer reviewer, your ultimate job is to develop the journal, and through the journal, the field itself: the object is not to criticize papers but to identify the best science for publication and aid the authors in presenting that science in the best possible way. Furthermore, you want to do so in a way that makes the best use of your own time and energy, as well as that of the editors and authors.
Finally, whatever you do, write more than "looks good." That's not helpful to anybody.
Emily Wortman-Wunder is SME's technical editor, managing the editing and publication of Minerals & Metallurgical Processing, the Transactions of SME, and Mining Engineering's technical papers.