I've been reviewing a lot of papers lately. I get sucked into reviewing because the request usually comes from a former adviser or because the paper is written by a close colleague. I like to think that I'm a pretty reasonable reviewer, usually suggesting revisions to improve style or clarity. If I get a paper I'd like to see in that journal, I always accept with minor or major revisions requested. In the course of reviewing this latest string of papers, some common problems have emerged.
One problem is that the new results are not compared with the literature. It is difficult to gauge whether the performance of X really is better than prior work if no hard data (or citations) are given. This leaves an exhausting amount of legwork to the reviewer, who has to go look up the references and make the comparison themselves. Sometimes the performance of X is actually worse than the literature (but the authors neglect to point that out). Then my perception is that the authors were trying to pull the wool over my eyes. With this in mind, I re-read a draft of mine for a paper I'd like to submit soon. I, too, was guilty!
A second problem is that key papers are not cited. If the submitted manuscript is incremental work building on a previously published paper, then one should cite that publication. Again - I feel as if the wool is being pulled over my eyes.
A third problem is qualitative vs. quantitative interpretation. Quantitative is always best, although there are many situations where qualitative is the only option. My particular criticism is reserved for cases where the data can easily be interpreted quantitatively, but the authors give only a qualitative interpretation. To me, that reflects laziness. Yes, I know that integrating that peak will take you an extra 5 min in Matlab, but it's worth it - trust me.
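To illustrate the kind of five-minute quantitative step I mean, here is a minimal sketch of integrating a peak with the trapezoidal rule. The data values below are fabricated for illustration, and I've used plain Python rather than Matlab; in practice `x` and `y` would come from your instrument, likely after baseline subtraction.

```python
# Hypothetical example: numerically integrating a peak with the
# trapezoidal rule. The data are made up for illustration.

def trapezoid_area(x, y):
    """Integrate y over x using the trapezoidal rule."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2
               for i in range(len(x) - 1))

# A small triangular "peak" on a flat baseline (fabricated values).
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0, 1.0, 2.0, 1.0, 0.0]

area = trapezoid_area(x, y)
print(area)  # 4.0 for this triangular peak
```

Reporting that area (and doing the same for a reference compound or a literature benchmark) turns a hand-wavy "the peak is larger" into a number a reviewer can actually check.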
Of course, these are just the recent tiffs I've had with reviewing. There are always the variations on plagiarism (of data, words, images, etc.), double publication, and missing control experiments. This list could go on, but I'm done with reviewing for a while (I hope).