I mentioned the phobia against mere experimentation recently, but now I have some more fascinating glimpses into “peer review”. I have some theoretical work I’m trying to finish up and get published somewhere so someone smart can do something with it. It may be junk or not – that’s why we have “peer review” in science. So I send a paper to LATA as something of a first effort. The reviews were negative. Pravda, well Pravda said: it stinks. Oh wait, that was a Tom Lehrer lyric.

Have I mentioned how happy I am to have found a way of earning a living outside of Academia? In any case, here’s the most annoying of the three reviews:

OVERALL RATING: -2 (reject)

The authors discuss well-known methods for specifying Moore machines.
The referee did not find anything new in this paper.
The general direction of this research seems useful.

Certainly a fair comment, and one that would be pertinent except for one glaring omission – the citation. Perhaps the paper does repeat material that is well known to the expert reviewer and not to the poor author. So where is the "for example, Perrin's 1995 survey describes the same methods"? Without a citation, the review has a strong flavor of humbug. But it's not just the reviewer who comes into question here. The proceedings editor received this review and did not ask for clarification. If I had not seen similar responses to papers written by other people, perhaps I'd be less skeptical. But I have been in the position of a conference editor where I asked reviewers for citations – and most of the time, the reviewer was not able to substantiate the claim. Why? Because if the reviewer can substantiate – if she or he really thinks that the paper covers known material – the natural inclination is to point to the known material. Of course, I complained to the conference program committee chairs and got this response.

PS: At conferences, there is usually no discussion on acceptance /
non-acceptance with the authors.

I'm not sure about "usually", but often there is discussion. For example:

"In author response, also called rebuttal, reviewers enter their reviews, authors read the reviews, enter a response, and then reviewers make their final decisions. The purpose is to provide authors a forum to correct and directly address issues raised in the reviews."

– Kathryn S. McKinley, "Improving Publication Quality by Reducing Bias with Double-Blind Reviewing and Author Response", ACM SIGPLAN Notices, 43(8):5–9, August 2008.

See, that is known as a "citation". I made a statement and then gave a reference to backing material so that the reader could look it up. Was that so hard? Going back to the problem with articles on experimental systems, reviewers who find the data unsurprising should be required to point to the literature that provides the same information. "It's well known" without backup is offensive to the whole scientific process. If it is well known, then point to where it is explained. If not, then be silent.

I gotta add that this is far from the worst quality review I ever received. The all-time star included something like this: "The author fails to understand that while in a set {a,b} [something], if the ordering is different and the set is {b,a} then ...". But I'm just starting to send out papers again, so there's hope that something even more amusing will appear.
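For anyone who missed why that review is funny: sets are unordered by definition, so {a,b} and {b,a} denote the same set and "the ordering is different" is meaningless. A two-line sketch in Python (chosen here just as convenient notation; the reviewer's [something] is unknown) makes the point concrete:

```python
# Sets are unordered collections: writing the elements in a different
# order still denotes the same mathematical object.
s1 = {"a", "b"}
s2 = {"b", "a"}

# Equality holds regardless of the order the elements were written in.
assert s1 == s2
print(s1 == s2)  # True
```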

[updated with clarification Feb 6]

Why Computer Science is a failed field #2