Saturday, April 23, 2011

Technology Has Social Consequences

Technology Has Social Consequences | May 2011 | Communications of the ACM: The loss in quality of conference reviewing is just one result of the move to virtual PC meetings. Another outcome is the loss of socialization that took place in PC meetings. It is this lost socialization that contributed to a senior researcher being ignorant of one of the most basic rules of scholarly reviewing.

I quoted the paragraph in Moshe's article that best summarizes his main point. The article is worth reading in full. However, I disagree with both its empirical claims and its main argument. Moshe claims that reviewing quality is down in CS conferences and that reviewers are less aware of good scholarly conduct. However, he provides no empirical evidence for these claims beyond one anecdote and some vague impressions, which I could easily counter with old tales of reviewing incompetence and malice. He then proceeds to argue that those failings are the result of less face-to-face interaction among reviewers. Again, he provides no empirical evidence for the claim, and he does not consider alternative explanations. In particular, he does not consider the fact that many areas of CS have grown rapidly. For example, I just did a simple calculation estimating that computational linguistics as a field has grown at an average of 6%/year over the last 28 years, making the field five times as big now as when I presented my first ACL paper. This growth forced conferences to adopt more complex structures, with areas, multiple tiers, and electronic submission and review discussion, simply to scale up to the much larger population. We can argue about the specifics of reviewing mechanisms, but the old unitary program committee was already collapsing under the strain around 15 years ago for the first-tier conferences I have been involved in.
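The arithmetic behind that fivefold figure is just compound growth; a few lines (my illustration, not the original back-of-the-envelope calculation) confirm that 6% annual growth over 28 years comes out to roughly 5×:

```python
# Sanity check of the growth estimate above: 6%/year compounded over 28 years.
# The specific rate and span come from the post; the code is just illustration.
growth_rate = 0.06   # assumed average annual growth of the field
years = 28           # years since the author's first ACL paper

factor = (1 + growth_rate) ** years
print(f"Growth factor after {years} years: {factor:.2f}")  # about 5.1
```

So a steady 6% annual growth rate is enough, on its own, to quintuple a field in under three decades.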

But there's an even bigger potential problem that Moshe does not discuss. As the field has matured, it takes longer and is harder for someone to become a good reviewer, because there is simply more to know. Meanwhile, the number of people coming into the field continues to grow, and the number of papers submitted grows in proportion. The result is that the ratio of submissions to qualified reviewers increases. Either less-qualified reviewers are enrolled, or qualified reviewers are overloaded. Either way, review quality goes down. For all we know, it is this, and not Web-based program committees, that is the root cause of all the complaining about bad reviewing in the last few years.
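This argument can be made concrete with a toy model (entirely my illustration, with made-up parameters, not anything from the article): suppose everyone in the field submits papers, but only people who entered the field at least some "maturation" number of years ago are qualified reviewers, and that maturation period lengthens as the field accumulates more to know. Then the load per qualified reviewer rises over time even under smooth exponential growth:

```python
# Toy model of reviewer load. All parameter values are hypothetical.
def load_per_reviewer(year, rate=0.06, base_maturation=5.0, lengthening=0.2):
    """Ratio of submissions to qualified reviewers in a given year.

    The field grows at `rate` per year; becoming a qualified reviewer
    takes `base_maturation + lengthening * year` years, i.e. training
    takes longer as the field matures.
    """
    maturation = base_maturation + lengthening * year
    submissions = (1 + rate) ** year                    # everyone submits
    qualified = (1 + rate) ** (year - maturation)       # older cohorts only
    return submissions / qualified                      # = (1+rate)**maturation

print(f"Load per reviewer at year  0: {load_per_reviewer(0):.2f}")
print(f"Load per reviewer at year 28: {load_per_reviewer(28):.2f}")
```

With a fixed maturation lag the ratio would stay constant; it is the lengthening apprenticeship that makes the per-reviewer load grow, which is exactly the mechanism the paragraph describes.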

1 comment:

Unknown said...

This is absolutely on target. As you know, in the NIPS community, there has been a lot of discussion of alternative mechanisms for publication and for reviewing. Many of these may have merit, but none of them will fix the fundamental problem that when a field is expanding exponentially, the set of qualified reviewers (unless it also grows exponentially) is a shrinking percentage of the total set of authors. So perhaps we should focus on mechanisms for helping more people become qualified reviewers.