But what does this have to do with program committees? In 2014, the Neural Information Processing Systems (NIPS) conference split the program committee into two independent committees, and then subjected 10% of the submissions (166 papers) to decision making by both committees. The two committees disagreed on 43 papers. Given the NIPS paper acceptance rate of 25%, this means that close to 60% of the papers accepted by the first committee were rejected by the second one, and vice versa. … This high level of randomness came as a surprise to many people, but I have found it quite expected. My own experience is that in a typical program-committee meeting there is broad agreement for acceptance of the top 10% of the papers, as well as broad agreement for rejection of the bottom 25% of the papers. For the remaining 65% of the submissions there is no agreement, and the final accept/reject decision is fairly random. This is particularly true when the accept/reject decision pivots on issues such as significance and interestingness, which can be quite subjective. Yet we seem to pretend that this random decision reflects the deep wisdom of the program committee.
Communications of the ACM, Divination by Program Committee, September 2017.
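The arithmetic behind the "close to 60%" figure can be sketched with a back-of-the-envelope model. The model below assumes, for simplicity, that both committees accepted the same number of papers and that disagreements split evenly between them; these are my assumptions, not figures from the experiment, and this symmetric model gives roughly 52% rather than 60% (the quoted figure presumably reflects the committees' actual accept counts):

```python
# Back-of-the-envelope check of the NIPS 2014 consistency experiment.
# Assumption (not from the source): both committees accepted the same
# number of papers, and disagreements split evenly between them.
papers = 166          # submissions reviewed by both committees
disagreements = 43    # papers accepted by one committee but not the other
accept_rate = 0.25    # overall NIPS acceptance rate

accepted_per_committee = accept_rate * papers  # about 41.5 papers each

# Each disagreement is a paper accepted by exactly one committee; under
# the symmetric assumption, half the disagreements fall on each side.
overturned_per_committee = disagreements / 2   # about 21.5 papers

fraction_overturned = overturned_per_committee / accepted_per_committee
print(f"{fraction_overturned:.0%} of one committee's accepts "
      f"were rejected by the other")
```

Even under this charitable symmetric model, over half of one committee's accepted papers would have been rejected by the other.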
I’ve often pushed the bounds of acceptable behavior by suggesting that we replace decision-making committees with a roll of the dice (e.g., for military promotions or software defect prioritization). We can’t believe that a committee of experts, meeting for hours or even weeks, could produce results indistinguishable from random numbers. However, on every occasion where it appeared that a committee’s activities were not adding value to the process, and where I was then able to measure the outcome, guess what? Yup, the numbers showed that the committee’s output either produced not a ripple in the trend the committee was trying to manage, or was indistinguishable from randomly selected choices.
Are your committee meetings producing real value, or are they just held on the assumption that they must be adding value?