Gene Spafford's short reply pretty much nails it: Gene Spafford's answer to How hard is it to submit a paper to a famous conference in computer science?

On top of the low acceptance rate, there's a lot of noise (random fluctuation) in the reviewing process. As a prime example, at the flagship NIPS conference in the neural-net/deep-learning area, the acceptance rate for oral presentation is very low, and even the acceptance rate for poster presentation is low.

Meanwhile, given the current surge of interest in Deep Learning, the number of submissions is exploding. It's very hard to find enough competent referees to put two or three of them on each paper, especially since reviewing papers is perhaps the most thankless task a researcher can spend time on. Many of us feel obligated to do some amount of reviewing as a service to our field, but there's no real reward (except the occasional chance to see a good paper before everyone else does, though you're not supposed to make use of what you see), a lot of hassle, and a lot of time required to do it right. All of that time is taken away from our own research and from writing our own papers.

So we end up with stories of key points being missed or misunderstood by over-stressed, uninterested, or incompetent reviewers, or of grad students being pressed into last-minute service to review papers.

The NIPS community, being data geeks at heart, has spent some time trying to quantify the amount of noise. There's no "ground truth" (that is, no oracle that can say whether a paper should or should not be accepted), so these studies take some random subset of the papers, give them to two NIPS-typical teams of reviewers, and check whether the two groups' recommendations agree. Some references are below, but googling can find you more.

Basically, it seems that some really weak papers are rejected by both groups, a few stellar ones are perhaps accepted reliably by both groups, and there's a frightening level of disagreement on which of the others should be accepted. So if you could somehow submit a good paper to NIPS several times, with different randomly chosen referees each time, it would sometimes be accepted for the conference and sometimes not.

The NIPS experiment
The NIPS Experiment
https://arxiv.org/pdf/1708.09794...
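To make the split-committee setup concrete, here is a minimal simulation sketch in Python. It is not the actual study design, just an assumed model: each paper has a latent quality, each committee scores papers with independent reviewer noise, and each committee accepts a fixed fraction of its top-scoring papers. All parameters are illustrative.

```python
import random

# Minimal sketch (not the actual NIPS study code). Assumed model:
# each paper has a latent quality; each committee sees quality plus
# independent reviewer noise; each committee accepts the top 25%
# of what it sees. All numbers are illustrative.

random.seed(0)

N_PAPERS = 1000
ACCEPT_RATE = 0.25
NOISE = 1.0  # reviewer noise, same scale as the spread in quality

quality = [random.gauss(0, 1) for _ in range(N_PAPERS)]

def committee_accepts(quality):
    """Return the set of paper indices one noisy committee accepts."""
    scores = sorted(
        ((q + random.gauss(0, NOISE), i) for i, q in enumerate(quality)),
        reverse=True,
    )
    n_accept = int(ACCEPT_RATE * len(quality))
    return {i for _, i in scores[:n_accept]}

a = committee_accepts(quality)
b = committee_accepts(quality)

# Of the papers committee A accepted, what fraction would B reject?
disagreement = len(a - b) / len(a)
print(f"Committee A accepted {len(a)} papers;")
print(f"{disagreement:.0%} of them would have been rejected by committee B.")
```

With reviewer noise on the same scale as the spread in paper quality, roughly half of one committee's accepted papers are rejected by the other, which is in the same ballpark as the level of disagreement the actual NIPS experiment reportedly found.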
Because of problems like this, I personally think that the use of unpaid expert referees by conferences and journals as the gatekeepers and error-checkers, controlling what other researchers can easily see and what they cannot, is a system whose time has passed.

I think that in the future, everyone will be able to put their papers into some permanent archive where they can reliably be found. This should be free to readers and, ideally, free to authors. Simple storage and indexing of these papers is not very costly, and could be covered by the government or some foundation.

A lot of that stuff will be garbage, so we need to build up a system for quality control, flagging of errors and issues, evaluation, and recommendation based on organizations or informal networks of respected people in the field. This is an extension of the way we evaluate and recommend products and services online now. The best papers will be noticed because a lot of people recommend them.

That's not an infallible system, but it's probably better than what we have now, especially if we think carefully about ways to keep self-promotion, self-serving commercial interests, and website optimization out of the game. I don't have a complete design in mind, but this is something we should work toward.

If this is successful, promotion committees could use the more reliable markers of online reputation in place of the current "number of publications" metrics.

Conferences could return to their original purpose of exchanging ideas, and would no longer be a check-mark for promotion.

Journals could focus on long-form presentation of ideas, by invitation, with perhaps lots of theme issues and so on (as the better non-peer-reviewed journals and technical publications do today).

And the expensive journals that are abusing their position as arbiters of academic quality and advancement could achieve a well-earned extinction.