Scientific peer review is not broken. It is not in crisis; it is not, as one blogger eloquently put it, f***ed up. It has not become any more flawed than it was when Henry Oldenburg created it (btw, the entrepreneur in me might wish to point out that old Henry had other, less altruistic motives in addition to the wish to advance science). Peer review works and gets its objective done (and while old Henry never profited as much as he hoped, he ignited an industry now reaping 35% margins).
In the same way, writing with a typewriter, sending documents by fax, calling from a public pay-phone, or making a transcontinental journey by ship or train are not broken, in crisis, f***ed up, or flawed solutions in any way; they are just as solid and dependable as they always were, if you can find one for the task and can afford the cost in time and money. But they are not very good anymore at delivering what we expect when things need to get done efficiently.
While peer review is not broken and hence can carry on for some time without fixing, peer review can and should be better. The appearance of indefinite propagation does not mean that our current peer review practices have escaped senescence, or that they would not benefit from rejuvenation.
Is it really necessary that authors spend a year or more in the effort to find a suitable journal for their results, by trial and error? Or spend effort in debunking obviously unjustified criticism? Is there really no alternative to editors having to send a minimum of five requests to get two opinions? Or having to make some publishing decisions with only two hastily scribbled opinions at hand, often from reviewers suggested by the authors themselves? Is it really the best use of an editor's time (or a publisher's resources) to act as peer review manager for each manuscript, only to reject three out of four, or more if the journal happens to be a good one? Quality ratings fail to predict citation rates, so are an editor's efforts to use peer review to improve a journal's impact factor doomed? Should reviewers really be content with only the warm fuzzy feeling of being a responsible (but unrecognized) member of the community, and a thank-you note in the year's last issue of the journal, in return for investing their time and expertise?
I genuinely do think Peerage of Science offers better peer review, for everyone. However, putting those woes to rest makes new challenges rear their heads. One of the challenges that surprised me most should perhaps have been obvious from the start.
The challenge is manuscript rejections. Having a manuscript rejected is unpleasant for the authors, and I suspect editors don't particularly enjoy signing their names to rejection letters, or dealing with authors' claims of erroneous decisions, either. Publishers do recognize the inefficiency of rejecting technically sound manuscripts from one journal when they would be a good fit for another, and have recently come up with innovative solutions like the new journal Ecology and Evolution. So rejections are, if not a problem, at least an unavoidable nuisance in the system, right?
In Peerage of Science, there is no such thing as a rejection. The concept simply does not exist, in its current form, within the electronic walls of our little ivory tower. Editors "reject" a manuscript by simply choosing not to follow its peer review process further. Authors never know that a journal lost interest. Everything runs much more smoothly and efficiently, because editors have access to the whole pool of manuscripts under independent peer review, and offer publication for those that meet the standards of the journal, with authors then accepting the offer they consider the best avenue for their research.
Surprisingly, the lack of rejections is a challenge, a difficult thing to sell. Because collectively, we just love rejections.
We love rejections because the rejection rate of the journal where your work is accepted equals prestige, perhaps even more than the impact factor of the journal. The journal Science was already a highly venerated podium for one's results when Eugene Garfield presented his groundbreaking ideas on its pages in 1955. Like it or not, your scientific result is largely rated by other scientists, your employer, the funding institutions and society by the number of other scientific results it displaces as inferior. Things that are hard to get are prestigious just because they are hard to get. Think of jewellery: those useless pieces of carbon, of which De Beers has enough in its vaults to make a pebble beach out of Norway if it wanted to. In the days of electronic publishing, getting published in Nature or Science is something like being handed a diamond (I am not saying that most of those diamonds of knowledge would not cut the rock of reality beautifully too, in addition to distorting light in a pretty way).
How can a journal maintain or create a reputation as a journal employing rigorous standards and publishing only the best articles, if you never hear colleagues agonizing over being rejected, again, from its pages? How can authors get that all-important prestige-by-association (and thereby grants and jobs) for themselves from having a paper in journal X, if no one except the editors of X knows how hard it is to get a paper accepted in that journal?
One solution (or rather, an inevitable emergent feature) is that the beloved agony simply shifts its focus from rejections to lack of offers. I can almost hear myself in a university cafeteria, loudly fretting that my manuscript is almost through peer review in Peerage of Science, and Nature has not sent an offer yet, while that other guy got an offer from them as soon as the reviews were in, before even having to revise the manuscript ("you know, it is only because he is doing those fashionable things; besides, I would not accept their offer even if I got one..."). And on the other end, you might set your account preferences to automatically filter away offers from that irritating author-pays-anything-goes journal because, as you and everyone else knows, everybody gets an offer from them irrespective of the quality of the research, and publishing there is a bit embarrassing. So the good old days can continue; pride and prejudice are not threatened.
However, there are additional ways for journals to establish quality, less dependent on the human tendency to gossip and rant. The journal can publish, or have Peerage of Science publish on its behalf, the quantitative quality indices generated in the process. Rigorous standards are evident when the journal can show that it only publishes articles that attracted, say, four or more reviews, all of which had a review-quality score over 4/5, and the manuscript itself a score in the top 2% of all manuscripts ever evaluated. Peerage of Science would also be happy to publish lists of aggregate indices derived from these measures for journals opting to be included on such lists, similar to the Journal Citation Reports by Thomson Reuters. The fact that journals are judged by the quality of science they carry can only be further emphasized if Peerage of Science becomes commonly used.
Nonetheless, there is a clear danger that with Peerage of Science, each article is judged more on its own merit (gasp!) than on that of the carrying journal.
One of the founders of the service, and a postdoctoral researcher at the University of Jyväskylä, Finland.
Without any education in the classics, and with little knowledge of telomere research, my attempts at witty titling with chthonic monsters and other such things of doubtful senescence are probably even more ill-advised than doing so with 80's rock lyrics.