
much ado about noting

For the usual fee - plus expenses

Janne-Tuomas Seppänen
09 Jan 2012

Whether you are a scientist - with your author, reviewer or editor hat on - or an editorial director employed by a publisher, or the owner of that publishing company, or a board member in a scientific society, or a recruiting university administrator, you probably agree with the statement that scientific peer review is valuable. But each of these people probably has a different concept and unit of measurement for "value" in that statement.

A scientist might have a lofty concept of "truth" or "utility" as the value, or more likely a mundane "my reputation". Shareholders and business executives in publishing companies (and to some extent treasurers in societies publishing their own journals) benefit from having a more easily quantifiable, ratio-scaled unit of measurement for value. Money. But what is the value of peer review in terms of money? Let's do a simple back-of-the-excelsheet exercise.

Simplified to basics, the revenue from publishing a journal equals circulation multiplied by the subscription price.
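
For concreteness, the same model as a minimal sketch (the circulation and price figures below are invented, since real circulation numbers are exactly what publishers do not disclose):

```python
# The basic model: annual subscription revenue = circulation * subscription price.
# Both input figures are invented placeholders, not real data.
def journal_revenue(circulation: int, price_eur: float) -> float:
    """Annual revenue, ignoring bundling, advertising and author fees."""
    return circulation * price_eur

print(journal_revenue(500, 2000))  # 500 subscribing libraries at 2000 EUR -> 1,000,000 EUR
```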

The circulation data are closely held secrets, but research indicates that price-per-citation is an important factor when librarians make their purchasing decisions [1]. That is, of two journals with equal price and number of articles, the librarian purchases the one that is cited more. A journal with a sufficiently high number of citations can even be preferred over a journal with a higher impact factor. What this means for a publisher is that more citations equal more purchases, which equals more revenue.

The subscription prices are publicly displayed on publishers' websites (though bundling confounds the data). Given the purchasing patterns of libraries, it should not be a surprise that impact factor is not a very good predictor of subscription prices. The total number of citations in a given year is a much better predictor, with R² around 0.5 (no citation here, it is easy to test yourself). Again, more citations equal a higher price, which equals more revenue.
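
If you want to run that self-test, a sketch of the regression is below. The four data points are invented placeholders; substitute list prices from publisher websites and total-citation counts from Journal Citation Reports:

```python
# Regress subscription price on total citations and report R^2.
# Data points are placeholders; substitute real prices and citation counts.
import numpy as np

citations = np.array([1200, 5400, 9800, 23000])  # total citations in a year
prices = np.array([800, 2100, 3500, 7600])       # subscription price, EUR

a, b = np.polyfit(citations, prices, 1)          # least-squares fit: price ~ a*citations + b
predicted = a * citations + b

ss_res = ((prices - predicted) ** 2).sum()
ss_tot = ((prices - prices.mean()) ** 2).sum()
print(f"price = {a:.3f}*citations + {b:.1f}, R² = {1 - ss_res/ss_tot:.2f}")
```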

Based on the above, publisher revenue should be roughly proportional to citation count. Dividing the revenue from journal subscriptions by the number of citations that all journals of a publisher have accumulated during a year yields revenue-per-citation, which shows interesting patterns (well, my sample size is 4 publishers and I argue here for the existence of two categories, but hey, even publishing decisions are made on a sample of 2 opinions...). Elsevier secures 169€ from each citation you make to an article it published, Wiley-Blackwell 151€, Springer 295€ and Informa (which owns Taylor&Francis) 287€. It seems likely that the effect of citation count on revenue is one of diminishing returns (all possible customers already subscribe to the most prestigious journals, and are simply unable to pay the amount that the citation count for Cell or Lancet would warrant if the relationship were linear), perhaps explaining why the two publishers with the more prestigious rosters stand together at a lower average revenue-per-citation point.
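
The arithmetic behind those figures is one division per publisher. In the sketch below the revenue and citation totals are placeholders I picked only so that the quotients reproduce the per-citation figures above, since the actual totals are not given in the text:

```python
# Revenue-per-citation = subscription revenue / citations accumulated in a year.
# Revenue and citation totals are placeholders, chosen only to reproduce
# the per-citation figures quoted in the text.
publishers = {
    "Elsevier":        (2_000_000_000, 11_834_000),
    "Wiley-Blackwell": (1_000_000_000,  6_622_000),
    "Springer":        (  900_000_000,  3_051_000),
    "Informa":         (  400_000_000,  1_394_000),
}

for name, (revenue_eur, cites) in publishers.items():
    print(f"{name}: {revenue_eur / cites:.0f} € per citation")
```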

How is all this related to the value of peer review? The distribution of citations across journals within a publisher's roster, or across articles within a journal, is highly skewed. For example, how did articles published in 2008 contribute to the citations received (roughly proportional to revenue) in 2010 by one of Elsevier's journals, Animal Behaviour? The top 10% of articles in terms of citations generated 30% (394) of the citations, while the bottom 10% had been cited... twice. Using the guesstimates above, the three articles tied for the top citation count alone secured 10 478€ for Elsevier in 2010, or about the same amount as the 80 least cited articles did. The cost of peer review is similar for all peer-reviewed articles within a journal, so publishing the best article (22 citations) was a seven times more lucrative investment than publishing the median article (3 citations). And coming to 2012, that has spread to an almost ten-fold difference.
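
Those revenue attributions follow directly from the 169€-per-citation guesstimate; a quick check of the arithmetic:

```python
# Back out the Animal Behaviour arithmetic with the 169 EUR-per-citation
# guesstimate for Elsevier. All citation counts are from the text above.
EUR_PER_CITATION = 169

print(10_478 / EUR_PER_CITATION)  # the three top articles' 10 478 EUR ~ 62 citations in 2010

best, median = 22, 3              # citations to the best and the median 2008 article
print(best * EUR_PER_CITATION)    # 3 718 EUR from the best article
print(median * EUR_PER_CITATION)  # 507 EUR from the median article
print(best / median)              # ~7x return on the same peer review cost
```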

This tells us that peer review should be extremely valuable for publishers, because choosing a merely "normal" manuscript instead of one that would have been highly cited translates to a massive relative revenue loss. Even worse, the next journal that gets to have a go at the missed manuscript is very likely published by the worst competitor (being in the same field, but just one step below on the prestige ladder). Unfortunately, peer review as it stands is quite poor at distinguishing would-be citation classics from the mediocre mass. One study [2] estimated the correlation between quantitative reviewer scores and eventual citation rate to be around 0.2, and what is worse, the standard deviation was similar in size to the mean across the quality range. In practice, good editors are much, much more precious to publishers than the current peer review mechanism, which is little better than a coin-toss at identifying the most valuable manuscripts.
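
How weak an r of 0.2 is for picking winners is easy to simulate. In the sketch below the standard-normal distributional assumptions are mine, not from [2]:

```python
# How often does the top-scored manuscript turn out to be the top-cited one,
# if reviewer scores correlate with eventual citations at r = 0.2?
# The standard-normal distributional assumptions are mine, not from [2].
import numpy as np

rng = np.random.default_rng(0)
r, batch, trials = 0.2, 20, 10_000   # 20 competing manuscripts per trial

hits = 0
for _ in range(trials):
    quality = rng.standard_normal(batch)              # eventual citation "quality"
    score = r * quality + np.sqrt(1 - r**2) * rng.standard_normal(batch)
    if np.argmax(score) == np.argmax(quality):        # did reviewers pick the winner?
        hits += 1

print(f"Top-scored manuscript is also the top-cited one in {hits/trials:.0%} of trials")
```

For comparison, blind chance alone would pick the winner in 5% of these trials.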

Publishers clearly should be eager to invest in a better peer review mechanism. And yes, that is a shameless advertisement for the services Peerage of Science provides.

[1] Bergstrom & Bergstrom (2006). Front Ecol Environ 4(9): 488-495.
[2] Patterson & Harris (2009). Scientometrics 80(2): 345-351.

One of the founders of the service, and a postdoctoral researcher at the University of Jyväskylä, Finland.

I was advised against attempts at witty titling early on, first by PhD supervisors, and when I did not listen to them, later by reviewers and editors. So I gave up doing that with scientific articles, but seem to be taking un-poetic licence again with these posts. Inspired by Chris Lortie's recent editorial in IEE.


