Background The rise of electronic publishing, preprint archives, blogs, and wikis is raising concerns among publishers, editors, and scientists about the present-day relevance of academic journals and traditional peer review. These concerns are especially fuelled by the ability of search engines to automatically identify and sort information. It appears that academic journals can remain relevant only if acceptance of research for publication within a journal allows readers to infer immediate, reliable information on the value of that research.

Methodology/Principal Findings Here, we systematically assess the effectiveness of journals, through the work of editors and reviewers, at evaluating unpublished research. We find that the distribution of the number of citations to a paper published in a given journal in a specific year converges to a steady state after a journal-specific transient time, and demonstrate that in the steady state the logarithm of the number of citations has a journal-specific typical value. We then develop a model for the asymptotic number of citations accrued by papers published in a journal that closely matches the data.
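The finding that log-citations have a journal-specific typical value suggests a lognormal-like steady-state distribution. The sketch below is a minimal illustration on synthetic data, not the authors' actual model or dataset: it assumes mock citation counts drawn from a lognormal with hypothetical journal-specific parameters, and recovers the typical value (mean) and spread of log-citations.

```python
import numpy as np

# Hypothetical journal-specific parameters (assumed for illustration only).
mu_true, sigma_true = 3.0, 0.8

# Draw mock steady-state citation counts for one journal.
rng = np.random.default_rng(0)
citations = rng.lognormal(mean=mu_true, sigma=sigma_true, size=5000)

# The journal-specific "typical value" of the logarithm of citations
# is the sample mean of log-citations; its spread is the sample std.
log_c = np.log(citations)
mu_hat = log_c.mean()
sigma_hat = log_c.std(ddof=1)

print(f"typical log-citations: {mu_hat:.2f}, spread: {sigma_hat:.2f}")
```

With enough papers per journal, the recovered mean and spread of log-citations characterize that journal's steady-state citation distribution, which is the quantity the model in the text describes.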

Conclusions/Significance Our model enables us to quantify both the typical impact and the range of impacts of papers published in a journal. Finally, we propose a journal-ranking scheme that maximizes the efficiency of locating high-impact research.