Formally published papers that have passed through a traditional prepublication peer review process remain the most important means of communicating science today. Researchers depend on them to learn about the latest advances in their fields and to report their own findings. The intentions of traditional peer review are certainly noble: to ensure methodological integrity and to comment on the potential significance of experimental studies through examination by a panel of objective, expert colleagues. In principle, this system enables science to move forward on the collective confidence of previously published work. Unfortunately, the traditional system has inspired methods of measuring impact that are suboptimal for their intended uses.
Peer-reviewed journals have served an important purpose in evaluating submitted papers and readying them for publication. In theory, one could browse the pages of the most relevant journals to stay current with research on a particular topic. But as the scientific community has grown, so has the number of journals: over 800,000 new articles appeared in PubMed in 2008 alone [1], and the cumulative total now exceeds 19 million [2]. The sheer number makes it impossible for any scientist to read every paper relevant to their research, and a difficult choice has to be made about which papers to read. Journals help by categorizing papers by subject, but in most fields there remain far too many journals and papers to follow.
As a result, we need good filters for quality, importance, and relevance to apply to the scientific literature. There are many we could use, but the majority of scientists filter by preferentially reading articles from specific journals, those they view as the highest quality and the most important. These selections are highly subjective, but the authors' personal experience is that most scientists, when pressed, will point to the Thomson ISI Journal Impact Factor [3] as an external and "objective" measure for ranking the impact of specific journals and of the individual articles within them.
Yet the impact factor, which averages the number of citations per eligible article in each journal, is deeply flawed both in principle and in practice as a tool for filtering the literature. It is mathematically problematic [4,5,6]: around 80% of a journal's impact factor is attributable to around 20% of its papers, even for journals like Nature [7]. It is very sensitive to the categorization of papers as "citeable" (e.g., research-based) or "front matter" (e.g., editorials and commentary) [8], and it is controlled by a private company that is under no obligation to make the underlying data or analysis methods available. Attempts to replicate or predict the reported values have generally failed [9,10,11,12].
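The averaging at the heart of the impact factor, and the citation skew that undermines it, can be sketched with a small synthetic example (the citation counts below are invented for illustration, not drawn from any real journal):

```python
def impact_factor(citations, citable_count):
    # Journal impact factor: citations received in a given year by items
    # published in the two preceding years, divided by the number of
    # articles classed as "citeable" over those years.
    return sum(citations) / citable_count

# Invented, heavily skewed citation counts for ten "citeable" papers.
citations = [120, 45, 9, 5, 4, 3, 2, 1, 1, 0]

jif = impact_factor(citations, citable_count=10)
print(jif)  # 19.0

# Fraction of all citations contributed by the top 20% of papers.
top_share = sum(sorted(citations, reverse=True)[:2]) / sum(citations)
print(round(top_share, 2))  # 0.87
```

Here two papers out of ten account for 87% of the citations, yet every paper in the journal inherits the same headline figure of 19.0.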
Though the impact factor is flawed, it may be useful for evaluating journals in some contexts, and other, more sophisticated journal metrics are emerging [13,14,15,16]. But for the job of assessing the importance of specific papers, the impact factor (or any other journal-based metric, for that matter) cannot escape an even more fundamental problem: it is simply not designed to capture the qualities of individual papers.
If choosing which articles to read on the basis of journal-level metrics is not effective, then we need a measure of importance that tells us about the article. It makes sense that when choosing which of a set of articles to read, we should turn to “article-level metrics,” yet in practice data on individual articles are rarely considered, let alone seriously measured.
Perhaps the main reason for this absence is a practical one. Accurately determining the importance of an article takes years and is very difficult to do objectively. The “gold standard” of article impact is formal citation in the scholarly literature, but citation metrics have their own challenges. One is that citation metrics do not take the “sentiment” of the citation into account: an article that is heavily cited for being wrong is perhaps important in its own way [17], but using citation counts without any context can be misleading. The biggest problem, though, is the time delay inherent in citations. The first citations to a paper will appear, at the earliest, months after it is first available, which is far too late to be useful in the days and weeks after publication. If we are looking for a way to evaluate recently published papers of potential interest to us, citation-based metrics are not the answer.
The Trouble with Comments
A common solution proposed for getting rapid feedback on scientific publications is inspired by the success of many Web-based commenting forums. Sites like Stack Overflow, Wikipedia, and Hacker News each have an expert community that contributes new information and debates its value and accuracy. It is not difficult to imagine translating this dynamic into a scholarly research setting where scientists discuss interesting papers. A spirited, intelligent comment thread can also help raise the profile of an article and engage the broader community in a conversation about the science.
Unfortunately, commenting in the scientific community simply hasn’t worked, at least not generally. BioMedCentral, PLoS, and BMJ have all had commenting platforms for several years, and while certain papers have attracted extensive discussions [18,19,20,21], these are the exception rather than the rule. Moreover, highly commented papers tend to fall under the category of “front matter” rather than primary research (e.g., a recent Perspective article in PLoS Biology had 17 comments and over 20,000 page views as of 4 October 2009 [22]). Attempts to apply a “Digg-like” mechanism for voting papers up or down on the basis of perceived value, for instance on the arXiv preprint service, have failed to gain traction [23]. At the same time, new sites based on similar systems have flourished for communities as diverse as programmers and amateur knitters. Why is community participation apparently easy for some but not for researchers, whose work, after all, depends on constructive feedback from their peers?
Part of this resistance to commenting may relate to technical issues, but the main reason is likely social. For one thing, researchers are unsure how to behave in this new space. We are used to criticizing articles in the privacy of offices and local journal clubs, not in a public, archived forum. Junior researchers may be concerned about the potential repercussions for their own careers. Anonymity can support more direct and honest discussion, but as anyone who has visited popular video or news sites knows, anonymity often degrades discussions to the point where they detract from the original piece.
Another issue is that the majority of people making hiring and granting decisions do not consider commenting a valuable contribution. Stack Overflow, which caters to programmers, works in part because its contributors build up “karma” through a points-based system that translates into greater influence within the community. More importantly, high Stack Overflow karma can be taken beyond the site to add credibility to your resume and directly enhance career opportunities. There is currently no analogous credit system for post-publication commenting in the scientific community, and if there is no reward for quality contributions, then people will struggle to justify the time involved in generating high-quality comments. It is telling, in this sense, that the one commenting system that does attract reasonable amounts of researcher input is the Faculty of 1000, where senior researchers are selected to become “Faculty” and contribute their opinions on papers they believe are important. Being able to place “Member: Faculty of 1000” on your CV is incentive enough to encourage contributions of sufficient quantity and quality.
Then there is simply the size of the community. There are many more people broadly involved in programming (or knitting) than there are scientists. For community-based social media efforts, there is a well-known 90-9-1 rule: 90% of people merely observe, 9% make minor contributions, and 1% are responsible for the vast majority of original content. This breakdown need not be a bad thing: on any given article you want the people who care, and who have the expertise, to be providing the critical commentary. But it also means that if only 100 people read a paper, it will be lucky if even one of them leaves a comment.
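The 90-9-1 rule translates directly into expectations about comment volume. A minimal sketch of the arithmetic (the rates are the rule-of-thumb figures, not measured values):

```python
def expected_participation(readers, creator_rate=0.01, minor_rate=0.09):
    # 90-9-1 rule of thumb: ~1% create original content, ~9% make
    # minor contributions, and ~90% only observe.
    creators = readers * creator_rate
    minor = readers * minor_rate
    lurkers = readers - creators - minor
    return creators, minor, lurkers

creators, minor, lurkers = expected_participation(100)
print(creators, minor, lurkers)  # 1.0 9.0 90.0
```

A paper with 100 readers can therefore expect roughly one substantive comment, which is consistent with the sparse commenting observed on most research articles.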
Technical Solutions to Social Problems
Given the lack of incentive, are there ways of capturing article-level metrics from what researchers do anyway? A simple way of measuring interest in a specific paper might be via usage and download statistics: for example, how many times a paper has been viewed or downloaded, how many unique users have shown an interest, or how long they lingered. This method can certainly provide a rapid means of assessing interest in a paper by comparing the trend in its downloads and page views against the average. A common objection is that these statistics can be artificially inflated by faking or automating downloads, but this problem has been largely solved by the online advertising industry, which relies on trusted page-view and download metrics provided by third parties. These statistics may not be completely accurate, but they are consistent, comparable, and considered sufficiently immune to cheating to be the basis of a billion-dollar Web advertising industry.
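Comparing a paper's downloads against the journal-wide trend could be sketched along these lines (the baseline figures are hypothetical, and a z-score is just one plausible way to express "above average"):

```python
from statistics import mean, stdev

def download_zscore(paper_downloads, baseline):
    # How far a paper's download count sits above or below the
    # journal-wide average for the same post-publication week,
    # in units of the baseline's standard deviation.
    return (paper_downloads - mean(baseline)) / stdev(baseline)

# Hypothetical week-one download counts for other papers in the journal.
baseline = [180, 210, 195, 220, 190, 205, 200]

z = download_zscore(410, baseline)
print(round(z, 1))  # far above the baseline average of 200
```

A strongly positive score flags a paper attracting unusual early interest, days rather than months after publication.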
A more important criticism of download statistics is that they are a crude measure of actual use. How many of the downloaded papers are even read, let alone digested in detail and acted upon? What we actually want to measure is how much influence an article has, not how many people clicked on the download button thinking they “might read it later.” A more valuable metric might be the number of people who have actively chosen to include the paper in their own personal library. EndNote, RefWorks, and libraries in BibTeX format have been the traditional tools for managing personal reference libraries, but there is a growing set of tools with some significant advantages: they are free, easy to use, and can help provide high-value article-level metrics without requiring any additional effort on the part of researchers.
Examples of such tools are Zotero, CiteULike, Connotea, and Mendeley, all of which allow researchers to collect papers into their library while browsing the Web, often in a single click using convenient “bookmarklets.” The user usually has the option of adding tags, comments, or ratings as part of the bookmarking process. From this point on the tools differ in a variety of ways: Zotero and Mendeley support formatting citations within manuscripts, whereas CiteULike and Connotea focus more on using tags and feeds to share information. Importantly, however, they all provide information on how many people have bookmarked a specific paper. CiteULike, Connotea, and Zotero currently go further by showing precisely who has bookmarked a specific paper, a piece of information that may be very valuable, although users can choose not to make it public. Some of these tools may also eventually be able to track the amount of time users spend viewing papers within their interface [24].
Metrics collected by reference management software are especially intriguing because they offer a measure of active interest without requiring researchers to do anything more than what they are already doing. Scientists collect the papers they find interesting, take notes on them, and store the information in a place that is accessible and useful to them. A significant question is why they would share that valuable information with the rest of the world. The data would still be useful even without identities attached, but researchers are more likely to share openly if appropriate incentive structures exist, as in the example of Faculty of 1000.
Part of the solution to encouraging valuable contributions, then, may simply be that the default settings involve sharing and that people rarely change them. A potentially game-changing incentive, however, may be the power to influence peers. By broadcasting which papers they think are important, researchers directly influence the research community’s choice of reading and discussion material. This type of influence can be a good thing, in that it provides the kind of recognition that drives career prospects and enhances the quality of contributions, but it can also be bad insofar as it concentrates power in the hands of a few. In a sense it is a shift in power from one set of editors, those currently in charge of journals, to a new set of “editors” who curate published papers in a different, but possibly just as useful, way.
It is too early to tell whether any specific tools will last, but they already demonstrate an important principle: a tool that works within the workflow that researchers are already using can more easily capture and aggregate useful information. With researchers constantly pressured for time and attention, approaches that gather information from processes that are already part of the typical research workflow are also much more likely to succeed.
The Great Thing about Metrics…Is That There Are So Many to Choose From
There are numerous article-level metrics, and each has its own advantages and problems. Citation counts are an excellent measure of influence and impact but are very slow to collect. Download statistics are rapid to collect but may be misleading. Comments can provide valuable and immediate feedback, but they are currently sparse and require a change in the research reward culture to become more widespread and to improve in quality. Bookmarking statistics can be rapid to collect and can contain high-quality information, but they are largely untested and require the widespread adoption of unfamiliar tools. Alongside these we have “expert ratings” from services such as Faculty of 1000, as well as simple rating schemes.
The metrics are also useful in different contexts. Indeed, the fundamental problem of which paper to read itself has different contexts. Which new papers are relevant to you? Which papers should you read if you are going to pursue research question X? Which papers do you need to read before submitting your own? Are you a funder interested in media coverage of work you have paid for, or a textbook writer aiming to assess the most important contributions in a field? All of these questions require different information at different times and may be better answered using different measures of article impact. As recently shown [25], scientific impact is not a simple concept that can be described by a single number. The key point is that the journal impact factor is a very poor measure of article impact. And, obviously, the fact that an article is highly influential by any measure does not necessarily mean it should be.
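One way to honour the point that impact is not a single number is to keep the measures separate and select among them by question. A hypothetical sketch (all field names, values, and question categories here are invented for illustration):

```python
# Hypothetical article-level metrics kept as a profile rather than
# collapsed into one score.
article_metrics = {
    "citations": 4,       # slow to accrue but authoritative
    "downloads": 1250,    # rapid but crude
    "bookmarks": 37,      # active interest via reference managers
    "comments": 2,        # sparse but high-context
    "expert_ratings": 1,  # e.g. a Faculty of 1000 evaluation
}

# Different questions call on different subsets of the profile.
VIEWS = {
    "what_is_new": ["downloads", "bookmarks"],
    "what_is_established": ["citations", "expert_ratings"],
    "what_is_debated": ["comments"],
}

def relevant_metrics(metrics, question):
    return {name: metrics[name] for name in VIEWS[question]}

print(relevant_metrics(article_metrics, "what_is_new"))
# {'downloads': 1250, 'bookmarks': 37}
```

The design choice, keeping the profile intact and filtering per question, mirrors the argument above: no single weighting of these numbers serves every reader.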
Many researchers will continue to rely on journals as filters, but the more you can incorporate effective filtering tools into your research process, the more you will stay up-to-date with advancing knowledge. The question is not whether you should take article-level metrics seriously but how you can use them most effectively to assist your own research endeavours. We need sophisticated metrics to ask sophisticated questions about different aspects of scientific impact and we need further research into both the most effective measurement techniques and the most effective uses of these in policy and decision making. For this reason we strongly support efforts to collect and present diverse types of article-level metrics without any initial presumptions as to which metric is most valuable. Different users will need different information and it will take time before any metric proves useful.
As Clay Shirky famously said [26], you can complain about information overload, but the only way to deal with it is to build and use better filters. It is no longer sufficient to depend on journals as your only filter; instead, it is time to start evaluating papers on their own merits. Our only options are to publish less or to filter more effectively, and any response that favours publishing less makes no sense logistically, financially, or ethically. The issue is not how to stop people from publishing; it is how to build better filters, both systematically and individually. At the same time, we can use available tools, networks, and tools built on networks to help with this task.
Notes: This was originally published as Neylon C, Wu S (2009) Article-Level Metrics and the Evolution of Scientific Impact. PLoS Biol 7(11): e1000242. doi:10.1371/journal.pbio.1000242 on 17 September 2009. It is republished here with minor modifications under the terms of a Creative Commons Attribution v2.5 License.
1. Numbers can be determined by this search URL: http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&term=2008:2008%5Bdp%5D
2. Search URL: http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&term=1800:2009%5Bdp%5D
3. Garfield E (1972) Citation analysis as a tool in journal evaluation. Science 178: 471–479.
4. Bollen J, Rodriguez M. A, Van de Sompel H (2006) Journal status. Scientometrics 69: 669–687.
5. Bergstrom C (2007) Eigenfactor: measuring the value and prestige of scholarly journals. C&RL News 68(5).
6. Bollen J, Van de Sompel H, Hagberg A, Chute R (2009a) A principal component analysis of 39 scientific impact measures. PLoS One 4: e6022. doi:10.1371/journal.pone.0006022.
7. Rossner M, Van Epps H, Hill E (2007) Show me the data. J Cell Biol 179: 1091–1092.
8. Rossner M, Van Epps H, Hill E (2008) Irreproducible results: a response to Thomson Scientific. J Cell Biol 180: 254–255.
9. Rossner et al. (2007)
10. Rossner et al. (2008)
11. The PLoS Medicine Editors (2006) The impact factor game. PLoS Med 3: e291. doi:10.1371/journal.pmed.0030291.
12. Campbell P (2008) Escape from the impact factor. Ethics Sci Environ Polit 8: 5–7.
13. Bergstrom (2007)
14. Bollen et al. (2009a)
15. Chen P, Xie H, Maslov S, Redner S (2007) Finding scientific gems with Google. J Informet 1: 8–15.
16. Bollen J, Van de Sompel H, Hagberg A, Bettencourt L, Chute R, et al. (2009b) Clickstream data yields high-resolution maps of science. PLoS One 4: e4803. doi:10.1371/journal.pone.0004803.
17. Bollen J, Rodriguez M. A, Van de Sompel H (2006) Journal status. Scientometrics 69: 669–687.
18. Reeves W. C, Wagner D, Nisenbaum R, et al. (2005) Chronic fatigue syndrome – a clinically empirical approach to its definition and study. BMC Medicine 3: 19.
19. Bourne P. E, Fink J. L, Gerstein M (2008) Open Access: taking full advantage of the content. PLoS Comput Biol 4: e1000037. doi:10.1371/journal.pcbi.1000037.
20. Oleksyk T. K, Zhao K, De La Vega F. M, et al. (2008) Identifying selected regions from heterozygosity and divergence using a light-coverage genomic dataset from two human populations. PLoS One 3: e1712. doi:10.1371/journal.pone.0001712.
21. de Ruijter W, Westendorp R. G. J, Assendelft W. J. J, et al. (2009) Use of Framingham risk score and new biomarkers to predict cardiovascular mortality in older people: population based cohort study. BMJ 338: a3083.
22. Lawrence P. A (2009) Real lives and white lies in the funding of scientific research. PLoS Biol 7: e1000197. doi:10.1371/journal.pbio.1000197.
23. Nielsen M (2009) Doing science in the open. Physics World, May 2009: 30–35.
24. Henning V, Hoyt J, Reichelt J (2009) Usage-based impact metrics in research. In: Masum H, Tovey M, Zhang Y-C, Zimmer M, editors. The reputation society. Boston: MIT Press.
25. Bollen et al. (2009b)
26. Shirky C (2008) It’s not information overload. It’s filter failure. Available: http://www.web2expo.com/webexny2008/public/schedule/detail/4817. Accessed 1 October 2009.