Mendeley readership altmetrics for medical articles: An analysis of 45 fields
Mike Thelwall
Statistical Cybermetrics Research Group, School of Mathematics and Computer Science, University of Wolverhampton, Wulfruna Street, Wolverhampton, WV1 1LY UK
Paul Wilson
Statistical Cybermetrics Research Group, School of Mathematics and Computer Science, University of Wolverhampton, Wulfruna Street, Wolverhampton, WV1 1LY UK

Abstract
Medical research is heavily funded and often expensive, and so it is particularly important to evaluate it effectively. Nevertheless, citation counts may accrue too slowly for use in some formal and informal evaluations. It is therefore important to investigate whether alternative metrics could be used as substitutes. This article assesses whether one such altmetric, Mendeley readership counts, correlates strongly with citation counts across all medical fields, whether the relationship is stronger if student readers are excluded, and whether readership counts are distributed similarly to citation counts. Based on a sample of 332,975 articles from 2009 in 45 medical fields in Scopus, citation counts correlated strongly (about 0.7) with Mendeley readership counts, taken from the new version 1 application programming interface (API), in almost all fields, with one minor exception; 78% of articles had at least one Mendeley reader. The correlations tended to decrease slightly when student readers were excluded. Readership counts followed either a lognormal or a hooked power law distribution, whereas citation counts always followed a hooked power law, showing that the two may have underlying differences.
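The abstract rests on two statistical ingredients: a rank correlation between citation and readership counts, and the "hooked" power law distribution fitted to them. The sketch below illustrates both on hypothetical synthetic data (not the study's Scopus/Mendeley data, and not the authors' code): Spearman's rho is used because it is robust to the heavy skew of count data, and the continuous hooked power law density shows how the offset parameter B produces the low-count "hook" that a pure power law lacks. All parameter values here are illustrative assumptions.

```python
# Illustrative sketch only: synthetic counts standing in for real
# Scopus citation and Mendeley readership data.
import numpy as np
from scipy.integrate import quad
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

# A shared lognormal "latent impact" drives both counts, producing
# skewed, positively correlated values like those in the abstract.
latent = rng.lognormal(mean=1.0, sigma=1.0, size=10_000)
citations = rng.poisson(latent)
readers = rng.poisson(1.5 * latent)

# Spearman's rho suits heavily skewed count data better than Pearson's r.
rho, p_value = spearmanr(citations, readers)
print(f"Spearman rho = {rho:.2f}")


def hooked_power_law_pdf(x, alpha, B):
    """Continuous hooked power law density for x >= 0:
    f(x) = (alpha - 1) * B**(alpha - 1) / (x + B)**alpha.

    The offset B keeps the density finite at x = 0, giving the
    low-count "hook" seen in citation distributions.
    """
    return (alpha - 1) * B ** (alpha - 1) / (x + B) ** alpha


# Sanity check: the density integrates to 1 over [0, infinity).
area, _ = quad(lambda x: hooked_power_law_pdf(x, 2.5, 3.0), 0, np.inf)
print(f"Total probability = {area:.4f}")
```

In practice the paper compares discrete lognormal and hooked power law fits by likelihood; this continuous density is only meant to make the functional form concrete.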