Error rates of human reviewers during abstract screening in systematic reviews
Resource type
Journal Article
Authors/contributors
- Wang, Zhen (Author)
- Nayfeh, Tarek (Author)
- Tetzlaff, Jennifer (Author)
- O’Blenis, Peter (Author)
- Murad, Mohammad Hassan (Author)
Title
Error rates of human reviewers during abstract screening in systematic reviews
Abstract
Background: Automated approaches to improve the efficiency of systematic reviews are greatly needed. When testing any of these approaches, the criterion standard of comparison (gold standard) is usually human reviewers. Yet, human reviewers make errors in inclusion and exclusion of references.
Objectives: To determine citation false inclusion and false exclusion rates during abstract screening by pairs of independent reviewers. These rates can help in designing, testing and implementing automated approaches.
Methods: We identified all systematic reviews conducted between 2010 and 2017 by an evidence-based practice center in the United States. Eligible reviews had to follow standard systematic review procedures with dual independent screening of abstracts and full texts, in which citation inclusion by one reviewer prompted automatic inclusion through the next level of screening. Disagreements between reviewers during full text screening were reconciled via consensus or arbitration by a third reviewer. A false inclusion or exclusion was defined as a decision made by a single reviewer that was inconsistent with the final included list of studies.
Results: We analyzed a total of 139,467 citations that underwent 329,332 inclusion and exclusion decisions from 86 unique reviewers. The final systematic reviews included 5.48% of the potential references identified through bibliographic database search (95% confidence interval (CI): 2.38% to 8.58%). After abstract screening, the total error rate (false inclusion and false exclusion) was 10.76% (95% CI: 7.43% to 14.09%).
Conclusions: This study suggests important false inclusion and exclusion rates by human reviewers. When deciding the validity of a future automated study selection algorithm, it is important to keep in mind that the gold standard is not perfect and that achieving error rates similar to humans may be adequate and can save resources and time.
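Note: the abstract defines an error as a single reviewer's decision that disagrees with the final included-study list. The following is a minimal sketch (not the authors' code) of how such a per-decision error rate could be computed; the function name, data structures, and example data are hypothetical illustrations only.

```python
def error_rate(decisions, final_included):
    """Per-decision error rate over screening decisions.

    decisions: iterable of (citation_id, reviewer_included) pairs,
        where reviewer_included is True for include, False for exclude.
    final_included: set of citation_ids in the final systematic review.
    """
    errors = 0
    total = 0
    for citation_id, reviewer_included in decisions:
        truly_included = citation_id in final_included
        # False inclusion: reviewer included a citation the review finally excluded.
        # False exclusion: reviewer excluded a citation the review finally included.
        if reviewer_included != truly_included:
            errors += 1
        total += 1
    return errors / total if total else 0.0

# Tiny illustrative example (hypothetical data, not from the study):
decisions = [("c1", True), ("c1", False), ("c2", False), ("c2", False)]
final_included = {"c1"}
print(f"{error_rate(decisions, final_included):.2%}")  # 25.00% for this toy data
```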
Publication
PLOS ONE
Volume
15
Issue
1
Pages
e0227742
Date
14 Jan 2020
Journal Abbr
PLOS ONE
Language
en
ISSN
1932-6203
Accessed
18/01/2024, 22:41
Library Catalogue
PLoS Journals
Extra
Publisher: Public Library of Science
Citation
Wang, Z., Nayfeh, T., Tetzlaff, J., O’Blenis, P., & Murad, M. H. (2020). Error rates of human reviewers during abstract screening in systematic reviews. PLOS ONE, 15(1), e0227742. https://doi.org/10.1371/journal.pone.0227742