Performance and usability of machine learning for screening in systematic reviews: a comparative evaluation of three tools


Bibliographic Details
Main Author: Gates, Allison
Corporate Authors: United States Agency for Healthcare Research and Quality, University of Alberta Evidence-based Practice Center
Format: eBook
Language: English
Published: Rockville, MD: Agency for Healthcare Research and Quality, November 2019
Series: Methods research report
Online Access: https://www.ncbi.nlm.nih.gov/books/NBK550175
Collection: National Center for Biotechnology Information - Collection details see MPG.ReNa
LEADER 03817nam a2200265 u 4500
001 EB002002172
003 EBX01000000000000001165073
005 00000000000000.0
007 tu|||||||||||||||||||||
008 210907 r ||| eng
100 1 |a Gates, Allison 
245 0 0 |a Performance and usability of machine learning for screening in systematic reviews  |h Elektronische Ressource  |b a comparative evaluation of three tools  |c Allison Gates [and 6 others] 
260 |a Rockville, MD  |b Agency for Healthcare Research and Quality  |c November 2019, 2019 
300 |a 1 PDF file (viii, 22 pages)  |b illustrations 
505 0 |a Includes bibliographical references 
710 2 |a United States  |b Agency for Healthcare Research and Quality 
710 2 |a University of Alberta Evidence-based Practice Center 
041 0 7 |a eng  |2 ISO 639-2 
989 |b NCBI  |a National Center for Biotechnology Information 
490 0 |a Methods research report 
856 4 0 |u https://www.ncbi.nlm.nih.gov/books/NBK550175  |3 Volltext 
082 0 |a 610 
520 |a BACKGROUND: Machine learning tools can expedite systematic review (SR) completion by reducing manual screening workloads, yet their adoption has been slow. Evidence of their reliability and usability may improve their acceptance within the SR community. We explored the performance of three tools when used to: (a) eliminate irrelevant records (Automated Simulation) and (b) complement the work of a single reviewer (Semi-automated Simulation). We evaluated the usability of each tool. METHODS: We subjected three SRs to two retrospective screening simulations. In each tool (Abstrackr, DistillerSR, and RobotAnalyst), we screened a 200-record training set and downloaded the predicted relevance of the remaining records. We calculated the proportion missed and the workload and time savings compared to dual independent screening. To test usability, eight research staff undertook a screening exercise in each tool and completed a survey, including the System Usability Scale (SUS).  
520 |a RESULTS: Using Abstrackr, DistillerSR, and RobotAnalyst, respectively, the median (range) proportion missed was 5 (0 to 28) percent, 97 (96 to 100) percent, and 70 (23 to 100) percent in the Automated Simulation and 1 (0 to 2) percent, 2 (0 to 7) percent, and 2 (0 to 4) percent in the Semi-automated Simulation. The median (range) workload savings was 90 (82 to 93) percent, 99 (98 to 99) percent, and 85 (85 to 88) percent for the Automated Simulation and 40 (32 to 43) percent, 49 (48 to 49) percent, and 35 (34 to 38) percent for the Semi-automated Simulation. The median (range) time savings was 154 (91 to 183), 185 (95 to 201), and 157 (86 to 172) hours for the Automated Simulation and 61 (42 to 82), 92 (46 to 100), and 64 (37 to 71) hours for the Semi-automated Simulation. Abstrackr identified 33 to 90 percent of records erroneously excluded by a single reviewer, while RobotAnalyst performed less well and DistillerSR provided no relative advantage.  
520 |a Based on reported SUS scores, Abstrackr fell in the usable range, DistillerSR in the marginal range, and RobotAnalyst in the unacceptable range. Usability depended on six interdependent properties: user friendliness, qualities of the user interface, features and functions, trustworthiness, ease and speed of obtaining predictions, and practicality of the export file(s). CONCLUSIONS: The workload and time savings afforded in the Automated Simulation came with an increased risk of erroneously excluding relevant records. Supplementing a single reviewer's decisions with relevance predictions (Semi-automated Simulation) improved the proportion missed in some cases, but performance varied by tool and SR. Designing tools based on reviewers' self-identified preferences may improve their compatibility with present workflows.
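Note on the metrics: the abstract names three screening-simulation metrics (proportion missed, workload savings, and time savings relative to dual independent screening) without stating their formulas. The Python sketch below shows one plausible way such metrics could be computed; the function names, the two-decisions-per-record baseline, and the 30-seconds-per-decision figure are illustrative assumptions and are not taken from the report itself.

# Minimal sketch of the screening-simulation metrics named in the abstract.
# All definitions below are assumptions for illustration, not the report's formulas.

def proportion_missed(relevant_ids, predicted_relevant_ids):
    """Percent of truly relevant records that the tool's predictions would exclude."""
    missed = set(relevant_ids) - set(predicted_relevant_ids)
    return 100 * len(missed) / len(relevant_ids)

def workload_savings(total_records, manual_decisions):
    """Percent reduction in screening decisions versus dual independent screening,
    where every record is screened by two reviewers (two decisions per record)."""
    baseline_decisions = 2 * total_records
    return 100 * (baseline_decisions - manual_decisions) / baseline_decisions

def time_savings_hours(total_records, manual_decisions, seconds_per_decision=30):
    """Hours saved, assuming a fixed (hypothetical) screening time per decision."""
    saved_decisions = 2 * total_records - manual_decisions
    return saved_decisions * seconds_per_decision / 3600

# Example: 10,000 records; the tool leaves 1,500 of them for two reviewers to screen.
print(workload_savings(10_000, 2 * 1_500))    # 85.0 percent
print(time_savings_hours(10_000, 2 * 1_500))  # about 141.7 hours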