Journal Article

Screening Smarter, Not Harder: A Comparative Analysis of Machine Learning Screening Algorithms and Heuristic Stopping Criteria for Systematic Reviews in Educational Research
Document Type
Journal Articles
Reports - Research
Author
Diego G. Campos (ORCID 0000-0002-8820-5881); Tim Fütterer (ORCID 0000-0001-5399-9557); Thomas Gfrörer (ORCID 0000-0002-4817-2296); Rosa Lavelle-Hill (ORCID 0000-0002-1767-9828); Kou Murayama (ORCID 0000-0003-2902-9600); Lars König (ORCID 0009-0000-3525-771X); Martin Hecht (ORCID 0000-0002-5168-4911); Steffen Zitzmann (ORCID 0000-0002-7595-4736); Ronny Scherer (ORCID 0000-0003-3630-0710)
Source
Educational Psychology Review. 2024 36(1).
Subject
Artificial Intelligence
Algorithms
Computer System Design
Natural Language Processing
Education
Educational Psychology
Time on Task
Semantics
Context Effect
Classification
Language
English
ISSN
1040-726X
1573-336X
Abstract
Systematic reviews and meta-analyses are crucial for advancing research, yet they are time-consuming and resource-demanding. Although machine learning and natural language processing algorithms may reduce the time and resources required, their performance has not been tested in education and educational psychology, and there is little clear guidance on when researchers should stop the reviewing process. In this study, we conducted a retrospective screening simulation using 27 systematic reviews in education and educational psychology. We evaluated the sensitivity, specificity, and estimated time savings of several learning algorithms and heuristic stopping criteria. The results showed, on average, a 58% (SD = 19%) reduction in the workload of screening irrelevant records when using learning algorithms for abstract screening, and an estimated time savings of 1.66 days (SD = 1.80). The learning algorithm random forests with sentence bidirectional encoder representations from transformers outperformed the other algorithms. This finding emphasizes the importance of incorporating semantic and contextual information during feature extraction and modeling in the screening process. Furthermore, we found that 95% of all relevant abstracts within a given dataset can be retrieved using heuristic stopping rules. Specifically, a rule that stops screening only after at least 20% of the records have been classified and 5% of the records have consecutively been classified as irrelevant yielded the largest gains in specificity (M = 42%, SD = 28%). However, the performance of the heuristic stopping criteria depended on the learning algorithm used and on the length and the proportion of relevant papers in an abstract collection. Our study provides empirical evidence on the performance of machine learning screening algorithms for abstract screening in systematic reviews in education and educational psychology.
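The "20%/5%" heuristic stopping rule described in the abstract can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the function name, parameter names, and the toy label sequence are ours, and the input is simply the relevance labels in the order a learning algorithm would present the records (most likely relevant first).

```python
def heuristic_stop(labels, burn_in_frac=0.20, run_frac=0.05):
    """Return the 1-based position at which screening stops.

    labels: relevance labels (1 = relevant, 0 = irrelevant), ordered by
    the learning algorithm's predicted relevance. The rule: screen at
    least `burn_in_frac` of all records, then stop once `run_frac` of
    the collection has been classified as irrelevant in a row. If the
    rule never triggers, every record is screened.
    """
    n = len(labels)
    burn_in = int(n * burn_in_frac)          # e.g. 20% of all records
    run_needed = max(1, int(n * run_frac))   # e.g. 5% consecutive irrelevant
    consecutive_irrelevant = 0
    for i, label in enumerate(labels, start=1):
        consecutive_irrelevant = 0 if label == 1 else consecutive_irrelevant + 1
        if i >= burn_in and consecutive_irrelevant >= run_needed:
            return i
    return n


# Toy ranked collection: 10 relevant records up front, then 90 irrelevant.
ranked = [1] * 10 + [0] * 90
stop_at = heuristic_stop(ranked)   # screening halts after record 20
```

In this toy case the rule stops after 20 of 100 records, having already retrieved all relevant abstracts, which mirrors the workload savings the abstract reports; with a weaker ranking, relevant records landing after the stopping point would reduce sensitivity, which is why the paper finds the criterion's performance depends on the learning algorithm and the collection's size and prevalence of relevant papers.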