Journal Article

Five Points to Check when Comparing Visual Perception in Humans and Machines
Document Type
Working Paper
Source
Journal of Vision 21, no. 3 (2021): 16
Subject
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Quantitative Biology - Neurons and Cognition
Statistics - Machine Learning
Language
English
Abstract
With the rise of machines to human-level performance in complex recognition tasks, a growing amount of work is directed towards comparing information processing in humans and machines. These studies are an exciting chance to learn about one system by studying the other. Here, we propose ideas on how to design, conduct, and interpret experiments such that they adequately support the investigation of mechanisms when comparing human and machine perception. We demonstrate and apply these ideas through three case studies. The first case study shows how human bias can affect the interpretation of results and how several analytic tools can help to overcome this human reference point. In the second case study, we highlight the difference between necessary and sufficient mechanisms in visual reasoning tasks. Thereby, we show that, contrary to previous suggestions, feedback mechanisms might not be necessary for the tasks in question. The third case study highlights the importance of aligning experimental conditions. We find that a previously observed difference in object recognition does not hold when the experiment is adapted to make conditions more equitable between humans and machines. In presenting a checklist for comparative studies of visual reasoning in humans and machines, we hope to highlight how to overcome potential pitfalls in design or inference.
Comment: V3: minor changes matching the published JOV version (https://doi.org/10.1167/jov.21.3.16). V2: new title; added a general checklist section; manuscript restructured so that each case study forms its own chapter; adversarial examples in the first case study replaced by a different analysis.