Academic Paper

Proto-object based saliency for event-driven cameras
Document Type
Conference
Source
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 805-812, Nov. 2019
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Language
English
ISSN
2153-0866
Abstract
Autonomous robots can rely on attention mechanisms to explore complex scenes and select salient stimuli relevant for behaviour. Stimulus selection should be fast to efficiently allocate available (and limited) computational resources to process in detail a subset of the otherwise overwhelmingly large sensory input. The amount of processing required depends on the amount of data sampled by a robot’s sensors; while a standard RGB camera produces a fixed amount of data for every pixel of the sensor, an event-camera produces data only where there is a contrast change in the field of view, and does so with a lower latency. In this paper, we describe the implementation of a state-of-the-art bottom-up attention model, based on structuring the visual scene in terms of proto-objects. As an event-camera encodes different visual information compared to frame-based cameras, the original algorithm must be adapted and modified. We find that the event-camera’s inherent detection of edges removes the need for some early stages of processing in the model. We describe the modifications, compare the event-driven algorithm to the original, and validate the potential for use on the iCub humanoid robot.
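The abstract contrasts a frame camera, which reports every pixel on every frame, with an event-camera, which reports only pixels whose contrast changed. The following is a minimal illustrative sketch of that idea (not code from the paper): it compares two hypothetical grayscale frames and emits an event tuple `(x, y, t, polarity)` only where the log-intensity change exceeds a threshold, which is the standard event-camera data model.

```python
# Illustrative sketch of event-camera-style sparse output (assumed, not from the paper):
# an event is emitted only where log-intensity changes beyond a threshold.
import math

def frame_to_events(prev, curr, t, threshold=0.2):
    """Emit (x, y, t, polarity) events where log-intensity changed between frames."""
    events = []
    for y, (row_prev, row_curr) in enumerate(zip(prev, curr)):
        for x, (a, b) in enumerate(zip(row_prev, row_curr)):
            delta = math.log(b + 1e-6) - math.log(a + 1e-6)
            if abs(delta) >= threshold:
                polarity = 1 if delta > 0 else -1  # +1 brighter, -1 darker
                events.append((x, y, t, polarity))
    return events

# A 3x3 scene in which one pixel brightens and one darkens between frames.
prev = [[100, 100, 100],
        [100, 100, 100],
        [100, 100, 100]]
curr = [[100, 100, 100],
        [100, 200, 100],
        [100,  50, 100]]

events = frame_to_events(prev, curr, t=0)
print(events)  # only 2 of the 9 pixels produce any data
```

A frame camera would transmit all nine pixel values here; the event stream carries just the two changed pixels, which is the data reduction and latency advantage the abstract refers to.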