Academic Paper

Where and When: Event-Based Spatiotemporal Trajectory Prediction from the iCub’s Point-Of-View
Document Type
Conference
Source
2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 9521-9527, May 2020
Subject
Robotics and Control Systems
Trajectory
Robots
Predictive models
Cameras
Target tracking
Data models
Pipelines
Language
English
ISSN
2577-087X
Abstract
Fast, non-linear trajectories have been shown to be measured, and hence predicted, more accurately when sampled spatially (that is, whenever the target position changes) rather than temporally, i.e., at a fixed rate as in traditional frame-based cameras. Event cameras, with their asynchronous, low-latency information stream, allow for spatial sampling with very high temporal resolution, improving the quality of the data and the accuracy of post-processing operations. This paper investigates the use of Long Short-Term Memory (LSTM) networks with event-camera spatial sampling for trajectory prediction. We show the benefit of using an Encoder-Decoder architecture over parameterised models for regression on event-based human-to-robot handover trajectories. In particular, we exploit the temporal information associated with the event stream to predict not only the upcoming spatial trajectory points, but also when they will occur in time. After studying the appropriate LSTM input/output sequence length, the network's performance is compared to that of other regression models. Prediction behaviour and computational time are then analysed for the proposed method. We carry out the experiments using an iCub robot equipped with event cameras, addressing the problem from the robot's perspective.
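The record contains no code; the following is only a minimal sketch of the kind of encoder-decoder LSTM regression the abstract describes, written in PyTorch. The (x, y, Δt) input encoding, the class and variable names, the hidden size, and the prediction horizon are all illustrative assumptions, not the authors' implementation; the point is simply that each event-driven sample carries both a position and a timestamp increment, so the decoder can regress "where" and "when" jointly.

```python
# Illustrative sketch (not the authors' code): an encoder-decoder LSTM
# consuming spatially sampled trajectory points from an event camera and
# autoregressively predicting both future positions ("where") and the
# time offsets at which they occur ("when").
import torch
import torch.nn as nn


class TrajectoryEncoderDecoder(nn.Module):
    def __init__(self, hidden_size: int = 64, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        # Each input step is (x, y, dt): position plus the time elapsed
        # since the previous event-based sample.
        self.encoder = nn.LSTM(input_size=3, hidden_size=hidden_size,
                               batch_first=True)
        self.decoder = nn.LSTM(input_size=3, hidden_size=hidden_size,
                               batch_first=True)
        self.head = nn.Linear(hidden_size, 3)  # regresses next (x, y, dt)

    def forward(self, observed: torch.Tensor) -> torch.Tensor:
        # observed: (batch, seq_len, 3) of past (x, y, dt) samples.
        _, state = self.encoder(observed)
        step = observed[:, -1:, :]          # seed decoder with last sample
        preds = []
        for _ in range(self.horizon):       # autoregressive rollout
            out, state = self.decoder(step, state)
            step = self.head(out)           # predicted next (x, y, dt)
            preds.append(step)
        return torch.cat(preds, dim=1)      # (batch, horizon, 3)


# Usage: 20 observed event-driven samples -> 10 future (x, y, dt) points.
model = TrajectoryEncoderDecoder()
obs = torch.randn(8, 20, 3)
future = model(obs)
print(future.shape)  # torch.Size([8, 10, 3])
```

Because the samples are spatially triggered, the Δt channel is irregular rather than a fixed frame period; predicting it alongside the coordinates is what lets such a model answer "when" as well as "where".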