Academic paper

Deep learning object-recognition in a design-to-robotic-production and -operation implementation
Document Type
Conference
Source
2017 IEEE Second Ecuador Technical Chapters Meeting (ETCM), Oct. 2017, pp. 1-6
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Fields, Waves and Electromagnetics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Robots
Object recognition
Cameras
Shape
Visualization
Concrete
Machine learning
Design-to-Robotic-Production & Operation
Wireless Sensor and Actuator Network
Ambient Intelligence
Computer Vision
Object-Recognition
Language
English
Abstract
This paper presents a new instance in a series of discrete proof-of-concept implementations of comprehensively intelligent built environments based on Design-to-Robotic-Production and -Operation (D2RP&O) principles developed at Delft University of Technology (TUD). With respect to D2RP, the featured implementation presents a customized design-to-production framework informed by point-cloud-based optimization strategies. With respect to D2RO, the implementation builds on a previously developed highly heterogeneous, partially meshed, self-healing, and Machine Learning (ML) enabled Wireless Sensor and Actuator Network (WSAN). In this instance, a computer vision mechanism based on open-source Deep Learning (DL) / Convolutional Neural Networks (CNNs) for object-recognition is added to the inherited ecosystem. This mechanism is integrated into the system's Fall-Detection and -Intervention System in order to enable decentralized detection of three types of events and to instantiate corresponding interventions. The first type pertains to human-centered activities / accidents, for which cellular- and internet-based intervention notifications are generated in response. The second pertains to object-centered events that require the physical intervention of an automated robotic agent. Finally, the third pertains to object-centered events that elicit visual / aural notification cues for human feedback. These features, in conjunction with their enabling architectures, are intended as essential components in the ongoing development of highly sophisticated alternatives to existing Ambient Intelligence (AmI) solutions.
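
The abstract does not name the network or the dispatch logic used in the Fall-Detection and -Intervention System. The following Python sketch is illustrative only: it assumes a pretrained torchvision Faster R-CNN as a stand-in for the open-source DL/CNN object-recognition component, and the event routing (the "lying down" heuristic, the score threshold, the handler names, and the robot-handled object classes) is hypothetical, not the authors' implementation.

```python
# Illustrative sketch: route CNN object detections to the three event /
# intervention types described in the abstract. All heuristics, thresholds,
# and handler names below are assumptions for demonstration purposes.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

PERSON = 1                  # COCO category id for "person" in torchvision detection models
ROBOT_HANDLED = {44, 47}    # hypothetical classes a robotic agent would handle (e.g. bottle, cup)

# Pretrained detector stands in for the paper's unspecified object-recognition CNN
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()


def detect_and_route(image_path: str, score_threshold: float = 0.7):
    """Run object recognition on one frame and return (event_type, action) pairs."""
    frame = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([frame])[0]  # dict with "boxes", "labels", "scores"

    events = []
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = box.tolist()
        lying_down = (x2 - x1) > 1.5 * (y2 - y1)  # crude, hypothetical fall cue
        if label.item() == PERSON and lying_down:
            # Type 1: human-centered accident -> cellular / internet notification
            events.append(("human", "send_fall_notification"))
        elif label.item() in ROBOT_HANDLED:
            # Type 2: object-centered event -> dispatch automated robotic agent
            events.append(("object-robotic", "dispatch_robotic_agent"))
        else:
            # Type 3: object-centered event -> visual / aural cue for human feedback
            events.append(("object-cue", "emit_visual_aural_cue"))
    return events


if __name__ == "__main__":
    for event_type, action in detect_and_route("frame.jpg"):
        print(event_type, "->", action)
```

In the decentralized WSAN setting described by the paper, a routine like this would presumably run per camera node, with the returned actions forwarded to the notification, robotic, or cue-emitting actuators; that mapping is sketched here only as comments and placeholder handler names.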