Academic Article

Actuation Confirmation and Negation via Facial-Identity and -Expression Recognition
Document Type
Conference
Source
2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM), pp. 1-6, Oct. 2018
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Fields, Waves and Electromagnetics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Face recognition
Computer architecture
Google
Apertures
Cloud computing
Ventilation
Face
Design-to-Robotic-Production & -Operation
Wireless Sensor and Actuator Networks
Facial Recognition
Ambient Intelligence
Adaptive Architecture
Language
English
Abstract
This paper presents the implementation of a facial-identity and -expression recognition mechanism that confirms or negates physical and/or computational actuations in an intelligent built-environment. The mechanism is built with Google Brain's TensorFlow (for facial-identity recognition) and Google Cloud Platform's Cloud Vision API (for facial-expression recognition), and it is integrated into the ongoing development of an intelligent built-environment framework, viz., Design-to-Robotic-Production & -Operation (D2RP&O), conceived at Delft University of Technology (TUD). The present work builds on the inherited technological ecosystem and technical functionality of the Design-to-Robotic-Operation (D2RO) component of said framework, and its implementation is validated via two scenarios, one physical and one computational. In the first scenario, which builds on an inherited adaptive mechanism, if building-skin components perceive a rise in interior temperature, natural ventilation is promoted by increasing their degree of aperture. In the present work, this measure is confirmed or negated by a corresponding facial expression on the part of the user in response to the adjustment, which serves as an intuitive override/feedback channel for the building-skin mechanism's decision-making process. In the second scenario, which builds on another inherited mechanism, if an accidental fall is detected and the user remains collapsed, whether conscious or not, a series of automated emergency notifications (e.g., SMS, email) is sent to family and/or caretakers by dedicated mechanisms in the intelligent built-environment. The precision of this measure and its execution are confirmed by (a) identification of the victim, and (b) recognition of a reflexive facial expression of pain and/or displeasure. The work presented in this paper promotes a considered relationship between the architecture of the built-environment and the Information and Communication Technologies (ICTs) embedded and/or deployed within it.
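
The record does not include the authors' source code. As a rough illustration of the expression-based confirm/negate step described in the abstract, the minimal Python sketch below reads the per-emotion likelihoods returned by Google Cloud Vision's face detection (via the google-cloud-vision client library) and treats a likely expression of displeasure as negating the ventilation actuation. The function name, the likelihood threshold, and the decision rule are illustrative assumptions, not details from the paper; the TensorFlow-based identity-recognition step is not shown.

# Minimal sketch, not the authors' implementation: checking the occupant's
# facial expression with Google Cloud Vision to confirm or negate an actuation.
# Assumptions: image path, likelihood threshold, and decision rule are illustrative.
from google.cloud import vision


def occupant_appears_displeased(image_path: str) -> bool:
    """Return True if the first detected face shows likely anger or sorrow."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    faces = client.face_detection(image=image).face_annotations
    if not faces:
        # No user in view: leave the actuation unconfirmed.
        return False

    face = faces[0]
    # Cloud Vision reports per-emotion likelihoods (VERY_UNLIKELY .. VERY_LIKELY).
    displeasure = max(face.anger_likelihood, face.sorrow_likelihood)
    return displeasure >= vision.Likelihood.LIKELY


if __name__ == "__main__":
    # Ventilation scenario: the building skin has just increased its aperture.
    if occupant_appears_displeased("occupant_snapshot.jpg"):
        print("Negate actuation: revert to the previous aperture state.")
    else:
        print("Confirm actuation: keep the increased aperture.")

In the fall-detection scenario, the same likelihood check could be combined with an identity match before emergency notifications are dispatched, but that integration is only described at a conceptual level in the abstract.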