Academic Article

Effective Activation Functions for Homomorphic Evaluation of Deep Neural Networks
Document Type
Periodical
Source
IEEE Access, vol. 8, pp. 153098-153112, 2020
Subject
Training
Cryptography
Neurons
Biological neural networks
Task analysis
Approximation methods
Private AI
homomorphic encryption
activation function
deep neural networks
Language
English
ISSN
2169-3536
Abstract
CryptoNets and subsequent work have demonstrated the capability of homomorphic encryption (HE) in applications of private artificial intelligence (AI). In convolutional neural networks (CNNs), many computations are linear functions, such as those in the convolution layer, and can be homomorphically evaluated. However, other layers, such as the activation layer, consist of non-linear functions that cannot be homomorphically evaluated. One of the most commonly used workarounds is to approximate these non-linear functions with low-degree polynomials. However, using the approximated polynomials as activation functions introduces errors that can significantly degrade accuracy in classification tasks. In this paper, we present a systematic method for constructing HE-friendly activation functions for CNNs. We first determine which properties of a good activation function contribute to performance by analyzing commonly used functions such as the Rectified Linear Unit (ReLU) and Sigmoid. Then, we compare polynomial approximation methods and search for an optimal approximation range for the polynomial activation. We also propose a novel weighted polynomial approximation method tailored to the output distribution of a batch normalization layer. Finally, we demonstrate the effectiveness of our method on several datasets, including MNIST, FMNIST, and CIFAR-10.
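As a rough illustration of the approximation idea described in the abstract, the minimal Python sketch below fits a low-degree polynomial to ReLU over a fixed interval, once with uniform weights and once with a Gaussian weight that mimics the approximately standard-normal output of a batch normalization layer. The degree, the interval [-4, 4], and the helper names (poly_fit_weighted, poly_eval) are illustrative assumptions, not the paper's actual construction.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def poly_fit_weighted(f, degree, xs, weights):
    # Weighted least squares: minimize sum_i weights[i] * (f(xs[i]) - p(xs[i]))^2
    V = np.vander(xs, degree + 1, increasing=True)   # columns: 1, x, x^2, ...
    sw = np.sqrt(weights)
    coeffs, *_ = np.linalg.lstsq(V * sw[:, None], f(xs) * sw, rcond=None)
    return coeffs                                     # lowest-order coefficient first

xs = np.linspace(-4.0, 4.0, 2001)                     # assumed approximation interval

# Unweighted fit over the whole interval.
uniform = poly_fit_weighted(relu, 2, xs, np.ones_like(xs))

# Weighted fit: a standard-normal density emphasizes the region where a
# batch-normalization layer's outputs actually concentrate.
weighted = poly_fit_weighted(relu, 2, xs, np.exp(-0.5 * xs**2))

def poly_eval(coeffs, x):
    # Only additions and multiplications, so this form is evaluable under HE.
    return sum(c * x**k for k, c in enumerate(coeffs))

print("uniform coefficients :", uniform)
print("weighted coefficients:", weighted)
print("polynomial 'ReLU'(1.5):", poly_eval(weighted, 1.5))

The weighted fit trades larger error near the ends of the interval for smaller error near zero, which is where most batch-normalized activations fall under the standard-normal assumption.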