Academic Paper

L2NAS : Learning to Optimize Neural Architectures via Continuous-Action Reinforcement Learning
Document Type
Conference
Source
Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 1284-1293
Subject
deep deterministic policy gradient
neural architecture search
Language
English
Abstract
Neural architecture search (NAS) has achieved remarkable results in deep neural network design. Differentiable architecture search converts the search over discrete architectures into a hyperparameter optimization problem which can be solved by gradient descent. However, questions have been raised regarding the effectiveness and generalizability of gradient methods for solving non-convex architecture hyperparameter optimization problems. In this paper, we propose L2NAS, which learns to intelligently optimize and update architecture hyperparameters via an actor neural network, based on the distribution of high-performing architectures in the search history. We introduce a quantile-driven training procedure which efficiently trains L2NAS in an actor-critic framework via continuous-action reinforcement learning. Experiments show that L2NAS achieves state-of-the-art results on the NAS-Bench-201 benchmark as well as on the DARTS and Once-for-All MobileNetV3 search spaces. We also show that search policies generated by L2NAS are generalizable and transferable across different training datasets with minimal fine-tuning.
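The quantile-driven idea described in the abstract can be illustrated with a minimal, self-contained sketch. Everything here is an assumption for illustration: `toy_accuracy` is a hypothetical stand-in for validation accuracy of a decoded architecture, and the actor update is a cross-entropy-style mean shift toward above-quantile samples rather than the paper's actual DDPG-trained actor network.

```python
import numpy as np

def toy_accuracy(theta):
    # Hypothetical surrogate for the validation accuracy of the
    # architecture decoded from continuous hyperparameters theta.
    # (Assumption: peaks when every coordinate equals 0.7.)
    return float(1.0 - np.sum((theta - 0.7) ** 2))

rng = np.random.default_rng(0)
dim = 4                                  # dimensionality of the hyperparameter vector
mean, std = np.zeros(dim), np.full(dim, 0.5)
history = []                             # search history of (accuracy, theta) pairs

for step in range(200):
    theta = rng.normal(mean, std)        # "actor" proposes architecture hyperparameters
    acc = toy_accuracy(theta)
    history.append((acc, theta))
    # Quantile-driven signal: keep samples at or above the 0.9 quantile of
    # accuracies seen so far, mirroring the idea of learning from the
    # distribution of high-performing architectures in the search history.
    accs = np.array([a for a, _ in history])
    cutoff = np.quantile(accs, 0.9)
    elites = np.array([t for a, t in history if a >= cutoff])
    # Cross-entropy-style update toward the elite set (a simplification;
    # L2NAS instead trains an actor network in an actor-critic framework).
    mean = 0.9 * mean + 0.1 * elites.mean(axis=0)
    std = np.maximum(0.9 * std + 0.1 * elites.std(axis=0), 0.05)

print(np.round(mean, 2))
```

The loop steadily pulls the proposal distribution toward the high-accuracy region identified by the quantile cutoff; substituting a gradient-trained actor and critic for the mean-shift update recovers the continuous-action RL formulation the paper describes.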