Academic paper

Learning ℓ1-penalized logistic regressions with smooth approximation
Document Type
Conference
Source
2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), pp. 126-130, Jul. 2017
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Robotics and Control Systems
Art
Reactive power
classification
logistic regression
regularization
Abstract
The paper presents a comparison of logistic regression models learned with different penalty terms. The main part of the paper concerns sparse regression, whose penalty involves the absolute value function. This function is convex but not smooth (it is non-differentiable at zero), so standard gradient-based optimizers cannot be applied directly. In the paper we show that in these cases a smooth approximation of the absolute value function can be used effectively, both for lasso regression and for a fused-lasso-like case; one of the examples focuses on a two-dimensional analogue of the fused-lasso model. The experimental results compare our implementations (in C++ and Python) on three benchmark datasets.
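The abstract's core idea can be illustrated with a minimal sketch in Python (one of the paper's implementation languages). This is not the authors' code: it assumes the common smooth surrogate |w| ≈ sqrt(w² + ε), which is differentiable everywhere, and plugs it into an ℓ1-penalized logistic regression loss minimized by plain gradient descent. The function names (`smooth_abs`, `loss_and_grad`, `fit`), the choice of ε, the learning rate, and the optimizer are all illustrative assumptions.

```python
import numpy as np

def smooth_abs(w, eps=1e-6):
    # Smooth surrogate for |w|: sqrt(w^2 + eps) is differentiable at 0,
    # unlike the absolute value itself (illustrative choice of surrogate).
    return np.sqrt(w * w + eps)

def loss_and_grad(w, X, y, lam, eps=1e-6):
    # Penalized negative log-likelihood of logistic regression,
    # with the l1 term replaced by its smooth approximation.
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))
    nll = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    pen = lam * np.sum(smooth_abs(w, eps))
    # d/dw sqrt(w^2 + eps) = w / sqrt(w^2 + eps); well defined at w = 0.
    grad = X.T @ (p - y) / len(y) + lam * w / smooth_abs(w, eps)
    return nll + pen, grad

def fit(X, y, lam=0.1, lr=0.5, iters=2000):
    # Plain gradient descent; because the surrogate is smooth, any
    # standard smooth optimizer (L-BFGS, CG, ...) would also apply.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        _, g = loss_and_grad(w, X, y, lam)
        w -= lr * g
    return w
```

With the smooth surrogate the penalty no longer drives coefficients exactly to zero, but for small ε the irrelevant weights are shrunk close to zero while the informative ones stay large, which is the behavior the paper's lasso experiments rely on.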