
e-Article

Ensembled sparse‐input hierarchical networks for high‐dimensional datasets.
Document Type
Article
Source
Statistical Analysis & Data Mining. Dec2022, Vol. 15 Issue 6, p736-750. 15p.
Subject
*GENE expression
*MODELS & modelmaking
*DEEP learning
*MULTILAYER perceptrons
Language
English
ISSN
1932-1864
Abstract
In high‐dimensional datasets where the number of covariates far exceeds the number of observations, the most popular prediction methods make strong modeling assumptions. Unfortunately, these methods struggle to scale up in model complexity as the number of observations grows. To address this, we consider neural networks because they span a wide range of model capacities, from sparse linear models to deep neural networks. Because neural networks are notoriously tedious to tune and train, our aim is to develop a convenient procedure that employs a minimal number of hyperparameters. Our method, Ensemble by Averaging Sparse‐Input hiERarchical networks (EASIER‐net), employs only two L1‐penalty parameters, one that controls the input sparsity and another for the number of hidden layers and nodes. EASIER‐net selects the true support with high probability when there is sufficient evidence; otherwise, it performs variable selection with uncertainty quantification, where strongly correlated covariates are selected at similar rates. On a large collection of gene expression datasets, EASIER‐net achieved higher classification accuracy and selected fewer genes than existing methods. We found that EASIER‐net adaptively selected the model complexity: it fit deep networks when there was sufficient information to learn nonlinearities and interactions and fit sparse logistic models for smaller datasets with less information. [ABSTRACT FROM AUTHOR]
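
For a concrete picture of the penalty structure the abstract describes, below is a minimal PyTorch sketch, not the authors' EASIER-net implementation: the class name SparseInputNet, the multiplicative per-covariate input filter, and the parameters lam_input and lam_hidden are illustrative assumptions standing in for the paper's two L1-penalty hyperparameters.

import torch
import torch.nn as nn

class SparseInputNet(nn.Module):
    # Illustrative sketch only, NOT the authors' code: a multilayer
    # perceptron whose per-covariate input filter is L1-penalized for
    # input sparsity, while a second L1 penalty shrinks hidden-layer
    # weights to limit the effective number of layers and nodes.
    def __init__(self, n_inputs, n_hidden=32, n_layers=3, n_outputs=2):
        super().__init__()
        # One multiplicative weight per covariate; its L1 norm drives
        # irrelevant inputs toward exactly zero.
        self.input_filter = nn.Parameter(torch.ones(n_inputs))
        layers, width = [], n_inputs
        for _ in range(n_layers):
            layers += [nn.Linear(width, n_hidden), nn.ReLU()]
            width = n_hidden
        self.hidden = nn.Sequential(*layers)
        self.out = nn.Linear(width, n_outputs)

    def forward(self, x):
        return self.out(self.hidden(x * self.input_filter))

    def penalty(self, lam_input, lam_hidden):
        # Two L1 penalties: lam_input controls input sparsity,
        # lam_hidden controls network complexity.
        hidden_l1 = sum(p.abs().sum()
                        for name, p in self.hidden.named_parameters()
                        if "weight" in name)
        return (lam_input * self.input_filter.abs().sum()
                + lam_hidden * hidden_l1)

# Ensemble by averaging (hypothetical usage): fit several networks from
# different random initializations and average their class probabilities.
x, y = torch.randn(64, 500), torch.randint(0, 2, (64,))
models = [SparseInputNet(n_inputs=500) for _ in range(5)]
for m in models:
    loss = nn.functional.cross_entropy(m(x), y) + m.penalty(1e-2, 1e-3)
    loss.backward()  # one illustrative gradient step per ensemble member
probs = torch.stack([m(x).softmax(dim=1) for m in models]).mean(dim=0)

The sketch mirrors the abstract's design in one respect: only two penalty weights govern both variable selection and network size, so no per-layer tuning is required.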