Poster (GatherTown) in Workshop: GroundedML: Anchoring Machine Learning in Classical Algorithmic Theory
K-level SLOPE: Simplified and Adaptive Variable Selection for Optimization of Estimation Risk
Zhiqi Bu · Rachel Wu
Abstract:
Among sparse linear models, SLOPE generalizes the LASSO via an adaptive regularization that applies heavier penalties to larger entries of the estimator. To achieve such adaptivity, SLOPE requires a penalty sequence $\lambda \in \mathbb{R}^p$, in contrast to the single penalty scalar used in the LASSO. Tuning the SLOPE penalty in high dimension poses a challenge, as a brute-force search for the optimal penalty is computationally infeasible. In this work, we formally propose the \textbf{$K$-level SLOPE} as a convex optimization problem, an important sub-class of SLOPE (which we term the $p$-level SLOPE) that has only $O(K)$ hyperparameters. We further develop a projected gradient descent method to search for the optimal $K$-level SLOPE penalty under a Gaussian random data matrix. Interestingly, our experiments demonstrate that even the simplest 2-level SLOPE may give a remarkable improvement over the LASSO and be comparable to the $p$-level SLOPE, suggesting its usefulness for practitioners.
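For context, a minimal sketch of the underlying optimization problem (this is the standard SLOPE formulation, not text quoted from the abstract; the description of the $K$-level restriction reflects the naming convention rather than a definition from the paper): SLOPE estimates the coefficients by solving the sorted-$\ell_1$-penalized least squares problem
$$\hat{\beta} \;=\; \underset{b \in \mathbb{R}^p}{\arg\min}\;\; \tfrac{1}{2}\,\lVert y - Xb \rVert_2^2 \;+\; \sum_{i=1}^{p} \lambda_i\, \lvert b \rvert_{(i)}, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,$$
where $\lvert b \rvert_{(1)} \ge \cdots \ge \lvert b \rvert_{(p)}$ are the entries of $b$ sorted by magnitude, so larger coefficients receive larger penalties; the LASSO is recovered when all $\lambda_i$ are equal. Under this notation, a $K$-level SLOPE restricts the sequence $(\lambda_1, \dots, \lambda_p)$ to take at most $K$ distinct values, while the general SLOPE with an unrestricted sequence corresponds to the $p$-level case.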