Poster (GatherTown) in Workshop: GroundedML: Anchoring Machine Learning in Classical Algorithmic Theory

K-level SLOPE: Simplified and Adaptive Variable Selection for Optimization of Estimation Risk

Zhiqi Bu · Rachel Wu


Abstract: Among sparse linear models, SLOPE generalizes the LASSO via an adaptive $\ell_1$ regularization that applies heavier penalties to larger entries of the estimator. To achieve such adaptivity in an $n\times p$ problem, SLOPE requires a penalty sequence in $\mathbb{R}^p$, in contrast to the single penalty scalar of the LASSO. Tuning this $\mathbb{R}^p$ SLOPE penalty in high dimensions poses a challenge, as a brute-force search for the optimal penalty is computationally infeasible. In this work, we formally propose the \textbf{K-level SLOPE} as a convex optimization problem; it is an important sub-class of SLOPE (which we term the p-level SLOPE) and has only $(2K-1)\ll p$ hyperparameters. We further develop a projected gradient descent method to search for the optimal K-level SLOPE penalty under a Gaussian random data matrix. Interestingly, our experiments demonstrate that even the simplest 2-level SLOPE can give a substantial improvement over the LASSO while remaining comparable to the p-level SLOPE, suggesting its usefulness for practitioners.
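To make the parameterization concrete, the sketch below builds a K-level penalty sequence from K penalty heights and K-1 split proportions ($2K-1$ numbers in total) and evaluates the standard sorted-$\ell_1$ SLOPE penalty $J_\lambda(\beta)=\sum_i \lambda_i |\beta|_{(i)}$. This is a minimal illustrative sketch, not the authors' implementation; the names `levels`, `props`, and the rounding scheme for assigning coordinates to levels are assumptions made here for demonstration.

```python
import numpy as np

def k_level_penalty(levels, props, p):
    """Build a non-increasing SLOPE penalty vector in R^p taking only K
    distinct values. `levels` holds the K penalty heights and `props` the
    first K-1 proportions of coordinates assigned to each level; the last
    proportion is 1 - sum(props), giving 2K-1 hyperparameters in total.
    (Hypothetical parameterization, for illustration only.)"""
    heights = np.sort(np.asarray(levels, dtype=float))[::-1]
    fractions = np.asarray(list(props) + [1.0 - sum(props)])
    counts = np.round(fractions * p).astype(int)
    counts[-1] = p - counts[:-1].sum()  # ensure the counts sum to exactly p
    return np.repeat(heights, counts)

def slope_penalty(beta, lam):
    """Sorted-l1 SLOPE penalty: sum_i lam_i * |beta|_(i), where |beta| is
    sorted in decreasing order and lam is non-increasing."""
    return float(np.dot(lam, np.sort(np.abs(beta))[::-1]))

# 2-level example: heights (2.0, 0.5), with 30% of coordinates at the high level
p = 10
lam = k_level_penalty(levels=[2.0, 0.5], props=[0.3], p=p)
beta = np.random.default_rng(0).normal(size=p)
print(lam)                      # [2. 2. 2. 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
print(slope_penalty(beta, lam))
```

With K = 2 this requires tuning only three numbers (two heights and one split), which is what makes a direct search, such as the projected gradient descent mentioned in the abstract, computationally feasible compared with tuning a full length-p sequence.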
