COMPARISON OF REGRESSION METHODS WITH NON-CONVEX PENALTIES

Dec 1, 2019 · Brandon P. Pipher
Abstract
We examine a solution to the problem of sparse variable selection in linear models. The method is a mixed-norm $\ell_p$-$\ell_q$ algorithm with a focus on non-convex penalties, $q < 1$. Ordinary Least Squares has low bias but high variance, and prediction accuracy can sometimes be improved by accepting some bias in exchange for lower variance. Inducing sparsity also improves model interpretability, especially in the high-dimensional setting. Penalized regression methods additionally provide solutions when the Ordinary Least Squares problem is ill-posed, as it is when the number of predictors exceeds the number of observations, and they have a history of producing accurate and parsimonious models. We conduct a simulation study against the SparseNet algorithm, another penalized regression method with non-convex penalties that had previously been benchmarked against several other proposed sparsity-inducing non-convex approaches. We also compare against the more common LASSO, Ridge/Tikhonov, and Elastic Net penalties.
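For concreteness, the mixed-norm estimator can be written in its standard penalized form (the abstract does not spell out the exact formulation, so the details below, such as the choice of $p$, are the usual conventions rather than the thesis's specific setup):

$$\hat{\beta} \;=\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}^d} \; \lVert y - X\beta \rVert_p^p \;+\; \lambda \lVert \beta \rVert_q^q, \qquad 0 < q < 1,$$

where $\lVert \beta \rVert_q^q = \sum_j |\beta_j|^q$. With $q < 1$ the penalty term is non-convex (strictly speaking a quasi-norm), while the familiar LASSO ($p = 2$, $q = 1$) and Ridge ($p = 2$, $q = 2$) estimators arise as convex special cases of the same form.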
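A minimal sketch of the kind of comparison run in the simulation study, fitting the convex baselines (LASSO, Ridge, Elastic Net) on synthetic sparse data with scikit-learn. The data-generating process and the `alpha`/`l1_ratio` values here are illustrative assumptions, not the settings used in the thesis, and the non-convex $\ell_q$ and SparseNet fits are omitted since they are not available in scikit-learn:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge, ElasticNet

# Synthetic sparse linear model: only the first 5 of 50 coefficients are nonzero.
rng = np.random.default_rng(0)
n, d = 100, 50
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Illustrative penalty strengths; a real study would tune these, e.g. by cross-validation.
models = {
    "LASSO": Lasso(alpha=0.1),
    "Ridge": Ridge(alpha=1.0),
    "Elastic Net": ElasticNet(alpha=0.1, l1_ratio=0.5),
}

for name, model in models.items():
    model.fit(X, y)
    nnz = np.sum(np.abs(model.coef_) > 1e-8)        # recovered support size
    err = np.linalg.norm(model.coef_ - beta_true)   # coefficient estimation error
    print(f"{name:12s} nonzeros={nnz:2d}  L2 error={err:.3f}")
```

On data like this, LASSO and Elastic Net typically recover a sparse support while Ridge keeps all coefficients nonzero, which is the interpretability gap that sparsity-inducing penalties, convex or not, are meant to close.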