Sparse group lasso and high dimensional multinomial classification

Abstract

The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples, the multinomial sparse group lasso clearly outperforms the multinomial lasso, both in terms of achieved classification error rate and in terms of including fewer features in the classification rule. An implementation of the multinomial sparse group lasso algorithm is available in the R package msgl. Its performance scales well with the problem size, as illustrated by one of the examples considered: a 50-class classification problem with 10,000 features, which amounts to estimating 500,000 parameters.
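For context, a standard formulation of the sparse group lasso objective is sketched below; the notation and weight conventions are assumptions and may differ from the paper's exact setup. Given a convex loss $f$ and a partition of the parameter vector $\beta \in \mathbb{R}^n$ into $m$ groups $\beta^{(1)}, \dots, \beta^{(m)}$, the estimator solves

\[
\hat{\beta} = \operatorname*{arg\,min}_{\beta}\; f(\beta) + \lambda \Big( (1-\alpha) \sum_{J=1}^{m} \gamma_J \lVert \beta^{(J)} \rVert_2 + \alpha \sum_{i=1}^{n} \xi_i \lvert \beta_i \rvert \Big),
\]

where $\alpha \in [0,1]$ interpolates between the group lasso ($\alpha = 0$) and the lasso ($\alpha = 1$), and $\gamma_J$, $\xi_i$ are nonnegative weights. In the multinomial case, $f$ is the multinomial (softmax) negative log-likelihood, and a natural grouping, assumed here, collects the parameters of each feature across all classes, so that group sparsity removes whole features from the classification rule.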

Publication
Computational Statistics & Data Analysis, 71, 771-786