Bayesian Penalty Mixing with the Spike and Slab Lasso
Despite the wide adoption of spike-and-slab methodology for Bayesian variable selection, its potential for penalized likelihood estimation has largely been overlooked. We bridge this gap by cross-fertilizing these two paradigms with the Spike-and-Slab Lasso, a procedure for simultaneous variable selection and parameter estimation in linear regression. A mixture of two Laplace distributions, the Spike-and-Slab Lasso prior induces a new class of self-adaptive penalty functions that arise from a fully Bayes spike-and-slab formulation, ultimately moving beyond the separable penalty framework. A virtue of these non-separable penalties is their ability to borrow strength across coordinates, adapt to ensemble sparsity information, and perform an automatic multiplicity adjustment. With a path-following scheme for dynamic posterior exploration and efficient EM and coordinate-wise implementations, the fully Bayes penalty is seen to mimic oracle performance, providing a viable alternative to cross-validation. Further elaborations of the Spike-and-Slab Lasso for fast Bayesian factor analysis illuminate its broad potential. (This is joint work with Veronika Rockova.)
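To make the "mixture of two Laplace distributions" concrete: the prior on each coefficient is (1 - theta) * psi(beta | lambda0) + theta * psi(beta | lambda1), where psi(beta | lambda) = (lambda/2) exp(-lambda |beta|), lambda0 is a large spike rate and lambda1 a small slab rate. The self-adaptivity comes from the conditional slab probability p*(beta), which blends the two rates into a coefficient-specific shrinkage amount. A minimal sketch (the specific lambda0, lambda1, theta values used below are illustrative, not from the talk):

```python
import numpy as np

def laplace_pdf(beta, lam):
    # Laplace (double-exponential) density: psi(beta | lambda) = (lambda/2) exp(-lambda |beta|)
    return 0.5 * lam * np.exp(-lam * np.abs(beta))

def ssl_posterior_weight(beta, theta, lam0, lam1):
    # p*(beta): conditional probability that beta was drawn from the slab,
    # given the mixing weight theta, spike rate lam0, slab rate lam1
    slab = theta * laplace_pdf(beta, lam1)
    spike = (1.0 - theta) * laplace_pdf(beta, lam0)
    return slab / (slab + spike)

def ssl_effective_shrinkage(beta, theta, lam0, lam1):
    # lambda*(beta): the adaptive penalty rate implied by the mixture prior;
    # interpolates between heavy spike shrinkage (lam0) for small coefficients
    # and light slab shrinkage (lam1) for large ones
    p_star = ssl_posterior_weight(beta, theta, lam0, lam1)
    return p_star * lam1 + (1.0 - p_star) * lam0

# Illustrative values: near zero the spike dominates and shrinkage is heavy;
# for a large coefficient the slab takes over and shrinkage relaxes toward lam1.
print(ssl_effective_shrinkage(0.0, theta=0.5, lam0=20.0, lam1=0.5))
print(ssl_effective_shrinkage(5.0, theta=0.5, lam0=20.0, lam1=0.5))
```

The non-separability described above enters when theta is itself given a prior and updated from all coordinates jointly, which is how the penalty borrows strength across coefficients; the sketch fixes theta only to keep the example short.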