Presentation Title

Bayesian L1 Lasso for High Dimensional Data

Location

Atrium

Session Format

Poster Presentation

Research Area Topic

Natural & Physical Sciences - Mathematics

Co-Presenters, Co-Authors, Co-Researchers, Mentors, or Faculty Advisors

Daniel Linder, Assistant Professor of Biostatistics, Georgia Southern University

Abstract

The need to perform variable selection and estimation in high dimensional settings has grown with the introduction of new technologies capable of generating enormous amounts of data on individual observations. In many settings, the sample size (n) is smaller than the number of variables (p), which hinders the performance of traditional regression methods. Shrinkage methods such as the least absolute shrinkage and selection operator (LASSO) (Tibshirani, 1996; Hastie, Tibshirani & Friedman, 2009) have been shown to outperform traditional least squares estimates in high dimensional settings; however, they require solving a convex optimization problem. In our study, we develop a Gibbs sampler identical to the Gibbs sampler for the Bayesian Lasso (Park & Casella, 2008), but we introduce the absolute deviation (L1) loss function, yielding a modified Bayesian Lasso with L1 loss. We demonstrate that the proposed method outperforms the LASSO and the Bayesian LASSO in terms of prediction accuracy and variable selection. Our method is also applied to a real high dimensional data set.
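The abstract's starting point, the Gibbs sampler for the Bayesian Lasso (Park & Casella, 2008), can be sketched as below. Since the posterior median is the Bayes estimator under absolute-deviation (L1) loss, one simple reading of the proposed modification is to summarize coefficients by posterior medians rather than posterior means; this sketch takes that reading. The function name `bayesian_lasso_gibbs`, the fixed hyperparameter `lam`, and the simulated demo are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=2000, burn=500, seed=0):
    """Gibbs sampler for the Bayesian Lasso (Park & Casella, 2008).

    Coefficients are summarized by the posterior median, the Bayes
    estimator under absolute-deviation (L1) loss.  The shrinkage
    hyperparameter `lam` is held fixed here for simplicity.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.zeros(p)
    sigma2 = 1.0
    inv_tau2 = np.ones(p)          # 1 / tau_j^2 latent scales
    draws = np.empty((n_iter - burn, p))

    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}),  A = X'X + D^{-1}
        A_inv = np.linalg.inv(XtX + np.diag(inv_tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)

        # sigma2 | rest ~ Inverse-Gamma((n-1)/2 + p/2, rate)
        resid = y - X @ beta
        rate = resid @ resid / 2 + beta @ (inv_tau2 * beta) / 2
        sigma2 = 1.0 / rng.gamma((n - 1) / 2 + p / 2, 1.0 / rate)

        # 1/tau_j^2 | rest ~ Inverse-Gaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
        mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        inv_tau2 = rng.wald(mu, lam**2)

        if it >= burn:
            draws[it - burn] = beta

    # Posterior median = point estimate under L1 loss
    return np.median(draws, axis=0)

# Illustrative simulated data: sparse truth, n = 50, p = 10
rng = np.random.default_rng(1)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat = bayesian_lasso_gibbs(X, y, lam=1.0)
```

Reporting the posterior median rather than the mean is what makes the summary an L1-loss estimate; the three full conditionals themselves are the standard ones from Park & Casella.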

Keywords

LASSO, Regression, High dimensional data, Loss function

Presentation Type and Release Option

Presentation (Open Access)

Start Date

4-24-2015 2:45 PM

End Date

4-24-2015 4:00 PM
