TILJ5101 Penalized and Bayesian Models for Variable Selection in Linear Regression: Inference, Model Averaging, and Prediction (JSS28) (3 ECTS)

Grading scale
Pass/fail
Language(s) of instruction
English
Responsible person(s)
Juha Karvanen, Matti Vihola

Learning outcomes

The student can evaluate, compare, and apply the most common variable selection methods in their own work and is aware of the typical properties of the different methods. The student also understands some of the basic philosophical differences between the methods.

Completion methods

Lectures, practicals, coursework

Content

The course covers common methods for simultaneous variable selection and parameter estimation in linear regression models, in settings that typically have more unknown parameters than data points (i.e., high-dimensional problems). Covered methods include the LASSO, the Bayesian LASSO, the elastic net, stochastic search variable selection (SSVS), spike-and-slab priors, and many more. The penalized least squares / maximum likelihood framework is compared with Bayesian estimation using Markov chain Monte Carlo (MCMC) and with the faster maximum a posteriori (MAP) estimation. Particular attention is paid to the difference between two goals: using the model for prediction and formally identifying the important covariates out of a large number of candidates. Many of the variable selection methods covered in the course can be extended to generalized linear models. A brief illustrative formulation is given below.
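As a concrete illustration of the penalized and Bayesian viewpoints compared in the course (a standard textbook formulation added here for orientation, not part of the official course description), the LASSO estimate in the linear model $y = X\beta + \varepsilon$ solves

\[
\hat{\beta}_{\mathrm{LASSO}}
  = \arg\min_{\beta \in \mathbb{R}^p}
    \left\{ \tfrac{1}{2} \lVert y - X\beta \rVert_2^2
            + \lambda \lVert \beta \rVert_1 \right\},
  \qquad \lambda > 0,
\]

which coincides (up to a reparametrization of $\lambda$) with the MAP estimate under independent Laplace priors $p(\beta_j \mid \lambda) \propto \exp(-\lambda \lvert \beta_j \rvert)$. The Bayesian LASSO explores the full posterior of the same model by MCMC rather than reporting only its mode, while spike-and-slab priors replace the Laplace prior with a two-component mixture, e.g.

\[
\beta_j \sim (1 - w)\,\delta_0 + w\,\mathrm{N}(0, \tau^2),
\]

so that posterior inclusion probabilities can be used for formal variable selection.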

Prerequisites

Background in linear models and regression; a basic course in Bayesian statistics (and knowledge of Markov chain Monte Carlo)