Girolami, M. (2001) A variational method for learning sparse and overcomplete representations. Neural Computation, 13(11), pp. 2517-2532. (doi: 10.1162/089976601753196003)
Abstract
An expectation-maximization algorithm for learning sparse and overcomplete data representations is presented. The proposed algorithm exploits a variational approximation to a range of heavy-tailed distributions whose limit is the Laplacian. A rigorous lower bound on the sparse prior distribution is derived, which enables the analytic marginalization of a lower bound on the data likelihood. This marginalized bound in turn supports an expectation-maximization algorithm for learning the overcomplete basis vectors and inferring the most probable basis coefficients.
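To make the abstract concrete, the following is a minimal illustrative sketch (not the paper's exact derivation) of variational-EM-style sparse coding. It assumes the standard quadratic bound on the Laplacian prior, |s| ≤ s²/(2ξ) + ξ/2, under which the E-step reduces to a reweighted ridge solve and the M-step to a least-squares basis update. The function name, the fixed sparsity weight `lam`, and the normalization scheme are all assumptions made for the sketch.

```python
import numpy as np

def variational_sparse_coding(X, n_basis, n_iter=50, lam=1.0, eps=1e-6):
    """Illustrative variational-EM sketch for sparse, overcomplete coding.

    X: (d, n) data matrix with n observations of dimension d.
    Learns an overcomplete basis A of shape (d, n_basis) with n_basis > d,
    and coefficients S of shape (n_basis, n), using the quadratic bound
    |s| <= s^2 / (2*xi) + xi / 2 on the Laplacian prior (xi set to |s|).
    This is a hypothetical sketch, not the paper's exact update equations.
    """
    rng = np.random.default_rng(0)
    d, n = X.shape
    A = rng.standard_normal((d, n_basis))
    A /= np.linalg.norm(A, axis=0)            # unit-norm basis vectors
    S = rng.standard_normal((n_basis, n)) * 0.1

    for _ in range(n_iter):
        # E-step: under the bound, the sparse prior acts like a Gaussian
        # with per-coefficient precision lam / |s|, so the most probable
        # coefficients solve a reweighted ridge-regression system.
        G = A.T @ A
        for j in range(n):
            w = lam / (np.abs(S[:, j]) + eps)  # variational weights
            S[:, j] = np.linalg.solve(G + np.diag(w), A.T @ X[:, j])

        # M-step: least-squares update of the basis, then renormalize.
        A = X @ S.T @ np.linalg.pinv(S @ S.T + eps * np.eye(n_basis))
        A /= np.linalg.norm(A, axis=0) + eps

    return A, S
```

Because the quadratic bound downweights coefficients that are already small, the reweighting drives many coefficients toward zero across iterations, which is the mechanism by which sparsity emerges even though each inner solve is a plain Gaussian (ridge) problem.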
| Item Type | Articles |
|---|---|
| Status | Published |
| Refereed | Yes |
| Glasgow Author(s) Enlighten ID | Girolami, Prof Mark |
| Authors | Girolami, M. |
| College/School | College of Science and Engineering > School of Computing Science |
| Journal Name | Neural Computation |
| ISSN | 0899-7667 |
| ISSN (Online) | 1530-888X |
| Published Online | 13 March 2006 |