The Bayesian evidence scheme for regularizing probability-density estimating neural networks

Husmeier, D. (2000) The Bayesian evidence scheme for regularizing probability-density estimating neural networks. Neural Computation, 12(11), pp. 2685-2717. (doi: 10.1162/089976600300014890)

Full text: 85654.pdf - Published Version (329kB)

Abstract

Training probability-density estimating neural networks with the expectation-maximization (EM) algorithm aims to maximize the likelihood of the training set and therefore leads to overfitting for sparse data. In this article, a regularization method for mixture models with generalized linear kernel centers is proposed that adopts the Bayesian evidence approach and optimizes the hyperparameters of the prior by type II maximum likelihood. This requires a marginalization over the parameters, which is carried out by a Laplace approximation and calls for the derivation of the Hessian of the log-likelihood function. Incorporating this approach into the standard training scheme leads to a modified form of the EM algorithm, which includes a regularization term and adapts the hyperparameters on-line after each EM cycle. The article presents applications of this scheme to classification problems, the prediction of stochastic time series, and latent space models.
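
The sketch below (Python/NumPy) illustrates the structure the abstract describes; it is not a reproduction of the paper's actual scheme. It fits a one-dimensional Gaussian mixture with fixed kernel widths (the paper treats the more general case of generalized linear kernel centers), adds the regularization term arising from a zero-mean Gaussian prior with precision alpha to the M-step, and re-estimates alpha after each EM cycle by an evidence-style (type II maximum likelihood) update based on a Laplace approximation. The diagonal Hessian, the use of MacKay's well-determined-parameter count gamma, and all variable names are simplifying assumptions of this sketch.

# Minimal sketch of evidence-regularized EM for a 1-D Gaussian mixture
# density estimator (illustrative; not the paper's exact scheme).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 30), rng.normal(2, 0.5, 30)])  # sparse data

K, sigma2, alpha = 4, 0.25, 1.0          # components, fixed kernel width, prior precision
mu = rng.normal(0, 1, K)                 # kernel centers (the regularized parameters)
pi = np.full(K, 1.0 / K)                 # mixing coefficients

for cycle in range(50):
    # E-step: posterior responsibility of each kernel for each data point
    log_r = -0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2 + np.log(pi)[None, :]
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)

    # M-step with regularization: the zero-mean Gaussian prior on the centers
    # adds alpha * sigma2 to the effective responsibility mass, shrinking mu
    Nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / (Nk + alpha * sigma2)
    pi = Nk / Nk.sum()

    # Evidence step (type II ML): a Laplace approximation around mu gives,
    # for this simplified model, a diagonal Hessian Nk/sigma2 + alpha of the
    # negative log posterior; gamma counts the well-determined parameters
    # (MacKay's formula), and alpha is re-estimated after each EM cycle
    h = Nk / sigma2                      # data curvature per center
    gamma = np.sum(h / (h + alpha))
    alpha = gamma / max(np.dot(mu, mu), 1e-12)

print("centers:", np.round(mu, 2), " alpha:", round(alpha, 3))

On sparse data, the shrinkage in the M-step keeps redundant centers from locking onto individual training points, which is the overfitting failure mode the abstract attributes to plain maximum-likelihood EM.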

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Husmeier, Professor Dirk
Authors: Husmeier, D.
College/School: College of Science and Engineering > School of Mathematics and Statistics > Statistics
Journal Name: Neural Computation
Publisher: MIT Press
ISSN: 0899-7667
ISSN (Online): 1530-888X
Copyright Holders: Copyright © 2000 Massachusetts Institute of Technology
First Published: First published in Neural Computation 12(11):2685-2717
Publisher Policy: Reproduced in accordance with the copyright policy of the publisher
