A universal approximator network for learning conditional probability densities

Husmeier, D., Allen, D. and Taylor, J.G. (1997) A universal approximator network for learning conditional probability densities. In: Ellacott, S.W., Mason, J.C. and Anderson, I.J. (eds.) Mathematics of Neural Networks. Series: Operations Research/Computer Science Interfaces Series, 8 (8). Springer-Verlag: New York, NY, USA, pp. 198-203. ISBN 9781461377948 (doi: 10.1007/978-1-4615-6099-9_32)

Full text not currently available from Enlighten.

Publisher's URL: http://dx.doi.org/10.1007/978-1-4615-6099-9_32

Abstract

A general approach is developed to learn the conditional probability density for a noisy time series. A universal architecture is proposed, which avoids difficulties with the singular low-noise limit. A suitable error function is presented enabling the probability density to be learnt. The method is compared with other recently developed approaches, and its effectiveness demonstrated on a time series generated from a non-trivial stochastic dynamical system.
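The full text is not available from this record, so the sketch below is only a hedged illustration of the general idea stated in the abstract: a network that outputs the parameters of a conditional probability density p(x_t | x_{t-1}) for a noisy time series and is trained by minimising a negative log-likelihood error function. The library (PyTorch), the single-Gaussian output, the layer sizes and the toy autoregressive series are all assumptions for illustration; the paper's universal architecture and its handling of the singular low-noise limit are not reproduced here.

    # Minimal sketch (not the paper's architecture): a small network maps the
    # previous value of a noisy time series to the mean and log-variance of a
    # Gaussian conditional density p(x_t | x_{t-1}), trained by minimising the
    # negative log-likelihood.  All names and sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ConditionalGaussianNet(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(1, hidden), nn.Tanh())
            self.mean_head = nn.Linear(hidden, 1)
            self.logvar_head = nn.Linear(hidden, 1)  # log-variance keeps sigma > 0

        def forward(self, x):
            h = self.body(x)
            return self.mean_head(h), self.logvar_head(h)

    def nll(mean, logvar, target):
        # Gaussian negative log-likelihood (up to an additive constant): the
        # error function whose minimisation makes the outputs a conditional density.
        return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

    # Toy noisy series (assumed for illustration): x_{t+1} = 0.9 x_t + heteroscedastic noise.
    torch.manual_seed(0)
    noise = torch.randn(500)
    x = torch.zeros(500)
    for t in range(499):
        x[t + 1] = 0.9 * x[t] + (0.1 + 0.2 * x[t].abs()) * noise[t]

    inputs, targets = x[:-1].unsqueeze(1), x[1:].unsqueeze(1)
    model = ConditionalGaussianNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        mean, logvar = model(inputs)
        loss = nll(mean, logvar, targets)
        loss.backward()
        opt.step()
    # After training, model(x_new) returns the mean and log-variance of the
    # learnt conditional density for the next value of the series.
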

Item Type: Book Section
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Husmeier, Professor Dirk
Authors: Husmeier, D., Allen, D., and Taylor, J.G.
College/School: College of Science and Engineering > School of Mathematics and Statistics
Book Title: Mathematics of Neural Networks
Publisher: Springer-Verlag
ISSN: 1387-666X
ISBN: 9781461377948
