Predicting conditional probability densities of stationary stochastic time series

Husmeier, D. and Taylor, J.G. (1997) Predicting conditional probability densities of stationary stochastic time series. Neural Networks, 10(3), pp. 479-497. (doi: 10.1016/S0893-6080(96)00062-7)



Feedforward neural networks applied to time series prediction are usually trained to predict the next time step x(t + 1) as a function of m previous values, x(t) := (x(t), x(t − 1), …, x(t − m + 1)), which, if a sum-of-squares error function is chosen, results in predicting the conditional mean 〈y|x(t)〉. However, further information about the distribution is lost, which is a serious drawback especially in the case of multimodality, where the conditional mean alone turns out to be an insufficient or even misleading quantity. The only satisfactory approach in the general case is therefore to predict the whole conditional probability density of the time series, P(x(t + 1)|x(t), x(t − 1), …, x(t − m + 1)). We deduce here a two-hidden-layer universal approximator network for modelling this function, and develop a training algorithm from maximum likelihood. The method is tested on three time series of different nature, demonstrating how state-space dependent variances and multimodal transitions can be learned. We finally compare with other recent neural network approaches to this problem and state results on a benchmark problem.
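As a rough illustration of the idea (not the authors' exact architecture), the sketch below uses a single-hidden-layer network that maps the lag vector x(t) to the parameters of a K-component Gaussian mixture modelling P(x(t + 1)|x(t), …, x(t − m + 1)); the negative log-likelihood over the series is the maximum-likelihood training criterion. All names, sizes, and the mixture parameterisation are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for the sketch: m lags, K mixture components, H hidden units.
m, K, H = 3, 2, 8

# Randomly initialised weights of a one-hidden-layer parameter network
# (the paper uses a two-hidden-layer universal approximator; this is a
# simplified stand-in to show the density-output idea).
W1 = rng.normal(scale=0.3, size=(H, m)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.3, size=(3 * K, H)); b2 = np.zeros(3 * K)

def mixture_params(x):
    """Forward pass: lag vector -> (mixing weights, means, std devs)."""
    h = np.tanh(W1 @ x + b1)
    z = W2 @ h + b2
    logits, mu, log_sigma = z[:K], z[K:2 * K], z[2 * K:]
    pi = np.exp(logits - logits.max()); pi /= pi.sum()  # softmax priors
    sigma = np.exp(log_sigma)                           # positive widths
    return pi, mu, sigma

def cond_density(y, x):
    """P(x(t+1) = y | lag vector x) as a Gaussian mixture."""
    pi, mu, sigma = mixture_params(x)
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(pi @ comp)

def neg_log_likelihood(series):
    """Maximum-likelihood training criterion over a scalar series."""
    nll = 0.0
    for t in range(m, len(series)):
        x = series[t - 1::-1][:m]  # (x(t-1), x(t-2), ..., x(t-m))
        nll -= np.log(cond_density(series[t], x) + 1e-300)
    return nll
```

Because the output is a full density rather than a point estimate, state-dependent variance and multimodal transitions are represented directly: the mixing weights, means, and widths all vary with the lag vector x(t).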

Item Type: Articles
Glasgow Author(s) Enlighten ID: Husmeier, Professor Dirk
Authors: Husmeier, D., and Taylor, J.G.
College/School: College of Science and Engineering > School of Mathematics and Statistics > Statistics
Journal Name: Neural Networks
