Feature discovery under contextual supervision using mutual information

Kay, J. (1992) Feature discovery under contextual supervision using mutual information. In: Proceedings of the 1992 International Joint Conference on Neural Networks: Baltimore, Maryland, June 7-11, 1992. IEEE: New York, pp. 79-84. ISBN 9780780305595 (doi: 10.1109/IJCNN.1992.227286)



The author considers a neural network in which the inputs are divided into two groups, termed primary inputs and contextual inputs. The goal of the network is to discover those linear functions of the primary inputs that are maximally related to the information contained in the contextual units. The strength of the relationship between the two sets of inputs is measured by their average mutual information. When the inputs follow a multivariate, elliptically symmetric probability model, this is equivalent to performing a canonical correlation analysis. A stochastic algorithm is introduced to achieve this analysis. Theoretical details, including a convergence result, are presented, and some possible nonlinear extensions are discussed.
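The equivalence noted in the abstract can be illustrated directly. The sketch below (a hypothetical example, not the paper's stochastic algorithm) computes a closed-form canonical correlation analysis between "primary" inputs X and "contextual" inputs Y, then evaluates the Gaussian mutual information implied by the canonical correlations, I(X; Y) = -(1/2) Σ log(1 - ρᵢ²); the data, dimensions, and noise levels are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a 2-dimensional shared latent signal z:
# X = "primary" inputs (5-dim), Y = "contextual" inputs (4-dim).
n = 5000
z = rng.standard_normal((n, 2))
X = np.hstack([z, rng.standard_normal((n, 3))])
Y = np.hstack([z + 0.5 * rng.standard_normal((n, 2)),
               rng.standard_normal((n, 2))])

# Sample covariance blocks.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
Cxx = Xc.T @ Xc / n
Cyy = Yc.T @ Yc / n
Cxy = Xc.T @ Yc / n

def inv_sqrt(C):
    """Symmetric inverse square root of a positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Whiten each block; the singular values of the whitened
# cross-covariance are the canonical correlations.
M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
rho = np.linalg.svd(M, compute_uv=False)

# Gaussian mutual information between X and Y, in nats.
mi = -0.5 * np.sum(np.log(1.0 - rho**2))
print("canonical correlations:", np.round(rho, 3))
print("mutual information (nats):", round(mi, 3))
```

With the shared latent signal above, the two leading canonical correlations are close to the population value 1/√1.25 ≈ 0.894, and the remaining ones are near zero, so the mutual information is carried almost entirely by the first two canonical pairs.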

Item Type: Book Section
Glasgow Author(s) Enlighten ID: Kay, Dr James
Authors: Kay, J.
College/School: College of Science and Engineering > School of Mathematics and Statistics > Statistics
