Publications

An Information Maximization Approach to Overcomplete and Recurrent Representations

The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entropy with respect to both the feedforward connections as well as the recurrent interactions results in simple learning rules for both sets of parameters. The conventional independent components analysis (ICA) learning algorithm can be recovered as a special case where there is an equal number of output units and no recurrent connections. The application of these new learning rules is illustrated on a simple two-dimensional input example.
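For a concrete handle on the special case mentioned in the abstract, the sketch below runs the standard infomax ICA learning rule (the Bell-Sejnowski natural-gradient form) on a toy two-source mixture. The sources, mixing matrix, learning rate, and batch size are illustrative assumptions, not values from the paper, and the sketch covers only the square, feedforward limit; the paper's contribution is extending this entropy-maximization objective to more output units than inputs and to recurrent interactions.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions, not from the paper): two independent
# Laplacian sources linearly mixed into a two-dimensional observed input.
n, T = 2, 5000
S = rng.laplace(size=(n, T))                  # independent super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # assumed mixing matrix
X = A @ S                                     # observed inputs

# Infomax ICA -- the equal-units, no-recurrence special case noted in the
# abstract. Maximize the output entropy H(g(W x)) with logistic g, using the
# natural-gradient rule  dW = (I + (1 - 2 y) u^T) W,  u = W x,  y = g(u).
W = np.eye(n)
lr = 0.005
for epoch in range(100):
    for t in range(0, T, 100):                # mini-batches of 100 samples
        U = W @ X[:, t:t + 100]
        Y = 1.0 / (1.0 + np.exp(-U))          # logistic output nonlinearity
        B = U.shape[1]
        dW = (np.eye(n) + (1.0 - 2.0 * Y) @ U.T / B) @ W
        W += lr * dW

# If separation succeeds, W @ A approaches a scaled permutation matrix.
print(np.round(W @ A, 2))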

Authors: Shriki O., Sompolinsky H., Lee D.
Year of publication: 2001
Published in: Advances in Neural Information Processing Systems 13 (NIPS 2000)

Labs: “Working memory”