Learning from Examples in a Single-Layer Neural Network

Learning from examples to classify inputs according to their Hamming distance from a set of prototypes, in a single-layer network, is studied analytically. Using a statistical mechanical analysis, we calculate the average error, ε, made by the system in classifying novel inputs, as a function of the number of learnt examples. The importance of introducing errors in the learning of the examples is demonstrated. When the number, P, of learnt examples is large, ε decreases as a power law in 1/P, reflecting the absence of a gap in the spectrum of ε.
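The setup can be illustrated with a minimal sketch (not the paper's actual analysis): inputs are labelled by which of two random prototypes is nearer in Hamming distance, a single-layer perceptron is trained on P such examples, and its error ε on novel inputs is estimated. All parameters here (N = 50, two prototypes, the perceptron training rule) are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # input dimension (hypothetical choice for illustration)

# Two random +/-1 prototypes; an input's class is the prototype
# nearer in Hamming distance (ties assigned to the first prototype).
proto = rng.choice([-1, 1], size=(2, N))

def label(x):
    # Hamming distances to each prototype; +1 if closer to proto[0].
    d0 = np.sum(x != proto[0], axis=-1)
    d1 = np.sum(x != proto[1], axis=-1)
    return np.where(d0 <= d1, 1, -1)

def perceptron(X, y, epochs=50):
    # Classic perceptron updates on misclassified examples.
    w = np.zeros(N)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
    return w

def gen_error(w, n_test=2000):
    # Empirical error on novel random inputs (estimate of epsilon).
    Xt = rng.choice([-1, 1], size=(n_test, N))
    return np.mean(np.sign(Xt @ w) != label(Xt))

errs = []
for P in (20, 200):
    X = rng.choice([-1, 1], size=(P, N))
    w = perceptron(X, label(X))
    errs.append(gen_error(w))
```

For two prototypes the target rule is linearly separable, so a single-layer network can represent it; the empirical error on novel inputs typically shrinks as the number of training examples P grows, in qualitative agreement with the abstract's claim.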

Authors: D. Hansel and H. Sompolinsky
Year of publication: 1990
Journal: Europhysics Letters 11, 687
