Learning from examples to classify inputs according to their Hamming distance from a set of prototypes is studied analytically in a single-layer network. Using a statistical mechanical analysis, we calculate the average error, ε, made by the system in classifying novel inputs, as a function of the number of learnt examples. We demonstrate the importance of allowing errors on the learnt examples. When the number, P, of learnt examples is large, ε decreases as a power law in 1/P, reflecting the absence of a gap in the spectrum of ε.
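As an illustration of the setup only (not of the statistical-mechanical calculation), the sketch below simulates a minimal two-prototype instance of the task, where the Hamming-distance rule is linearly realizable and can be learnt by a single-layer perceptron. The dimension N, the number and choice of prototypes, the plain perceptron training rule, and all numerical parameters are assumptions made for illustration, and a Monte Carlo estimate on fresh inputs stands in for the analytical average error ε.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # input dimension (assumed)
p0, p1 = rng.choice([-1, 1], size=(2, N))    # two random +/-1 prototypes (assumed K = 2)

def target(x):
    # Label +1 if x is Hamming-closer to p0 than to p1, else -1.
    # For +/-1 vectors this rule equals sign(x . (p0 - p1)), so a
    # single-layer network can represent it exactly.
    return 1 if np.sum(x != p0) < np.sum(x != p1) else -1

def train_perceptron(X, y, epochs=50):
    # Plain perceptron updates on the P learnt examples (illustrative
    # learning rule, not the one analyzed in the text).
    w = np.zeros(N)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
    return w

def generalization_error(w, n_test=5000):
    # Monte Carlo estimate of the average error on novel random inputs.
    Xt = rng.choice([-1, 1], size=(n_test, N))
    preds = np.where(Xt @ w > 0, 1, -1)
    truth = np.array([target(x) for x in Xt])
    return np.mean(preds != truth)

# Estimated error as a function of the number P of learnt examples.
for P in (10, 50, 250, 1250):
    X = rng.choice([-1, 1], size=(P, N))
    y = np.array([target(x) for x in X])
    w = train_perceptron(X, y)
    print(f"P = {P:5d}   estimated eps = {generalization_error(w):.3f}")
```

In this simulated instance the estimated error falls as P grows, in qualitative agreement with the decay of ε described above; the quantitative power law in 1/P is a result of the analytical calculation, not of this sketch.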