Publications

On-line Gibbs learning

We propose a new model of on-line learning that is appropriate for learning realizable and unrealizable functions, both smooth and threshold. Following each presentation of an example, the new weights are chosen from a Gibbs distribution with an on-line energy that balances the need to minimize the instantaneous error against the need to minimize the change in the weights. We show that this algorithm finds the weights that minimize the generalization error in the limit of an infinite number of examples, with an asymptotic rate of convergence similar to that of batch learning.
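
The update rule described in the abstract lends itself to a compact illustration. Below is a minimal sketch in Python of one plausible reading, not the paper's exact scheme: after each example, candidate weights are sampled from a Gibbs distribution over an on-line energy that adds the instantaneous error to a penalty on the weight change, here via a single Metropolis step. The linear-perceptron error, the quadratic change penalty, and the parameter names (beta, lam, step) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def online_gibbs_step(w, x, y, beta=10.0, lam=1.0, step=0.1):
    """One on-line Gibbs update (illustrative sketch, not the authors' code).

    The on-line energy balances the instantaneous error on the current
    example (x, y) against the squared change in the weights:
        E(w') = (y - w'.x)^2 + (lam / 2) * ||w' - w||^2
    New weights are drawn from the Gibbs distribution exp(-beta * E),
    here via a single Metropolis-Hastings proposal.
    """
    def energy(wp):
        err = (y - wp @ x) ** 2                     # instantaneous error (linear output assumed)
        drift = 0.5 * lam * np.sum((wp - w) ** 2)   # penalty on changing the weights
        return err + drift

    proposal = w + step * rng.standard_normal(w.shape)
    d_e = energy(proposal) - energy(w)
    # Standard Metropolis acceptance under the Gibbs measure
    if d_e <= 0 or rng.random() < np.exp(-beta * d_e):
        return proposal
    return w

# Usage: learn a teacher's linear rule from a stream of examples
d = 20
teacher = rng.standard_normal(d) / np.sqrt(d)
w = np.zeros(d)
for t in range(5000):
    x = rng.standard_normal(d)
    y = teacher @ x                                 # realizable, smooth target
    w = online_gibbs_step(w, x, y)
```

In this sketch a large beta concentrates the Gibbs distribution near the energy minimum, while lam sets the trade-off the abstract describes: larger lam suppresses weight changes per example, smaller lam lets the instantaneous error dominate.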

Authors: Kim JW, Sompolinsky H.
Year of publication: 1996
Journal: Phys Rev Lett. 1996 Apr 15;76(16):3021-3024.

Labs: Working memory