The mechanism by which humans and animals store retrievable information over lifelong time scales has been a key and hitherto unresolved problem in neuroscience. Most current models of long-term memory exhibit “catastrophic forgetting,” in which new memories overwrite old ones. The consolidation of memories (in part through the replay of old memories) is well documented; however, no existing neuronal circuit model shows how consolidation leads to lifelong retention of memories at the capacity scale expected in humans and animals. Our article in Scientific Reports introduces a recurrent neural network model that enables lifelong memory.
Our model stores a continuous stream of memories through Hebbian learning with synaptic weight decay (synaptic recycling). Consolidation is modeled as a regenerative stochastic process in which memories are revisited at random, with a probability that increases with the effect the memory has on network connectivity. This mechanism gives rise to realistic, gracefully decaying forgetting curves and to memory lifetimes that are orders of magnitude longer than the characteristic synaptic recycling time scale. Our article includes analyses of the memory capabilities and intrinsic network properties, with novel outcomes such as a power-law relation between memory capacity and the number of neurons in the circuit. Finally, by perturbing the model, we observe effects resembling known human memory deficits.
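To make the mechanism concrete, here is a minimal Monte Carlo sketch of the kind of efficacy dynamics described above. It is an illustration under assumed parameters, not the model or the values used in the article: each memory's efficacy A decays exponentially with the synaptic recycling time constant tau, while discrete rehearsal events, whose rate we assume grows as a saturating (sigmoidal) function of A, boost the efficacy back up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions for this sketch, not the paper's values)
tau = 10.0     # synaptic recycling (decay) time constant
dt = 0.05      # simulation time step
r_max = 0.4    # maximal rehearsal rate
theta = 1.0    # efficacy around which rehearsal becomes likely
beta = 0.1     # sharpness of the rate's dependence on efficacy
boost = 0.5    # efficacy gain per rehearsal event
A0 = 1.5       # efficacy at encoding
A_c = 1.0      # retrievability threshold (illustrative)

def rehearsal_rate(A):
    """Rate of rehearsal events: a saturating function of efficacy, so
    memories that shaped connectivity strongly are revisited more often."""
    return r_max / (1.0 + np.exp(-(A - theta) / beta))

def simulate(n_mem=2000, T=500.0, rehearse=True):
    """Evolve n_mem memory efficacies, all encoded at t = 0."""
    A = np.full(n_mem, A0)
    for _ in range(int(T / dt)):
        A *= np.exp(-dt / tau)                      # passive synaptic decay
        if rehearse:
            events = rng.random(n_mem) < rehearsal_rate(A) * dt
            A[events] += boost                      # consolidation kick
    return A

print(f"retrievable after 50*tau with rehearsal:   "
      f"{(simulate() > A_c).mean():.2f}")
print(f"retrievable after 50*tau, pure forgetting: "
      f"{(simulate(rehearse=False) > A_c).mean():.2f}")
```

Because the assumed rehearsal rate collapses once A drifts below theta, forgetting in this sketch is an escape event: rare, so typical memory lifetimes far exceed tau, yet inevitable, which yields a gracefully decaying forgetting curve rather than a hard cutoff.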
Figures:

Bottom: Distribution of memory efficacies. The distribution of efficacies at equilibrium consists of two modes: the first, below A_c, is the contribution of the forgotten memories and diverges as 1/A at small A; the second, above A_c, is a mode around A_fp representing the retrievable memories. The number of retrievable memories stays approximately constant, but the identity of the retrievable memories keeps changing: new ones are inserted and consolidated while others are forgotten.
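The same bimodal shape can be reproduced qualitatively with the toy dynamics sketched above (again under assumed, illustrative parameters) by encoding a fresh memory at regular intervals and histogramming the efficacies after a long run:

```python
# Steady-state efficacy distribution under a continuous memory stream,
# reusing tau, dt, boost, A0, A_c and rehearsal_rate from the sketch above.
A = np.array([A0])
encode_every = int(1.0 / dt)   # one new memory per unit time (assumption)
for step in range(int(1000.0 / dt)):
    A *= np.exp(-dt / tau)
    A[rng.random(A.size) < rehearsal_rate(A) * dt] += boost
    if step % encode_every == 0:
        A = np.append(A, A0)   # encode a fresh memory

# Expect two modes: a 1/A-like pile-up of forgotten memories below A_c,
# and a bump of consolidated, retrievable memories around A_fp.
hist, edges = np.histogram(A[A > 1e-3], bins=np.geomspace(1e-3, 3.0, 40))
print(f"retrievable memories at steady state: {(A > A_c).sum()} of {A.size}")
```

The 1/A pile-up below A_c arises because forgotten memories decay exponentially while encoding times are spread uniformly; the bump near A_fp is maintained by the rehearsal kicks balancing the decay.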
b: The forgetting curve. The probability of memory retrieval as a function of memory age. Blue dots: full network simulation results. Red solid lines: results of a mean-field approximation. An exponential fit is shown in green (dash-dot line). The retrieval probability for pure forgetting (a model with decay but without rehearsals) is shown in black (dashed line).
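A forgetting curve of this kind can be read out of the same toy simulation by tracking a single cohort over time; the pure-forgetting baseline is the same run with rehearsal switched off. This is a sketch under the assumed dynamics above, not the article's network simulation or mean-field calculation:

```python
# Forgetting curve: fraction of a cohort encoded at t = 0 that is still
# above the retrieval threshold A_c, as a function of memory age.
def forgetting_curve(T=500.0, n_mem=2000, rehearse=True, sample_every=2.0):
    A = np.full(n_mem, A0)
    ages, retrievable = [], []
    for step in range(int(T / dt)):
        A *= np.exp(-dt / tau)
        if rehearse:
            A[rng.random(n_mem) < rehearsal_rate(A) * dt] += boost
        if step % int(sample_every / dt) == 0:
            ages.append(step * dt)
            retrievable.append((A > A_c).mean())
    return np.array(ages), np.array(retrievable)

ages, p_rehearsal = forgetting_curve()
_, p_pure = forgetting_curve(rehearse=False)
# With rehearsal the curve decays gracefully over many multiples of tau;
# without it, retrieval collapses on the time scale of tau itself.
```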