The minimum information principle and its application to neural code analysis

The study of complex information processing systems requires appropriate theoretical tools to help unravel their underlying design principles. Information theory is one such tool, and has been utilized extensively in the study of the neural code. Although much progress has been made in information theoretic methodology, there is still no satisfying answer to the question: “What is the information that a given property of the neural population activity (e.g., the responses of single cells within the population) carries about a set of stimuli?” Here, we answer such questions via the minimum mutual information (MinMI) principle. We quantify the information in any statistical property of the neural response by considering all hypothetical neuronal populations that have the given property and finding the one that contains the minimum information about the stimuli. All systems with higher information values necessarily contain additional information processing mechanisms and, thus, the minimum captures the information related to the given property alone. MinMI may be used to measure information in properties of the neural response, such as that conveyed by responses of small subsets of cells (e.g., singles or pairs) in a large population and cooperative effects between subunits in networks. We show how the framework can be used to study neural coding in large populations and to reveal properties that are not discovered by other information theoretic methods.
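The variational idea in the abstract — search over all hypothetical response distributions consistent with a measured property and keep the one with the least mutual information — can be sketched numerically. The toy below is an illustrative assumption of mine, not the authors' algorithm: two stimuli, a three-letter response alphabet, and the "property" taken to be the mean response to each stimulus (the target means, alphabet, and uniform stimulus prior are all made-up example values). With a uniform prior over the constraint set, the minimized objective is the mutual information I(S;R) in bits.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy setup (not from the paper): 2 stimuli, responses in {0, 1, 2}.
vals = np.array([0.0, 1.0, 2.0])       # response alphabet
target_means = np.array([0.5, 1.5])    # assumed "measured property": E[r | s]
p_s = np.array([0.5, 0.5])             # uniform stimulus prior

def mutual_info(x):
    """I(S;R) in bits for conditional table p(r|s) packed into x."""
    p = np.clip(x.reshape(2, 3), 1e-12, None)   # p(r|s), clipped for log safety
    p_r = p_s @ p                               # marginal p(r)
    return float(np.sum(p_s[:, None] * p * np.log2(p / p_r)))

# Constraints: each row of p(r|s) matches its target mean and sums to 1.
cons = [{"type": "eq", "fun": lambda x, i=i: x.reshape(2, 3)[i] @ vals - target_means[i]}
        for i in range(2)]
cons += [{"type": "eq", "fun": lambda x, i=i: x.reshape(2, 3)[i].sum() - 1.0}
         for i in range(2)]

x0 = np.full(6, 1.0 / 3.0)                      # start from uniform responses
res = minimize(mutual_info, x0, bounds=[(0.0, 1.0)] * 6,
               constraints=cons, method="SLSQP")

min_mi = res.fun   # bits attributable to the mean-response property alone
```

Any population with the same mean responses carries at least `min_mi` bits about the stimuli; a system measured to carry more must contain coding mechanisms beyond those means, which is the MinMI reading of the gap.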

Authors: Amir Globerson, Eran Stark, Eilon Vaadia, and Naftali Tishby
Year of publication: 2009
Journal: PNAS, March 3, 2009, 106 (9), 3490-3495

Link to publication:


“Working memory”