Representation of actions in auditory cortex: insights from reinforcement learning and deep neural networks

The RIFF operates as a Markov Decision Process (MDP) with the rat as the agent, allowing us to apply reinforcement learning (RL) theory to the study of rat behavior. In collaboration with the late Naftali Tishby, we developed a theory of optimal policies under information constraints (Amir et al., 2020). Rat behavior is well-approximated by these policies, enabling us to derive measures of task engagement. Intriguingly, the firing rates of neurons in an insular auditory field are strongly correlated with task alignment as measured by our theory (Niediek et al., 2024, in preparation).
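The idea of an information-constrained policy can be illustrated with a toy sketch. The snippet below is not our model; it is a minimal, hypothetical example (all values invented) of a policy of the form pi(a|s) proportional to p(a)·exp(beta·Q(s,a)), where the action marginal p(a) and the policy are iterated to a fixed point and the mutual information I(S;A) quantifies policy complexity, in the spirit of trading reward against information cost.

```python
import numpy as np

# Toy sketch (hypothetical values): an information-constrained policy
# pi(a|s) ∝ p(a) * exp(beta * Q(s, a)), iterated with its own action
# marginal p(a) until self-consistent. Not the actual task model.

rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
Q = rng.normal(size=(n_states, n_actions))   # hypothetical action values
rho = np.full(n_states, 1.0 / n_states)      # uniform state distribution
beta = 2.0                                   # reward/information trade-off

p_a = np.full(n_actions, 1.0 / n_actions)    # initial action marginal
for _ in range(200):
    # softmax over beta*Q with a prior given by the current marginal
    logits = beta * Q + np.log(p_a)
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)
    p_a = rho @ pi                           # updated action marginal

# mutual information I(S;A): the information cost of the policy (in nats)
I = float(np.sum(rho[:, None] * pi * np.log(pi / p_a[None, :])))
print(round(I, 3))
```

Larger beta yields more deterministic, higher-information policies; beta near zero collapses the policy onto the action marginal, with I(S;A) approaching zero.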

In a parallel project, we trained a deep neural network using RL to solve our sound localization task and observed representations invariant to the RIFF's symmetries emerging along the network's layers. By analyzing neural activity recorded from behaving rats, we quantified the degree of invariance to RIFF symmetries, revealing that the auditory cortex exhibits a level of invariance corresponding to the late layers of the deep neural network (Kazakov et al., 2024, in preparation).
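One simple way to quantify invariance of this kind, sketched below with invented data, is to correlate a layer's (or a neural population's) response to each stimulus with its response to the symmetry-transformed counterpart of that stimulus; a score near 1 indicates invariance, near 0 indicates unrelated responses. This is a hypothetical illustration, not the analysis used in the paper.

```python
import numpy as np

# Hypothetical sketch: score a representation's invariance to a symmetry
# as the mean Pearson correlation between activation patterns evoked by
# each stimulus and by its symmetry-transformed counterpart.

def invariance_score(acts, acts_sym):
    """acts, acts_sym: (n_stimuli, n_units) activation matrices."""
    scores = []
    for x, y in zip(acts, acts_sym):
        x = x - x.mean()
        y = y - y.mean()
        denom = np.linalg.norm(x) * np.linalg.norm(y)
        scores.append(float(x @ y / denom) if denom > 0 else 0.0)
    return float(np.mean(scores))

rng = np.random.default_rng(1)
base = rng.normal(size=(10, 32))             # fake activations, 10 stimuli
fully_invariant = invariance_score(base, base.copy())
unrelated = invariance_score(base, rng.normal(size=(10, 32)))
print(fully_invariant > unrelated)
```

Applied layer by layer, such a score traces how invariance grows with network depth, which is the quantity compared against the recorded neural populations.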

“Working memory”