ELSC Special Seminar: Dr. Daniel Soudry, Wednesday 01/02 at 16:00

February 1, 2017

You are cordially invited to the lecture given by:

Dr. Daniel Soudry

Department of Statistics, Columbia University

On the topic of:

“Inferring activity, connectivity and learning rules from neural data”

The lecture will be held on Wednesday, February 1, at 16:00

at ELSC

Silberman Bldg., 3rd Wing, 6th Floor,

Edmond J. Safra Campus

Light refreshments at 15:45



Abstract:

Activity inference. Large-scale neural activity is commonly recorded using calcium imaging. This kind of data is typically very noisy, the signals of nearby neurons can be heavily mixed, and the fluorescence transients are much slower than the underlying spikes. To overcome this, we develop automated extraction methods that simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. These methods, based on constrained non-negative matrix factorization, are applied to various datasets (e.g., whole-brain zebrafish imaging), improving over the previous state of the art.
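For readers who want a concrete picture of the factorization step, here is a minimal sketch of plain non-negative matrix factorization on simulated data. It is not the constrained NMF pipeline from the talk (which adds spatial and temporal constraints and deconvolution of the calcium dynamics); the dimensions, the Frobenius objective, and the multiplicative updates are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the matrix-factorization idea behind calcium-imaging demixing:
# Y (pixels x frames) ~ A (pixels x K) @ C (K x frames), with A holding spatial
# footprints and C the temporal traces. This uses plain multiplicative updates
# on a Frobenius objective; the constrained NMF described in the abstract adds
# constraints and deconvolution that are omitted here.

rng = np.random.default_rng(0)
n_pixels, n_frames, K = 400, 500, 5

# Simulated ground truth: non-negative footprints and traces, plus noise.
A_true = np.maximum(rng.normal(size=(n_pixels, K)), 0)
C_true = np.maximum(rng.normal(size=(K, n_frames)), 0)
Y = A_true @ C_true + 0.1 * rng.random((n_pixels, n_frames))

# Random non-negative initialization.
A = rng.random((n_pixels, K))
C = rng.random((K, n_frames))

eps = 1e-9
for _ in range(200):
    # Multiplicative updates (Lee & Seung) keep A and C non-negative.
    C *= (A.T @ Y) / (A.T @ A @ C + eps)
    A *= (Y @ C.T) / (A @ C @ C.T + eps)

print("relative residual:", np.linalg.norm(Y - A @ C) / np.linalg.norm(Y))
```

The multiplicative form is convenient here because non-negativity is preserved automatically at every step, with no explicit projection needed.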

 

Connectivity inference. Inferring neural connectivity from activity data is a key challenge in statistical neuroscience. However, unobserved “common inputs” to the observed neurons typically prevent consistent estimation of network connectivity. To alleviate this problem, we develop the first scalable method, based on the expected log-likelihood approximation, to consistently estimate generalized linear model-based neural connectivity from highly sub-sampled “Shotgun” observations of neural network activity, in which only a small part of the network is observed at a time (e.g., 10%). We demonstrate numerically that our method works efficiently on a simulated network with thousands of neurons and highly sub-sampled data, running orders of magnitude faster than previous inference approaches for fully observed data.
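To make the “Shotgun” observation scheme concrete, the sketch below uses a linear-Gaussian network as a simple stand-in for the GLM setting: a random 10% of neurons is observed at each time step, and pairwise moments are accumulated only when both neurons of a pair happen to be visible. The linear dynamics, the moment-matching estimator, and all parameter values are assumptions for illustration, not the expected log-likelihood method itself.

```python
import numpy as np

# Toy "shotgun" connectivity sketch with linear dynamics:
#   x[t+1] = W @ x[t] + noise.
# Only a random fraction of neurons is observed per time step, so each
# pairwise moment is estimated from the subset of times when both neurons
# in the pair were simultaneously visible.

rng = np.random.default_rng(1)
N, T, frac = 50, 200_000, 0.1

W = 0.9 * rng.normal(size=(N, N)) / np.sqrt(N)   # ground-truth weights
x = np.zeros(N)

Sxy = np.zeros((N, N)); nxy = np.zeros((N, N))   # sums/counts for E[x[t+1] x[t]^T]
Sxx = np.zeros((N, N)); nxx = np.zeros((N, N))   # sums/counts for E[x[t] x[t]^T]

for _ in range(T):
    x_next = W @ x + rng.normal(size=N)
    mt  = (rng.random(N) < frac).astype(float)   # who is visible at t
    mt1 = (rng.random(N) < frac).astype(float)   # who is visible at t+1
    # Accumulate moments only over simultaneously observed pairs.
    Sxy += np.outer(x_next * mt1, x * mt); nxy += np.outer(mt1, mt)
    Sxx += np.outer(x * mt, x * mt);       nxx += np.outer(mt, mt)
    x = x_next

# Normalize each entry by its observation count, then solve for W using
# E[x[t+1] x[t]^T] = W E[x[t] x[t]^T].
W_hat = (Sxy / np.maximum(nxy, 1)) @ np.linalg.inv(Sxx / np.maximum(nxx, 1))
print("correlation with truth:", np.corrcoef(W.ravel(), W_hat.ravel())[0, 1])
```

Because the observation mask is independent of the activity, each per-entry average converges to the corresponding full-data moment, which is the intuition behind recovering the whole weight matrix from partial views; the actual GLM method in the abstract handles spiking nonlinearities that this linear toy ignores.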

 

Learning rule inference. Neural connectivity is constantly updated via local learning rules, and inferring these rules is a key aspect of any neural network model. We investigated the learning rules of “grid cells”, a type of neuron in the entorhinal cortex that exhibits peculiar hexagonal, grid-like firing patterns (with respect to the animal's location) and has received much attention recently. We show that a simple learning rule (a rectified Oja rule) can reproduce the observed hexagonal firing patterns, and we provide an analytical explanation of this phenomenon.
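As an illustration of what a rectified Oja rule looks like in practice, the toy simulation below applies the standard Oja update followed by clipping the weights at zero. The input statistics here are arbitrary assumptions; in the grid-cell setting the inputs would be spatially modulated (e.g., place-cell-like) activity along the animal's trajectory, which this sketch does not model.

```python
import numpy as np

# Minimal sketch of a rectified Oja learning rule:
#   dw = eta * y * (x - y * w),  with y = w . x,
# followed by a rectification step that clips the weights at zero.

rng = np.random.default_rng(2)
n_inputs, n_steps, eta = 100, 50_000, 1e-3

# Inputs with some correlation structure (chosen arbitrarily for illustration).
M = rng.normal(size=(n_inputs, n_inputs)) / np.sqrt(n_inputs)
w = 0.1 * np.abs(rng.normal(size=n_inputs))

for _ in range(n_steps):
    x = np.maximum(M @ rng.normal(size=n_inputs), 0)  # non-negative input rates
    y = w @ x                                         # linear output rate
    w += eta * y * (x - y * w)                        # standard Oja update
    w = np.maximum(w, 0)                              # rectification: w >= 0

print("final weight norm:", np.linalg.norm(w))  # Oja-type updates keep the norm bounded
```

Without the rectification step, Oja's rule converges toward the principal component of the input statistics; the clipping constrains the learned weights to be non-negative, as synaptic weights of a fixed sign would be.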