I use machine-learning-based methods to quantify how EEG encodes acoustic features and perceptual biases when people listen to continuous sounds. For the ELSC-SWC fellowship, I will study how EEG encodes sound statistics and represents short- and long-term perceptual biases. I will also explore how these effects may differ in autistic individuals.
Before joining Hebrew University, I was a postdoctoral researcher working with Edmund Lalor at the University of Rochester and Trinity College Dublin. I received a BS in Biomedical Engineering from the University of Rochester and a PhD from the Harvard-MIT Program in Speech and Hearing Bioscience and Technology. My thesis work, advised by Bertrand Delgutte at Massachusetts Eye and Ear in Boston, focused on the neural encoding of spatially moving sounds in the midbrain.