If you are at NeurIPS I encourage you to check out Gasser's poster showing his ongoing work on models of speech perception.
06.12.2025 19:45 · @joshhmcdermott.bsky.social
Working to understand how humans and machines hear. Prof at MIT; director of Lab for Computational Audition. https://mcdermottlab.mit.edu/
Excited to announce that our latest paper is now out: www.cell.com/neuron/fullt...
Here we uncover a simple cortical code for loudness and leverage it to reverse sound hypersensitivity disorders in mice.
#neuroskyence
Two weeks left to apply! MIT's Department of Brain and Cognitive Sciences is seeking a tenure-track Assistant Professor, working with nonhuman animals in some way. Application deadline is Dec. 1. Full posting: academicjobsonline.org/ajo/jobs/30586
18.11.2025 15:09 · (Image: skyline of Madison, WI)
I am looking for a POSTDOC, LAB MANAGER/TECH and GRAD STUDENTS to join my new lab in beautiful Madison, WI.
We study how our brains perceive and represent the physical world around us using behavioral, computational, and neuroimaging methods.
paulunlab.psych.wisc.edu
#VisionScience #NeuroSkyence
If you are at APAN today, check out these posters from members of our lab:
1. Cross-culturally shared sensitivity to harmonic structure underlies some aspects of pitch perception - Malinda McPherson-McNato
58. Optimized models of uncertainty explain human confidence in auditory perception - Lakshmi Govindarajan
64. Source-location binding errors in auditory scene perception - Sagarika Alavilli
82. A model of continuous speech recognition reveals the role of context in human speech perception - Gasser Elbanna
96. Machine learning models of hearing demonstrate the limits of attentional selection of speech heard through cochlear implants - Annesya Banerjee
Lakshmi happens to be on the job market right now, so I will end by recommending that everyone try to hire him. He is a wonderful colleague, and I expect he will do many other important things. (end)
09.11.2025 21:34 · Added bonus: expressing stimulus-dependent uncertainty enables models that perform regression (i.e., yielding continuous-valued outputs), which has previously not worked very well. (15/n)
Lots more in the paper. The approach is applicable to any perceptual estimation problem. We hope it will enable the study of confidence to be extended to more realistic conditions, via models that can operate on images or sounds. (14/n)
He measured human confidence for pitch, and found that confidence was higher for conditions with lower discrimination thresholds. The model reproduced this general trend. (13/n)
Lakshmi used the same framework to build models of pitch perception that represent uncertainty. The models generate a distribution over fundamental frequency. (12/n)
By contrast, simulating bets using the softmax distribution of a standard classification-based neural network does not yield human-like confidence, presumably because the distribution is not incentivized to have correct uncertainty. (11/n)
The model can also be used to select natural sounds whose localization is certain or uncertain. When these sounds are presented to listeners, people place higher bets on the sounds with low model uncertainty, and vice versa. (10/n)
The model replicates patterns of localization accuracy (like previous models) but also replicates the dependence of confidence on conditions. Here confidence is lower for sounds with narrower spectra, and at peripheral locations: (9/n)
To simulate betting behavior from the model, he mapped a measure of the model posterior spread to a bet (in cents). (8/n)
Lakshmi then tested whether the model's uncertainty was predictive of human confidence judgments. He ran experiments in which people localized sounds and then placed bets on their localization judgment: (7/n)
The model was trained on spatial renderings of lots of natural sounds in lots of different rooms. Once trained, it produces narrow posteriors for some sounds, and broad posteriors for others: (6/n)
He first applied this idea to sound localization. The model takes binaural audio as input and estimates parameters of a mixture distribution over a sphere. Distributions can be narrow, broad, or multi-modal, depending on the stimulus. (5/n)
Lakshmi realized that models could be trained to output the parameters of probability distributions, and that optimizing them with a log-likelihood loss incentivizes the model to represent uncertainty correctly. (4/n)
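A minimal toy sketch of that log-likelihood idea (my own illustration, not the paper's model): when a model reports a Gaussian mean and spread and is scored by negative log-likelihood, the loss is minimized when the reported spread matches the true spread of the errors. Here a sweep over candidate spreads stands in for training:

```python
import numpy as np

def gaussian_nll(y, mu, sigma):
    """Mean negative log-likelihood of y under N(mu, sigma^2)."""
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y - mu)**2 / (2 * sigma**2))

rng = np.random.default_rng(0)
y = rng.normal(loc=0.0, scale=2.0, size=10_000)  # observations, true spread 2
mu = np.zeros_like(y)                            # assume the mean is known

# Sweep candidate spreads: the NLL is minimized near the true spread,
# so a model trained with this loss is pushed toward honest uncertainty.
sigmas = np.linspace(0.5, 4.0, 200)
losses = [gaussian_nll(y, mu, s) for s in sigmas]
best = sigmas[int(np.argmin(losses))]
```

Under- or over-stating sigma both raise the loss, which is exactly the incentive the post describes; a cross-entropy classifier has no such pressure on its confidence.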
Standard neural networks are not suitable for this, for several reasons — in part because cross-entropy loss is known to induce over-confidence. (3/n)
Uncertainty is inevitable in perception. It seems like it would be useful for people to explicitly represent it. But it has been hard to study in realistic conditions, as we haven't had stimulus-computable models. (2/n)
New pre-print from our lab, by Lakshmi Govindarajan with help from Sagarika Alavilli, introducing a new type of model for studying sensory uncertainty. www.biorxiv.org/content/10.1...
Here is a summary. (1/n)
if you see this post, your actions are:
- if you have a spare buck, give it to Wikipedia, then repost this
- if you don't have a spare buck, just repost
your action is mandatory for the world's best source of information to survive
Excited that this work discovering cross-species signatures of stabilizing foot placement control is now out in PNAS!
pnas.org/doi/10.1073/...
@antoinecomite.bsky.social
Want to make publication-ready figures come straight from Python without any manual editing? Fed up with axis labels being unreadable during your presentations? Follow this short tutorial, including code examples!
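For a flavor of the approach before the thread: a generic matplotlib sketch (my own minimal example, not necessarily the tutorial's exact settings) that sets readable defaults once via rcParams, so every exported figure comes out presentation-ready with no manual retouching:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted figure export
import matplotlib.pyplot as plt

# Set readable defaults once; all subsequent figures inherit them.
plt.rcParams.update({
    "font.size": 14,            # base size; ticks and text inherit from this
    "axes.labelsize": 16,       # axis labels large enough for a slide
    "axes.spines.top": False,   # drop the unused spines
    "axes.spines.right": False,
    "savefig.dpi": 300,         # print-quality raster output
    "savefig.bbox": "tight",    # no clipped labels in the exported file
})

fig, ax = plt.subplots(figsize=(4, 3))
ax.plot([0, 1, 2], [0, 1, 4], marker="o")
ax.set_xlabel("Stimulus level")
ax.set_ylabel("Response")
fig.savefig("figure1.pdf")      # vector output scales cleanly in a paper
```

Saving to PDF (or SVG) keeps text as real text, so fonts stay crisp at any zoom level.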
16.10.2025 08:26 · Excited to share that I'm joining WashU in January as an Assistant Prof in Psych & Brain Sciences!
I'm also recruiting grad students to start next September - come hang out with us! Details about our lab here: www.deckerlab.com
Reposts are very welcome! Please help spread the word!
Brown's Department of Cognitive & Psychological Sciences is hiring a tenure-track Assistant Professor working in the area of AI and the Mind (start July 1, 2026). Apply by Nov 8, 2025: apply.interfolio.com/173939
#AI #CognitiveScience #AcademicJobs #BrownUniversity
Variance partitioning is used to quantify the overlap between two models. Over the years, I have found that this can be a very confusing and misleading concept. So we finally decided to write a short blog post to explain why.
@martinhebart.bsky.social @gallantlab.org
diedrichsenlab.org/BrainDataSci...
New paper! Recomposer allows editing sound events within complex scenes based on textual descriptions and event roll representations. And we discuss the details that matter!
Work by the Sound Understanding folks
@GoogleDeepMind
arxiv.org/abs/2509.05256