@mkachlicka.bsky.social
Postdoctoral Researcher @unibe.ch https://neuro.inf.unibe.ch & Honorary Research Fellow @birkbeckpsychology.bsky.social @audioneurolab.bsky.social | speech + sounds + brains 🧠 cogsci, audio, neuroimaging, language, methods https://mkachlicka.github.io

These results suggest that perceptual strategies are shaped by the reliability of encoding at early stages of the auditory system. 🧵5/5
07.02.2026 08:56

We find that neural tracking of pitch is linked to pitch cue weighting during word emphasis and lexical stress perception. Specifically, higher pitch weighting is linked to increased tracking of pitch at early latencies within the neural response, from 15 to 55 ms. 🧵4/5
07.02.2026 08:55

Here, we tested the hypothesis that the reliability of early auditory encoding of a given dimension is linked to the weighting placed on that dimension during speech categorization. We tested this in 60 first language speakers of Mandarin learning English as a second language. 🧵3/5
07.02.2026 08:55

Linguistic categories are conveyed in speech by many acoustic cues at the same time, but not all of them are equally important. There are clear and replicable individual differences in how people use those cues during speech perception, but the underlying mechanisms are unclear. 🧵2/5
07.02.2026 08:55

🚨New paper🚨 about mechanisms underlying individual differences in cue weighting doi.org/10.1162/IMAG... from fun times at @audioneurolab.bsky.social @birkbeckpsychology.bsky.social with @ashleysymons.bsky.social, Kazuya Saito, Fred Dick, and @adamtierney.bsky.social #psychscisky #neuroskyence 🧵1/5
07.02.2026 08:54

Our project on aperiodic neural activity during sleep, led by the wonderful @mosameen.bsky.social, is now published!
This project shows how time-resolved measures of aperiodic neural activity track changes in sleep stage + lots of other analyses in iEEG & EEG!
www.nature.com/articles/s44...
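For readers who want a concrete picture of what a time-resolved aperiodic measure can be, here is a minimal illustrative sketch in Python (not the paper's actual pipeline): estimate the power spectrum in sliding windows and fit the 1/f slope in log-log space. The sampling rate, window length, frequency range, and simulated signal are all placeholder choices.

```python
# Illustrative sketch (not the paper's pipeline): a time-resolved aperiodic
# exponent, computed by fitting the 1/f slope of the power spectrum in
# sliding windows. All parameters and the toy signal are placeholders.
import numpy as np
from scipy.signal import welch

def aperiodic_exponent(segment, fs, fmin=2.0, fmax=40.0):
    """Fit log10(power) ~ slope * log10(freq); return -slope as the exponent."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), int(2 * fs)))
    keep = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(psd[keep]), 1)
    return -slope  # larger exponent = steeper (more 1/f-like) spectrum

def sliding_exponent(signal, fs, win_s=30.0, step_s=10.0):
    """One aperiodic exponent per overlapping window."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, len(signal) - win + 1, step)
    return np.array([aperiodic_exponent(signal[s:s + win], fs) for s in starts])

# Toy usage on a 1/f-like (Brownian) signal standing in for EEG/iEEG.
fs = 200.0
rng = np.random.default_rng(0)
toy = np.cumsum(rng.standard_normal(int(600 * fs)))
print(sliding_exponent(toy, fs)[:5])
```
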
Together, these results suggest that the precision with which people perceive and remember sound patterns plays a major role in how well they understand accented speech, and that auditory training may help listeners who struggle. 🧵5/5
03.02.2026 09:44

Native English speakers who were better at understanding the accent were also better at detecting pitch differences, remembering sound patterns, and attending to pitch. Musical training helped too. Better speech perception was also linked to stronger neural encoding of speech harmonics. 🧵4/5
03.02.2026 09:44 β π 1 π 0 π¬ 1 π 0In this study, we asked L1 English speakers to listen to the prosody of Mandarin-accented English. We found that some listeners are better at understanding accented speech than others. π§΅3/5
03.02.2026 09:42

Non-native speakers of English speak with varying degrees of accent. So far, research has focused more on factors that help learners communicate more effectively. But what about the listeners? Are there factors that make it easier for native listeners to understand accented speech? 🧵2/5
03.02.2026 09:42

🚨New paper🚨 about accented speech perception doi.org/10.1016/j.ba... by the brilliant Amir Ghooch Kanloo (an MSc student at the time!), accompanied by myself, Kazuya Saito and @adamtierney.bsky.social, from fun times at @audioneurolab.bsky.social @birkbeckpsychology.bsky.social 🧵1/5
03.02.2026 09:40

"The Human Insula Reimagined: Single Neurons Respond to Simple Sounds during Passive Listening"
Single neuron activity in the insula
#iEEG
in #JNeurosci @sfnjournals.bsky.social
www.jneurosci.org/content/46/4...
New work from our lab showing that the human frontal lobe receives fast, low-level speech information in **parallel** with early speech areas!
🧠 🗣️
doi.org/10.1038/s414...
"Human cortical dynamics of auditory word form encoding"
by the Chang lab @changlabucsf.bsky.social, published in @cp-neuron.bsky.social
www.cell.com/neuron/fullt...
#iEEG #ECOG
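To make the "parallel" claim concrete: one common way such timing questions are assessed in iEEG is to compare when speech-evoked high-gamma activity first rises at an early auditory site versus a frontal site. The toy sketch below shows the comparison logic only; it is not the paper's analysis, and all signals and numbers are made up.

```python
# Toy sketch of the latency comparison behind a "parallel routing" claim:
# when does speech-evoked high-gamma activity first rise at a temporal vs.
# a frontal electrode? The signals, onsets, and threshold are made up.
import numpy as np

def onset_latency_ms(response, fs, frac=0.25):
    """Time at which the response first reaches `frac` of its peak."""
    first = np.argmax(response >= frac * response.max())
    return 1000.0 * first / fs

fs = 1000.0
t = np.arange(600) / fs
temporal = np.exp(-((t - 0.250) ** 2) / (2 * 0.02 ** 2))        # toy response
frontal = 0.6 * np.exp(-((t - 0.255) ** 2) / (2 * 0.02 ** 2))    # toy response

# Comparable onsets (rather than frontal lagging by a whole processing
# stage) are what a fast, parallel route to frontal cortex would predict.
print(onset_latency_ms(temporal, fs), onset_latency_ms(frontal, fs))
```
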
#NeuroJobs
01.12.2025 17:11

If you haven't, you should, it's brilliant!
18.11.2025 10:03

New preprint by Mika Nash and others on how selective attention affects neural tracking of prediction during ecologically valid music listening: www.biorxiv.org/content/10.1...
04.11.2025 16:09

As it's hiring season again, I'm resharing the NeuroJobs feed. Add #NeuroJobs to your post if you're recruiting or looking for an RA, PhD, Postdoc, or faculty position in Neuro or an adjacent field.
bsky.app/profile/did:...
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.
In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.
Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
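For readers unfamiliar with the setup, the core idea of a causal model over speech is next-token prediction on tokens derived from audio. The sketch below is a generic toy (a tiny GRU over a hypothetical audio tokenizer's output), not AuriStream's actual architecture; see the paper for the real model.

```python
# Generic sketch of the *idea* only (a causal, next-token model over
# speech-derived tokens). This is NOT AuriStream's architecture: the audio
# tokenizer is assumed to exist, and the model here is just a tiny GRU.
import torch
import torch.nn as nn

class CausalSpeechLM(nn.Module):
    def __init__(self, vocab_size=256, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)  # strictly causal in time
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):            # tokens: (batch, time) integer ids
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)          # next-token logits at every step

# Toy training step: predict token t+1 from tokens up to t.
model = CausalSpeechLM()
tokens = torch.randint(0, 256, (8, 100))  # placeholder "audio tokens"
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, 256), tokens[:, 1:].reshape(-1))
loss.backward()
print(float(loss))
```
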
My PhD student Yue Li is looking for L1 speakers of Chinese and Spanish for her online English experiment! Please see below for details!
14.08.2025 15:01

Can you think of examples of books, films, TV shows, etc. featuring earworms or other types of imagined music? Please share them here! musicinmyhead.org/inner-music-...
06.08.2025 19:45

🎧 Join us for some fun listening tasks!
🎧 Researchers at the University of Manchester want to recruit normal-hearing volunteers aged 18–50 who are native English speakers to take part in research that will help us understand different aspects of listening in noise.
#hearinghealth #research
A ✨bittersweet✨ moment: after 5 years at UCL, my final first-author project with @smfleming.bsky.social is ready to read as a preprint! 🥲
25.07.2025 09:23

Nice review, but why "controversies"? Evidence isn't controversial. Like "epiphenomenon," it often just means "doesn't fit my hypothesis." That's ad hominem science.
Brain rhythms in cognition -- controversies and future directions
arxiv.org/abs/2507.15639
#neuroscience
Delighted to have our newest paper out in #JNeurosci! We looked at how much a single cell contributes to an auditory-evoked EEG signal. Big thanks to my co-authors Ira Kraemer, Christine Köppl, Catherine Carr and Richard Kempter (all not on Bsky). Here's how: (1/13)
bsky.app/profile/sfnj...
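As a rough sense of scale for the question, with entirely hypothetical numbers (not results from the paper): if an evoked EEG component of amplitude A is generated by roughly N synchronously active cells, the average per-cell contribution is on the order of A / N.

```python
# Back-of-the-envelope version of the question (numbers entirely
# hypothetical, not from the paper): if an evoked EEG component has
# amplitude A and roughly N cells contribute synchronously, the average
# single-cell contribution is on the order of A / N.
evoked_amplitude_uv = 2.0        # hypothetical evoked amplitude, microvolts
n_synchronous_cells = 200_000    # hypothetical number of contributing cells
per_cell_pv = evoked_amplitude_uv / n_synchronous_cells * 1e6  # 1 uV = 1e6 pV
print(f"~{per_cell_pv:.0f} picovolts per cell")
```
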
Children are incredible language-learning machines. But how do they do it? Our latest paper, just published in TICS, synthesizes decades of evidence to propose four components that must be built into any theory of how children learn language. 1/
www.cell.com/trends/cogni... @mpi-nl.bsky.social
🚨 New preprint 🚨
Prior work has mapped how the brain encodes concepts: if you see fire and smoke, your brain will represent the fire (hot, bright) and the smoke (gray, airy). But how do you encode features of the fire-smoke relation? We analyzed fMRI with embeddings extracted from LLMs to find out 🧵
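One standard way to "analyze fMRI with LLM embeddings" is an encoding model: ridge-regress each voxel's responses on the stimulus embeddings and score prediction accuracy on held-out data. The sketch below illustrates that generic recipe with random placeholder data; it is not necessarily the authors' exact pipeline.

```python
# Sketch of a generic encoding-model analysis (not necessarily the authors'
# exact pipeline): ridge-regress voxel responses on per-stimulus LLM
# embeddings and score held-out prediction per voxel. Data are random
# placeholders with the right shapes.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_stim, n_dim, n_vox = 200, 64, 500
X = rng.standard_normal((n_stim, n_dim))                  # embeddings
W = rng.standard_normal((n_dim, n_vox))
Y = X @ W + rng.standard_normal((n_stim, n_vox))          # "voxel" responses

scores = np.zeros(n_vox)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    pred = Ridge(alpha=10.0).fit(X[train], Y[train]).predict(X[test])
    for v in range(n_vox):   # correlation between predicted and observed
        scores[v] += np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] / 5
print("mean cross-validated voxel correlation:", round(scores.mean(), 3))
```
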
In what way is the frontoparietal network domain-general? We show it uses the same neural resources to represent rules in auditory and visual tasks, but does so with independent codes: doi.org/10.1162/IMAG... Thanks to A Rich, D Moerel, @linateichmann.bsky.social, J Duncan and @alexwoolgar.bsky.social.
24.06.2025 09:27

What makes humans similar or different to AI? In a paper out in @natmachintell.nature.com led by @florianmahner.bsky.social & @lukasmut.bsky.social, w/ Umut Güçlü, we took a deep look at the factors underlying their representational alignment, with surprising results.
www.nature.com/articles/s42...
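"Representational alignment" is often operationalized by correlating representational dissimilarity matrices (RDMs) computed from two systems over the same objects. The sketch below shows that generic recipe with random placeholder data; the paper's own analysis may differ.

```python
# Sketch of one common operationalization of representational alignment
# (the paper's own analysis may differ): correlate the representational
# dissimilarity matrices (RDMs) of two systems over the same objects.
# Features below are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_objects = 50
human_feats = rng.standard_normal((n_objects, 20))    # e.g., human embedding
dnn_feats = rng.standard_normal((n_objects, 512))     # e.g., DNN features

rdm_human = pdist(human_feats, metric="correlation")  # condensed upper triangle
rdm_dnn = pdist(dnn_feats, metric="correlation")
rho, _ = spearmanr(rdm_human, rdm_dnn)
print("RDM alignment (Spearman rho):", round(rho, 3))
```
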
Music is universal. It varies more within than between societies and can be described by a few key dimensions. That's because brains operate by using the raw materials of music: oscillations (brainwaves).
www.science.org/doi/10.1126/...
#neuroscience