
Magdalena Kachlicka

@mkachlicka.bsky.social

Postdoctoral Researcher @unibe.ch https://neuro.inf.unibe.ch & Honorary Research Fellow @birkbeckpsychology.bsky.social @audioneurolab.bsky.social | speech + sounds + brains 🧠 cogsci, audio, neuroimaging, language, methods https://mkachlicka.github.io

874 Followers  |  1,324 Following  |  43 Posts  |  Joined: 19.09.2023

Latest posts by mkachlicka.bsky.social on Bluesky

These results suggest that perceptual strategies are shaped by the reliability of encoding at early stages of the auditory system. 🧡5/5

07.02.2026 08:56 | 👍 1    🔁 0    💬 0    📌 0

We find that neural tracking of pitch is linked to pitch cue weighting during word emphasis and lexical stress perception. Specifically, higher pitch weighting is linked to increased tracking of pitch at early latencies within the neural response, from 15 to 55 ms. 🧡4/5

07.02.2026 08:55 | 👍 0    🔁 0    💬 1    📌 0
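The early-latency tracking measure described above (15 to 55 ms) is the kind of quantity that typically comes out of a temporal response function (TRF) analysis relating an acoustic regressor to EEG. Purely as an illustration, and not the authors' pipeline, a minimal lagged ridge regression on synthetic data might look like this (sampling rate, lag window, and penalty are all assumed values):

```python
# Illustrative TRF sketch on synthetic data (not the paper's code): relate a
# pitch regressor to EEG via lagged ridge regression, then summarize
# "tracking" as the TRF magnitude at early lags (15-55 ms).
import numpy as np

fs = 500                                    # assumed sampling rate (Hz)
n = fs * 60                                 # one minute of synthetic data
rng = np.random.default_rng(0)
pitch = rng.standard_normal(n)              # stand-in for a pitch (F0) time course
kernel = np.hanning(25)                     # ~50 ms simulated neural response
eeg = np.convolve(pitch, kernel)[:n] + rng.standard_normal(n)

# Lagged design matrix (0-200 ms) and ridge solution for the TRF weights.
lags = np.arange(0, int(0.2 * fs))
X = np.column_stack([np.roll(pitch, lag) for lag in lags])   # edge wraparound ignored
lam = 1e2                                   # arbitrary ridge penalty
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

# Early-latency tracking: mean TRF magnitude in the 15-55 ms window.
early = (lags / fs >= 0.015) & (lags / fs <= 0.055)
print("early-lag TRF magnitude:", np.abs(trf[early]).mean())
```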

Here, we tested the hypothesis that the reliability of early auditory encoding of a given dimension is linked to the weighting placed on that dimension during speech categorization. We tested this in 60 first language speakers of Mandarin learning English as a second language. 🧡3/5

07.02.2026 08:55 | 👍 1    🔁 0    💬 1    📌 0

Linguistic categories are conveyed in speech by many acoustic cues at the same time, but not all of them are equally important. There are clear and replicable individual differences in how people use those cues during speech perception, but the underlying mechanisms are unclear. 🧡2/5

07.02.2026 08:55 | 👍 1    🔁 0    💬 1    📌 0
Early neural encoding of pitch drives cue weighting during speech perception | Abstract: Linguistic categories are conveyed in speech by several acoustic cues simultaneously, so listeners need to decide how to prioritize different potential sources of information. There are robu...

🚨New paper🚨 about mechanisms underlying individual differences in cue weighting doi.org/10.1162/IMAG... from fun times at @audioneurolab.bsky.social @birkbeckpsychology.bsky.social with @ashleysymons.bsky.social, Kazuya Saito, Fred Dick, and @adamtierney.bsky.social #psychscisky #neuroskyence 🧡1/5

07.02.2026 08:54 | 👍 12    🔁 3    💬 1    📌 0
Temporally resolved analyses of aperiodic features track neural dynamics during sleep - Communications Psychology | Sleep involves dynamic changes in brain activity that unfold over time, reflected in the brain's aperiodic EEG patterns. Incorporating the spectral 'knee' (a bend in the EEG power spectrum) reveals stag...

📜🎉 Our project on aperiodic neural activity during sleep, led by the wonderful @mosameen.bsky.social, is now published!

This project shows how time-resolved measures of aperiodic neural activity track changes of sleep stages + lots of other analyses in iEEG & EEG!

www.nature.com/articles/s44...

20.11.2025 17:07 | 👍 21    🔁 3    💬 1    📌 0
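For readers unfamiliar with the spectral 'knee' mentioned in the card above: the aperiodic component of an EEG power spectrum is commonly parameterized (for example in the specparam/FOOOF framework) as log10 P(f) = offset - log10(knee + f^exponent). The sketch below fits that form to made-up synthetic data; it is only meant to show what the knee parameter is, not to reproduce the paper's analysis:

```python
# Minimal sketch on synthetic data: fit the aperiodic 1/f component of a
# power spectrum with a "knee" (bend), using the parameterization
#   log10 P(f) = offset - log10(knee + f**exponent)
import numpy as np
from scipy.optimize import curve_fit

def aperiodic_knee(f, offset, knee, exponent):
    """Aperiodic spectrum in log10 power, with a knee parameter."""
    return offset - np.log10(knee + f**exponent)

freqs = np.linspace(1, 100, 200)
clean = aperiodic_knee(freqs, offset=1.0, knee=15.0, exponent=2.0)
spectrum = clean + 0.05 * np.random.default_rng(1).standard_normal(freqs.size)

# In a time-resolved analysis, this fit would be repeated on spectra from
# successive short windows to track changes across sleep stages.
(offset, knee, exponent), _ = curve_fit(aperiodic_knee, freqs, spectrum, p0=[1.0, 10.0, 2.0])
knee_freq = knee ** (1.0 / exponent)        # the bend expressed as a frequency in Hz
print(f"offset={offset:.2f}, exponent={exponent:.2f}, knee frequency ~ {knee_freq:.1f} Hz")
```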

Together, these results suggest that the precision with which people perceive and remember sound patterns plays a major role in how well they understand accented speech, and that auditory training may help listeners who struggle. 🧡5/5

03.02.2026 09:44 | 👍 1    🔁 0    💬 0    📌 0

Native English speakers who were better at understanding the accent were also better at detecting pitch differences, remembering sound patterns, and attending to pitch. Musical training also helped. Better speech perception was also linked to stronger neural encoding of speech harmonics. 🧡4/5

03.02.2026 09:44 | 👍 1    🔁 0    💬 1    📌 0

In this study, we asked L1 English speakers to listen to the prosody of Mandarin-accented English. We found that some listeners are better at understanding accented speech than others. 🧡3/5

03.02.2026 09:42 | 👍 1    🔁 0    💬 1    📌 0

Non-native speakers of English speak with varying degrees of accent. So far, research has focused mainly on the speakers: the factors that help learners communicate more effectively. But what about the listeners? Are there factors that make it easier for native listeners to understand accented speech? 🧡2/5

03.02.2026 09:42 | 👍 1    🔁 0    💬 1    📌 0

🚨New paper🚨 about accented speech perception doi.org/10.1016/j.ba... by brilliant (MSc student at the time!) Amir Ghooch Kanloo accompanied by myself, Kazuya Saito and @adamtierney.bsky.social from fun times at @audioneurolab.bsky.social @birkbeckpsychology.bsky.social 🧡1/5

03.02.2026 09:40 | 👍 11    🔁 4    💬 1    📌 0
The Human Insula Reimagined: Single Neurons Respond to Simple Sounds during Passive Listening | The insula is critical for integrating sensory information from the body with that arising from the environment. Although previous studies suggested that posterior insula is sensitive to sounds, these...

"The Human Insula Reimagined: Single Neurons Respond to Simple Sounds during Passive Listening"

Single neuron activity in the insula
#iEEG

in #JNeurosci @sfnjournals.bsky.social

www.jneurosci.org/content/46/4...

29.01.2026 09:58 | 👍 12    🔁 4    💬 0    📌 0
Parallel encoding of speech in human frontal and temporal lobes - Nature Communications | Whether high-order frontal lobe areas receive raw speech input in parallel with early speech areas in the temporal lobe is unclear. Here, the authors show that frontal lobe areas get fast low-level sp...

New work from our lab showing the human frontal lobe receives fast, low-level speech information in **parallel** with early speech areas!

🧠🗣️

doi.org/10.1038/s414...

22.01.2026 02:27 | 👍 13    🔁 4    💬 0    📌 0
Human cortical dynamics of auditory word form encoding | Zhang, Leonard, et al. use high-density direct human brain recordings to reveal how the superior temporal gyrus (STG) detects word boundaries in natural speech and encodes whole auditory word forms. N...

"Human cortical dynamics of auditory word form encoding"

by the Chang lab @changlabucsf.bsky.social, published in @cp-neuron.bsky.social

www.cell.com/neuron/fullt...

#iEEG #ECOG

08.01.2026 16:14 | 👍 9    🔁 1    💬 0    📌 0

#NeuroJobs

01.12.2025 17:11 | 👍 0    🔁 0    💬 0    📌 0

If you haven't, you should; it's brilliant!

18.11.2025 10:03 | 👍 7    🔁 2    💬 0    📌 0
Neural tracking of melodic prediction is pre-attentive | Music's ability to modulate arousal and manipulate emotions relies upon formation and violation of predictions. Music is often used to modulate arousal and mood while individuals focus on other tasks,...

New preprint by Mika Nash and others on how selective attention affects neural tracking of prediction during ecologically valid music listening: www.biorxiv.org/content/10.1...

04.11.2025 16:09 | 👍 3    🔁 1    💬 0    📌 0

As it's hiring season again, I'm resharing the NeuroJobs feed. Add #NeuroJobs to your post if you're recruiting or looking for an RA, PhD, Postdoc, or faculty position in Neuro or an adjacent field.

bsky.app/profile/did:...

03.09.2025 15:25 | 👍 46    🔁 28    💬 3    📌 0

Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.

In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.

Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!

19.08.2025 01:12 | 👍 52    🔁 10    💬 1    📌 1

My PhD student Yue Li is looking for L1 speakers of Chinese and Spanish for her online English experiment! Please see below for details!

14.08.2025 15:01 | 👍 12    🔁 25    💬 2    📌 0
Inner Music in Fiction and Biography - The Inner Music and Wellbeing Network | 'Inner music' or 'musical imagery' refers to the music that one hears in one's own head. For example, an 'earworm' is a catchy piece of music that is stuck in one'...

Can you think of examples of books, films, TV shows, etc. featuring earworms or other types of imagined music? Please share them here! musicinmyhead.org/inner-music-...

06.08.2025 19:45 | 👍 4    🔁 6    💬 0    📌 0

🎧 Join us for some fun listening tasks!

🧠 Researchers at the University of Manchester want to recruit normal-hearing volunteers aged 18-50 who are native English speakers to take part in research that will help us understand different aspects of listening in noise.

#hearinghealth #research

23.07.2025 13:11 | 👍 1    🔁 1    💬 0    📌 0

A ✨bittersweet✨ moment – after 5 years at UCL, my final first-author project with @smfleming.bsky.social is ready to read as a preprint! 🥲

25.07.2025 09:23 | 👍 31    🔁 8    💬 2    📌 1

Nice review, but why "controversies"? Evidence isn’t controversial. Like "epiphenomenon," it often just means, "doesn’t fit my hypothesis." That’s ad hominem science.

Brain rhythms in cognition -- controversies and future directions
arxiv.org/abs/2507.15639
#neuroscience

25.07.2025 15:25 | 👍 12    🔁 2    💬 0    📌 0

Delighted to have our newest paper out in #JNeurosci! We looked at how much a single cell contributes to an auditory-evoked EEG signal. Big thanks to my co-authors Ira Kraemer, Christine Köppl, Catherine Carr and Richard Kempter (all not on Bsky). Here's how: (1/13)
bsky.app/profile/sfnj...

28.06.2025 14:18 | 👍 21    🔁 6    💬 3    📌 0
Constructing language: a framework for explaining acquisition | Explaining how children build a language system is a central goal of research in language acquisition, with broad implications for language evolution, adult language processing, and artificial intelli...

Children are incredible language learning machines. But how do they do it? Our latest paper, just published in TICS, synthesizes decades of evidence to propose four components that must be built into any theory of how children learn language. 1/
www.cell.com/trends/cogni... @mpi-nl.bsky.social

27.06.2025 05:19 | 👍 154    🔁 58    💬 9    📌 12

🚨 New preprint 🚨

Prior work has mapped how the brain encodes concepts: If you see fire and smoke, your brain will represent the fire (hot, bright) and smoke (gray, airy). But how do you encode features of the fire-smoke relation? We analyzed fMRI with embeddings extracted from LLMs to find out 🧡

24.06.2025 13:49 | 👍 32    🔁 8    💬 1    📌 2
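The "analyzed fMRI with embeddings extracted from LLMs" step described above is usually an encoding-model analysis: regress voxel responses on stimulus embeddings and score predictions on held-out stimuli. The following is a generic sketch with placeholder arrays and an arbitrary ridge penalty, not the authors' method:

```python
# Generic encoding-model sketch with placeholder data: map LLM-derived
# stimulus embeddings to voxel responses with ridge regression and evaluate
# on held-out stimuli. All shapes and values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_dim, n_vox = 200, 64, 500
embeddings = rng.standard_normal((n_stim, n_dim))     # stand-in for LLM embeddings
true_w = rng.standard_normal((n_dim, n_vox))
bold = embeddings @ true_w + rng.standard_normal((n_stim, n_vox))   # fake voxel responses

train, test = np.arange(150), np.arange(150, 200)
lam = 10.0                                            # arbitrary ridge penalty
Xtr, Ytr = embeddings[train], bold[train]
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_dim), Xtr.T @ Ytr)

# Per-voxel prediction accuracy (Pearson r) on held-out stimuli.
pred = embeddings[test] @ W
r = [np.corrcoef(pred[:, v], bold[test][:, v])[0, 1] for v in range(n_vox)]
print("mean held-out voxel correlation:", float(np.mean(r)))
```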

In what way is the frontoparietal network domain general? We show it uses the same neural resources to represent rules in auditory and visual tasks but does so with independent codes (doi.org/10.1162/IMAG...). Thanks to A Rich, D Moerel, @linateichmann.bsky.social, J Duncan and @alexwoolgar.bsky.social

24.06.2025 09:27 | 👍 13    🔁 5    💬 1    📌 1
Dimensions underlying the representational alignment of deep neural networks with humans - Nature Machine Intelligence | An interpretability framework that compares how humans and deep neural networks process images has been presented. Their findings reveal that, unlike humans, deep neural networks focus more on visual ...

What makes humans similar or different to AI? In a paper out in @natmachintell.nature.com led by @florianmahner.bsky.social & @lukasmut.bsky.social, w/ Umut Güçlü, we took a deep look at the factors underlying their representational alignment, with surprising results.

www.nature.com/articles/s42...

23.06.2025 20:02 | 👍 103    🔁 36    💬 2    📌 3
Universality and diversity in human song | Songs exhibit universal patterns across cultures.

Music is universal. It varies more within than between societies and can be described by a few key dimensions. That’s because brains operate by using the raw materials of music: oscillations (brainwaves).
www.science.org/doi/10.1126/...
#neuroscience

23.06.2025 11:38 | 👍 39    🔁 20    💬 4    📌 1
