Jess Alexander

@je55bot.bsky.social

Linguaphile, data nerd, 🧠 geek. Subrident problem solving and forays into affective neurolinguistics. 🏃‍♀️

32 Followers  |  98 Following  |  20 Posts  |  Joined: 16.11.2024

Latest posts by je55bot.bsky.social on Bluesky

Heading out to @snlmtg.bsky.social to geek out with other neurolinguists this weekend. If you are interested in emotional prosody, speech intelligibility, and/or vocoded speech, come visit my poster (B68) on Friday afternoon! 🧠 🤓

11.09.2025 14:10 · 👍 0    🔁 0    💬 0    📌 0

🚨 Just over a week left to register for the #CNSP2025 Online Workshop (details in post below)! 🚨

Link to the workshop registration form: docs.google.com/forms/d/e/1F...

22.08.2025 10:32 · 👍 6    🔁 7    💬 1    📌 0
GitHub - jessb0t/emoSPIN: project | comparing humans and LLMs on decoding emotional speech in noise

The less background noise, the better humans can understand speech. Some speech-to-text models perform similarly. But what happens when the speaker's voice is imbued with emotion? I was curious, so I did a simple mini investigation. The results surprised me! πŸ€“ github.com/jessb0t/emoSPIN

18.08.2025 15:13 · 👍 0    🔁 0    💬 0    📌 0
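For readers who want to try a comparison along these lines, here is a minimal sketch, not the emoSPIN pipeline itself: run an off-the-shelf speech-to-text model on an emotional utterance mixed with background noise and score its word error rate. The audio file name and reference transcript below are placeholders.

```python
# Minimal sketch, not the emoSPIN pipeline: transcribe a (hypothetical) emotional
# utterance mixed with background noise and score the word error rate.
import whisper          # pip install openai-whisper
from jiwer import wer   # pip install jiwer

model = whisper.load_model("base")                        # small general-purpose ASR model
result = model.transcribe("happy_sentence_in_noise.wav")  # placeholder audio file
hypothesis = result["text"].lower().strip()

reference = "the bus stopped near the old church"         # placeholder reference transcript
print("model heard:", hypothesis)
print("word error rate:", wer(reference, hypothesis))
```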

Looking forward to September! 🤓

31.07.2025 21:13 · 👍 0    🔁 0    💬 0    📌 0

And many more… 🎶

10.06.2025 23:58 · 👍 1    🔁 0    💬 0    📌 0

The deadline has been extended to the 10th of June. There are still a couple of spots available. Apply before it's sold out! EEG/fNIRS/hyperscanning/TRFs/Speech/Music/Ping pong!

05.06.2025 20:29 · 👍 4    🔁 4    💬 0    📌 0

But how loud that background noise is, as well as what kind of emotional state the speaker is currently in, will both play a role in how accurately we understand the words spoken and how accurately we perceive the underlying emotion.

Please reach out if you have any questions about our data! (8/8)

03.06.2025 21:44 · 👍 0    🔁 0    💬 0    📌 0

So what? Well, our daily interactions require us not only to understand what people are saying, but also to intuit how they are feeling so that we respond appropriately. And we usually pull off both these incredible feats in some level of background noise.

03.06.2025 21:44 · 👍 0    🔁 0    💬 1    📌 0

For emotion recognition, we find that background noise induces perceptual biases, causing listeners exposed to higher levels of noise to behave differently than listeners exposed to more moderate noise levels. And the ability to recognize the emotion doesn't seem to help in understanding the words.

03.06.2025 21:43 · 👍 0    🔁 0    💬 1    📌 0

Interestingly, the intelligibility advantage doesn't correlate well with raw acoustic intensity, but rather with how intensity is distributed across different frequency bands.

03.06.2025 21:43 · 👍 0    🔁 0    💬 1    📌 0
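As a rough illustration of what "how intensity is distributed across different frequency bands" can mean in practice, the toy snippet below compares overall RMS intensity with the share of energy falling in a few bands. The band edges and input file are placeholders; this is not the paper's analysis.

```python
# Toy illustration, not the paper's analysis: overall RMS intensity versus the
# share of energy in a handful of frequency bands.
import numpy as np
import soundfile as sf

signal, sr = sf.read("angry_sentence.wav")        # placeholder mono stimulus

# Overall intensity as RMS in dB.
rms_db = 20 * np.log10(np.sqrt(np.mean(signal ** 2)) + 1e-12)
print(f"overall RMS intensity: {rms_db:.1f} dB")

# Energy per band as a fraction of total energy.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
bands = [(0, 500), (500, 1000), (1000, 2000), (2000, 4000), (4000, 8000)]
total = spectrum.sum()
for lo, hi in bands:
    frac = spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
    print(f"{lo:>4}-{hi:<4} Hz: {100 * frac:5.1f}% of energy")
```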

Here, across four different levels of speech-shaped background noise, we find an advantage for high-arousal emotions (angry, happy) relative to neutral for both speech intelligibility and emotion recognition.

03.06.2025 21:43 · 👍 0    🔁 0    💬 1    📌 0
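For anyone unfamiliar with the stimulus jargon, here is a hedged sketch of the general technique the post names: generating "speech-shaped" noise (noise whose long-term spectrum matches the speech) and mixing it in at several signal-to-noise ratios. The SNR values and file names are placeholders, not the levels used in the study.

```python
# Sketch of the general technique only: speech-shaped noise mixed at several SNRs.
import numpy as np
import soundfile as sf

speech, sr = sf.read("neutral_sentence.wav")      # placeholder mono stimulus

# Speech-shaped noise: keep the magnitude spectrum of the speech, randomize the phase.
mags = np.abs(np.fft.rfft(speech))
phases = np.exp(1j * np.random.uniform(0, 2 * np.pi, size=mags.shape))
noise = np.fft.irfft(mags * phases, n=len(speech))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that speech power / noise power equals 10**(snr_db / 10)."""
    gain = np.sqrt(np.mean(speech ** 2) / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
    return speech + gain * noise

for snr_db in (0, -3, -6, -9):                    # four placeholder noise levels
    sf.write(f"mixed_snr_{snr_db}dB.wav", mix_at_snr(speech, noise, snr_db), sr)
```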

Prior work has also presented conflicting results on whether vocal emotions differ in how accurately they are recognized in the presence of typical background noise, like the din of a busy restaurant. Angry speech seems to have a recognition advantage, but is it special...or just more intense?

03.06.2025 21:43 · 👍 0    🔁 0    💬 1    📌 0

The acoustics of speech differ based on the emotional state of the speaker. In English, for instance, angry and happy speech tend to have higher mean F0 and mean intensity than neutral speech. But the literature is divided on whether this leads to any intelligibility difference across vocal emotions.

03.06.2025 21:42 · 👍 0    🔁 0    💬 1    📌 0
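To make those measures concrete, here is a back-of-the-envelope sketch (not the paper's analysis) estimating mean F0 with pYIN and mean intensity from RMS for a few hypothetical recordings; the file names are placeholders.

```python
# Back-of-the-envelope sketch, not the paper's analysis: mean F0 (pYIN) and mean
# RMS intensity for a few hypothetical recordings. File names are placeholders.
import numpy as np
import librosa

for label in ("neutral", "happy", "angry"):
    y, sr = librosa.load(f"{label}_sentence.wav", sr=None)   # placeholder stimuli
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    mean_f0 = np.nanmean(f0[voiced]) if np.any(voiced) else float("nan")
    rms_db = 20 * np.log10(np.mean(librosa.feature.rms(y=y)) + 1e-12)
    print(f"{label:>7}: mean F0 ~ {mean_f0:6.1f} Hz, mean intensity ~ {rms_db:5.1f} dB")
```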
High-arousal emotional speech enhances speech intelligibility and emotion recognition in noise
Prosodic and voice quality modulations of the speech signal offer acoustic cues to the emotional state of the speaker. In quiet, listeners are highly adept at i…

Officially out in JASA!

Paper: doi.org/10.1121/10.0036812
Data+Code: osf.io/g4kyh/

A short 🧵 below with details... (1/8)

03.06.2025 21:42 · 👍 0    🔁 0    💬 1    📌 0

Don't miss this year's CNSP workshop! Also, if you are a predoctoral or postdoctoral scholar, consider submitting a proposal for a methods tutorial! Submission form here: tinyurl.com/submit-cnsp-tutorial. 🧠🧑‍💻💡

16.05.2025 12:52 · 👍 1    🔁 0    💬 0    📌 0

Congrats, @vshirazy.bsky.social !!!

11.04.2025 00:04 · 👍 1    🔁 0    💬 0    📌 0

Thanks much!

12.01.2025 15:03 · 👍 0    🔁 0    💬 0    📌 0

This epitomizes why I love scientists. Oh, and chefs. 🧑‍🔬🧑‍🍳

11.01.2025 22:39 · 👍 1    🔁 0    💬 0    📌 0

Listening task? Are headphones required? If so, how do you ensure participants are using them? For instance, do you request a return if they fail a headphone check more than once? Asking for a friend… 🤫

11.01.2025 22:35 · 👍 0    🔁 0    💬 1    📌 0

😂😂🤓😂😂

17.12.2024 00:15 · 👍 0    🔁 0    💬 0    📌 0

This is great fun! Each round is a head-to-head between models that listen to your audio prompt and respond in text. Then you pick the winner. And, all the while, you contribute to 🗣️ AI research!

11.12.2024 15:00 · 👍 1    🔁 0    💬 0    📌 0

Attending #ASA187 hosted by @acousticsorg? Come check out my flashtalk in tomorrow's Suprasegmentals session! I will be sharing our recent results showing enhanced intelligibility and emotion recognition for happy and angry speech in noise, plus a dive under the hood of listener behavior.

18.11.2024 14:25 · 👍 2    🔁 0    💬 0    📌 0
