
Human Language Processing Lab

@hlplab.bsky.social

20 Followers  |  21 Following  |  30 Posts  |  Joined: 02.11.2024

Latest posts by hlplab.bsky.social on Bluesky



OSF with all stimuli, data, & code as well as detailed supplementary information: osf.io/2asgw/overview. Linked GitHub repo: github.com/hlplab/Causa...

12.12.2025 17:51 · 👍 0   🔁 0   💬 0   📌 0

Congrats to @brainnotonyet.bsky.social alumni Shawn Cummings, @gekagrob.bsky.social & Menghan Yan. Out in JEP:LMC @apajournals.bsky.social: listeners compensate their perception of spectral (acoustic) cues based on the visually evident consequences of a pen in the speaker's mouth! dx.doi.org/10.1037/xlm0...

12.12.2025 17:46 · 👍 2   🔁 1   💬 2   📌 0
How AI Hears Accents

Very cool new accent-relatedness visualization, examples, and some insightful observations: accent-explorer.boldvoice.com

17.10.2025 20:32 · 👍 0   🔁 0   💬 0   📌 0

Looking for researchers in computational neuroscience and cognition (incl. language, learning, development, decision-making) to join our faculty!

01.10.2025 19:58 · 👍 1   🔁 0   💬 0   📌 0

Review starts 11/1: Asst. prof. (tenure track), human cognition, Brain and CogSci, U Rochester www.sas.rochester.edu/bcs/jobs/fac...

12.09.2024 22:03 · 👍 3   🔁 3   💬 0   📌 0
GitHub - santiagobarreda/STM: The 'STM' (Sliding Template Model) R Package

New R library STM (github.com/santiagobarr...) by Santiago Barreda implements Nearey & Assmann's PST model of vowel perception and a fully Bayesian extension (the BSTM). It's easy to use and to apply to your data, and it's also what we used in our recent paper: www.degruyterbrill.com/document/doi...

30.09.2025 20:50 · 👍 1   🔁 0   💬 0   📌 0

As we write, Nearey & Assmann's PSTM presents a "groundbreaking idea [...], with far-reaching consequences for research from typology to sociolinguistics to speech perception … and few seem to know of it." We hope this paper can help change that! OSF osf.io/tpwmv/ 3/3

30.09.2025 20:46 · 👍 0   🔁 0   💬 0   📌 0
Experimental Approaches to Phonology: This wide-ranging survey of experimental methods in phonetics and phonology shows the insights and results provided by different methods of investigation, including laboratory-based, statistical, psyc...

Nearey & Assmann's PSTM (2007, www.google.com/books/editio...) remains the only fully incremental model of formant normalization, conducting joint inference over both the talker's normalization parameters (*who*'s talking) and the vowel category (*what* they are saying). 2/3

30.09.2025 20:42 · 👍 0   🔁 0   💬 0   📌 0
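To make the joint-inference idea in the post above concrete: the listener simultaneously infers a talker-specific normalization parameter and the vowel category, and categorization falls out of marginalizing over the talker parameter. Below is a minimal, illustrative Python sketch of that computation. The templates, the standard deviation, and the treatment of talker differences as a single uniform shift in log formant space are all assumptions of this sketch, not the PSTM's actual implementation (see the paper and the STM package for that).

```python
import math

# Toy sketch of joint inference over a talker scale parameter and a vowel
# category, in the spirit of a sliding-template account. All numbers are
# invented for illustration; this is NOT the PSTM implementation.

# Hypothetical category templates: mean log-F1/log-F2 for a reference talker.
templates = {
    "i": (math.log(300.0), math.log(2300.0)),
    "a": (math.log(750.0), math.log(1300.0)),
    "u": (math.log(320.0), math.log(900.0)),
}
sd = 0.08                                       # assumed within-category SD (log units)
scales = [s / 100.0 for s in range(-30, 31)]    # candidate talker shifts (log units)

def joint_posterior(f1, f2):
    """Return the normalized joint posterior P(category, shift | observed formants)."""
    lf1, lf2 = math.log(f1), math.log(f2)
    post = {}
    for cat, (m1, m2) in templates.items():
        for g in scales:
            # Likelihood of the observed log-formants if this talker's vowel
            # space is shifted by g relative to the template.
            ll = -((lf1 - (m1 + g)) ** 2 + (lf2 - (m2 + g)) ** 2) / (2 * sd ** 2)
            post[(cat, g)] = math.exp(ll)        # flat priors over category and shift
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

# Marginalize over the shift parameter to get category probabilities.
post = joint_posterior(f1=360.0, f2=2500.0)
p_cat = {c: sum(p for (cat, g), p in post.items() if cat == c) for c in templates}
print(p_cat)  # for these toy inputs, most of the mass falls on "i"
```

Marginalizing over the shift parameter yields category probabilities without ever committing to a single estimate of who is talking.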
Reintroducing and testing the Probabilistic Sliding Template Model of vowel perception: Normalization of the speech signal onto comparatively invariant phonetic representations is critical to speech perception. Assumptions about this process also play a central role in phonetics and phon...

New work w/ Santiago Barreda: www.degruyterbrill.com/document/doi.... We reintroduce Nearey & Assmann's seminal probabilistic sliding template model (PSTM), visualize its workings, & find that it predicts human vowel perception with high accuracy, far outperforming other normalization models. 1/3

30.09.2025 20:39 · 👍 0   🔁 0   💬 2   📌 0

DL captures human speech perception both *qualitatively* & *quantitatively* (R² > 96%) for over 400 combinations of exposure and test items. Yet, previous DL models fail to capture important limitations. Specifically, we find that DL seems to proceed by remixing previous experience. 2/2

30.09.2025 20:23 · 👍 0   🔁 0   💬 0   📌 0

Very excited about this: putting distributional learning (DL) models of adaptive speech perception to a strong, informative test (sciencedirect.com/science/arti...) by Maryann Tan. We use Bayesian ideal observers & adapters to assess whether DL predicts rapid changes in speech perception. 1/2

30.09.2025 20:23 · 👍 1   🔁 0   💬 1   📌 0
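For readers unfamiliar with the ideal observer/adapter framing in the thread above, here is a toy Python sketch of the basic logic: beliefs about a category's cue distribution are updated from exposure tokens, which shifts how a later test token is categorized. The single-cue setup, the category means, and all numbers are invented for illustration; this is not the models or code from the paper.

```python
import math

# Toy sketch of an ideal-adapter-style update: beliefs about a category's
# cue mean (here, VOT in ms) are updated from exposure tokens, shifting how
# a later test token is categorized. All numbers are invented; this is not
# the models or code from the paper.

def update_mean(prior_mean, prior_var, tokens, noise_var):
    """Conjugate normal-normal update of the belief about a category mean."""
    n = len(tokens)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(tokens) / noise_var)
    return post_mean, post_var

def p_voiced(x, mean_voiced, mean_voiceless, var):
    """P(voiced | cue x) for two equal-prior Gaussian categories."""
    def lik(m):
        return math.exp(-((x - m) ** 2) / (2.0 * var))
    return lik(mean_voiced) / (lik(mean_voiced) + lik(mean_voiceless))

# Prior beliefs: voiced stops around 0 ms VOT, voiceless around 50 ms.
mean_b, mean_p, var = 0.0, 50.0, 15.0 ** 2

print("before exposure:", p_voiced(30.0, mean_b, mean_p, var))

# Exposure to voiced-labeled tokens with unusually long VOTs shifts beliefs,
# and with them the categorization of the same 30 ms test token.
shifted_mean, _ = update_mean(mean_b, prior_var=100.0,
                              tokens=[25.0, 30.0, 28.0, 32.0], noise_var=var)
print("after exposure:", p_voiced(30.0, shifted_mean, mean_p, var))
```

The point of the ideal observer/adapter comparison is that predictions like these can be derived quantitatively and compared against listeners' responses.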

This has been a really eye-opening collaboration that made me realize how little I knew about the auditory system, the normalization of spectral information, & the consequences of making problematic assumptions about the perceptual basis of speech perception when building (psycho)linguistic models!

25.02.2025 15:04 · 👍 0   🔁 0   💬 0   📌 0
Anna Persson | Lecturer | Doctor of Philosophy | Stockholm University (SU), Stockholm | Department of Swedish Language and Multilingualism | Research profile: Lecturer in Swedish as a second language at the Department of Swedish Language and Multilingualism, Stockholm University.

This is the final paper from Anna Persson's thesis (www.researchgate.net/profile/Anna...) w/ Santiago Barreda (linguistics.ucdavis.edu/people/santi...).

Article & SI fully written in #rmarkdown. All data, experiment code, & analyses available on OSF osf.io/zemwn/ #reproducibility

25.02.2025 15:02 · 👍 0   🔁 0   💬 0   📌 0

Excited to see this out in JASA @asa-news.bsky.social (doi.org/10.1121/10.0....): it provides a large-scale evaluation of formant normalization accounts as models of vowel perception. @uor-braincogsci.bsky.social

25.02.2025 15:02 · 👍 2   🔁 0   💬 2   📌 0

Together w/ @wbushong.bsky.social's recent paper bsky.app/profile/wbus..., this lays out the road ahead for careful research on information maintenance during speech perception. The discussion in Wednesday's paper identifies strong assumptions made in this line of work that might not be warranted.

25.02.2025 14:47 · 👍 0   🔁 0   💬 0   📌 0
Bushong & Jaeger. Changes in informativity of sentential context affects its integration with subcategorical information about preceding speech. Hosted on the Open Science Framework.

Data and code available on OSF osf.io/cypg3/

25.02.2025 14:41 · 👍 0   🔁 0   💬 0   📌 0
Bicknell, Bushong, Tanenhaus, & Jaeger (2024). Maintenance of subcategorical information during speech perception: revisiting misunderstood limitations. Accurate word recognition is facilitated by context. Some relevant context, however, occurs after the word. Rational use of such "right context" would require listeners to have maintained uncertainty ...

All experiment code, analyses, and trial-level data available on OSF osf.io/6fng2/.

25.02.2025 14:38 · 👍 0   🔁 0   💬 0   📌 0

By comparing against ideal observer baselines, we identify a reliable, previously unrecognized pattern in listeners' responses that is unexpected under any existing theory. We present simulations that suggest that this pattern can emerge under ideal information maintenance w/ attentional lapses. 3/n

25.02.2025 14:38 · 👍 0   🔁 0   💬 0   📌 0
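The "ideal information maintenance with attentional lapses" idea in the post above can be summarized as a simple mixture. The Python sketch below is only illustrative: the lapse rate, the ideal-observer probability, and the lapse-trial guess are placeholder values, not the simulations from the paper.

```python
# Toy mixture of ideal-observer responses and lapse-trial guesses. On a
# lapse trial the listener ignores the maintained subcategorical detail and
# falls back on a context-only guess. All values are placeholders; this is
# not the paper's simulation code.

def response_prob(p_ideal, p_lapse_guess, lapse_rate):
    """P(response) = (1 - lambda) * P_ideal + lambda * P_guess."""
    return (1.0 - lapse_rate) * p_ideal + lapse_rate * p_lapse_guess

# An ideal observer that fully maintains subcategorical information might
# respond one way with probability 0.9; a 15% lapse rate toward a
# context-driven guess of 0.5 pulls the predicted rate inward to 0.84.
print(response_prob(p_ideal=0.9, p_lapse_guess=0.5, lapse_rate=0.15))
```

Mixtures of this form compress responses toward the guessing baseline without requiring any loss of the maintained subcategorical information itself.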

We present Bayesian GLMMs, ideal observer analyses, two re-analyses of previous studies and two new experiments. All data clearly reject the idea that uncertainty maintenance during speech perception is limited to ambiguous inputs or short-lived. 2/n

25.02.2025 14:36 · 👍 0   🔁 0   💬 0   📌 0
Maintenance of subcategorical information during speech perception: Revisiting misunderstood limitations. Accurate word recognition is facilitated by context. Some relevant context, however, occurs after the word. Rational use of such "right context" would…

Now out: exciting work w/ Klinton Bicknell, @wbushong.bsky.social, & Mike Tanenhaus www.sciencedirect.com/science/arti.... It's a massive tour-de-force, revisiting several misunderstood 'limitations' of information maintenance during spoken language understanding. @uor-braincogsci.bsky.social

25.02.2025 14:34 · 👍 2   🔁 0   💬 4   📌 0

@uor-braincogsci.bsky.social

25.02.2025 14:24 · 👍 0   🔁 0   💬 0   📌 0

We also revisit long-held assumptions about how we study the maintenance of perceptual information during spoken language understanding. We discuss why most evidence for such maintenance is actually compatible with simpler explanations. 2/2

18.02.2025 15:35 · 👍 0   🔁 0   💬 0   📌 0

New work by @wbushong.bsky.social out in JEP:LMC: listeners might strategically moderate maintenance of perceptual information during spoken language understanding based on the expected informativity of subsequent context. 1/2

18.02.2025 15:35 · 👍 0   🔁 0   💬 1   📌 0
CRC Prominence in Language

SFB-funded Collaborative Research Centre "Prominence in Language" at U Cologne, Germany offers junior & senior research fellowships for 1-6 months between 04-12/2025 (1800-2500 Euro/month): sfb1252.uni-koeln.de/en/ (20 projects in prosody, morphosyntax & semantics, text & discourse structure)

16.12.2024 15:41 · 👍 0   🔁 0   💬 0   📌 0

Select Institute for Collaborative Innovation as your application unit. Apply by 1/28/25 career.admo.um.edu.mo

16.12.2024 15:35 · 👍 0   🔁 0   💬 0   📌 0

Join Andriy Myachykov in Macau =): U Macau invites applications for research assistant professor & postdoctoral fellow positions under the UM Talent Programme, which aims to attract high-calibre talent, including in neuroscience & cognitive science at the UM Center for Cognitive & Brain Sciences (ccbs.ici.um.edu.mo).

16.12.2024 15:35 · 👍 0   🔁 0   💬 1   📌 0
Sculpting new visual categories into the human brain | PNAS: Learning requires changing the brain. This typically occurs through experience, study, or instruction. We report an alternate route for humans to a...

New paper story time (now out in PNAS)! We developed a method that caused people to learn new categories of visual objects, not by teaching them what the categories were, but by changing how their brains worked when they looked at individual objects in those categories.

www.pnas.org/doi/10.1073/...

04.12.2024 19:59 · 👍 152   🔁 62   💬 8   📌 7

Link to the whole thesis: www.diva-portal.org/smash/record...

07.12.2024 16:58 · 👍 0   🔁 0   💬 0   📌 0
Persson, A., Barreda, S. & Jaeger, T. F. Comparing accounts of formant normalization against US English listeners' vowel perception. This is the OSF for a paper that evaluates normalization accounts against US English vowel perception data. All data and code used to generate the article is available here. Hosted on the Open Sc...

Article 3 (currently under review at JASA) evaluates over a dozen of the most popular normalization accounts against the perception of English vowels & finds that some of the computationally most parsimonious accounts predict human behavior best. osf.io/zemwn/

07.12.2024 16:57 · 👍 0   🔁 0   💬 0   📌 0
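As background to the kinds of accounts compared in this article, here is a toy Python sketch of two standard formant normalization procedures: Lobanov z-scoring and a Nearey-style uniform log-mean normalization. The formulas are textbook; the tiny data set is invented, and this is not the paper's analysis code.

```python
import math
import statistics as st

# Toy sketch of two standard formant normalization accounts of the kind
# compared in the article: Lobanov z-scoring and a Nearey-style uniform
# log-mean normalization. The data are invented; this is not the paper's
# analysis code.

# Hypothetical F1/F2 measurements (Hz) for one talker's vowel tokens.
f1 = [310.0, 720.0, 340.0, 690.0]
f2 = [2250.0, 1320.0, 980.0, 1250.0]

def lobanov(values):
    """Z-score one formant within the talker: (F - mean) / sd."""
    m, s = st.mean(values), st.stdev(values)
    return [(v - m) / s for v in values]

def nearey_uniform(f1, f2):
    """Subtract the talker's grand mean log formant from every log formant."""
    logs = [math.log(v) for v in f1 + f2]
    g = st.mean(logs)
    return [math.log(v) - g for v in f1], [math.log(v) - g for v in f2]

print("Lobanov F1:", lobanov(f1))
print("Nearey (uniform):", nearey_uniform(f1, f2))
```

Both procedures return talker-relative values; the article evaluates which transformations of this kind best predict listeners' vowel categorization.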

Article 2 in Frontiers develops a general computational framework to evaluate accounts of formant normalization, and the predictions they make for perception www.frontiersin.org/journals/psy.... The article also discusses under-appreciated shortcomings of popular approaches to such comparisons.

07.12.2024 16:55 · 👍 0   🔁 0   💬 0   📌 0
