@hlplab.bsky.social

OSF with all stimuli, data, & code, as well as detailed supplementary information: osf.io/2asgw/overview. Linked GitHub repo: github.com/hlplab/Causa...
12.12.2025 17:51

Congrats to @brainnotonyet.bsky.social alumni Shawn Cummings, @gekagrob.bsky.social & Menghan Yan. Out in JEP:LMC @apajournals.bsky.social: listeners compensate their perception of spectral (acoustic) cues based on the visually evident consequences of a pen in the speaker's mouth! dx.doi.org/10.1037/xlm0...
12.12.2025 17:46

Very cool new accent-relatedness visualization, with examples and some insightful observations: accent-explorer.boldvoice.com
17.10.2025 20:32

Looking for researchers in computational neuroscience and cognition (incl. language, learning, development, decision-making) to join our faculty!
01.10.2025 19:58

Review starts 11/1: Asst. Prof. (tenure track) in human cognition, Brain and Cognitive Sciences, U Rochester: www.sas.rochester.edu/bcs/jobs/fac...
12.09.2024 22:03

New R library STM (github.com/santiagobarr...) by Santiago Barreda implements Nearey & Assmann's PST model of vowel perception, plus a fully Bayesian extension (the BSTM). It's easy to use and to apply to your data, and it's also what we used in our recent paper: www.degruyterbrill.com/document/doi...
30.09.2025 20:50

As we write, Nearey & Assmann's PSTM presents a "groundbreaking idea [...], with far-reaching consequences for research from typology to sociolinguistics to speech perception … and few seem to know of it." We hope this paper can help change that! OSF: osf.io/tpwmv/ 3/3
30.09.2025 20:46

Nearey & Assmann's PSTM (2007, www.google.com/books/editio...) remains the only fully incremental model of formant normalization, conducting joint inference over both the talker's normalization parameters (*who*'s talking) and the vowel category (*what* they are saying). 2/3
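The kind of joint inference described above can be illustrated with a toy grid approximation in the spirit of a sliding-template model. Everything below (the templates, noise value, priors, and token) is invented for illustration; this is not the PSTM's actual implementation:

```python
import numpy as np

# Toy sliding-template setup. Templates give mean log-formants
# (log F1, log F2) for three vowel categories; all numbers invented.
templates = {
    "i": np.log([300.0, 2300.0]),
    "a": np.log([750.0, 1300.0]),
    "u": np.log([350.0, 900.0]),
}
sigma = 0.08  # assumed perceptual noise (log-Hz units)

# Candidate talker parameters: a single uniform shift of the whole
# template set in log-frequency space, with a Gaussian prior.
psi_grid = np.linspace(-0.3, 0.3, 61)
psi_prior = np.exp(-0.5 * (psi_grid / 0.15) ** 2)
psi_prior /= psi_prior.sum()

def joint_posterior(obs_hz):
    """p(vowel, shift | formants): jointly infer *who* (shift) and *what* (vowel)."""
    obs = np.log(np.asarray(obs_hz, dtype=float))
    unnorm = {}
    for vowel, mu in templates.items():
        # Likelihood of the observation under this template at every candidate shift
        diff = obs[None, :] - (mu[None, :] + psi_grid[:, None])
        loglik = -0.5 * (diff ** 2).sum(axis=1) / sigma ** 2
        unnorm[vowel] = np.exp(loglik) * psi_prior  # uniform vowel prior
    z = sum(p.sum() for p in unnorm.values())
    return {vowel: p / z for vowel, p in unnorm.items()}

post = joint_posterior([340.0, 950.0])           # one observed token (F1, F2 in Hz)
p_vowel = {v: p.sum() for v, p in post.items()}  # marginalize over the talker shift
best = max(p_vowel, key=p_vowel.get)             # "u" for these toy numbers
```

Marginalizing the joint posterior over the shift yields category probabilities; marginalizing over categories instead yields beliefs about the talker, which is the part that could be carried forward incrementally from one token to the next.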
30.09.2025 20:42

New work w/ Santiago Barreda: www.degruyterbrill.com/document/doi... We reintroduce Nearey & Assmann's seminal probabilistic sliding template model (PSTM), visualize its workings, & find that it predicts human vowel perception with high accuracy, far outperforming other normalization models. 1/3
30.09.2025 20:39

DL captures human speech perception both *qualitatively* & *quantitatively* (R² > 96%) for over 400 combinations of exposure and test items. Yet previous DL models fail to capture important limitations: specifically, we find that DL seems to proceed by remixing previous experience. 2/2
30.09.2025 20:23

Very excited about this: putting distributional learning (DL) models of adaptive speech perception to a strong, informative test (sciencedirect.com/science/arti...), by Maryann Tan. We use Bayesian ideal observers & adapters to assess whether DL predicts rapid changes in speech perception. 1/2
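As a rough illustration of the two modeling ingredients named above (with invented parameters, not those of the paper): an ideal observer categorizes a cue under Gaussian category beliefs, and an ideal adapter updates those beliefs from exposure tokens, here via conjugate normal-normal updating of one category's mean:

```python
import numpy as np

# Toy ideal observer/adapter over a 1D acoustic cue (e.g., VOT in ms).
# Categories are Gaussians: (mean, sd); all parameters invented.
prior = {"b": (0.0, 15.0), "p": (50.0, 15.0)}

def posterior_p(x, cats):
    """Ideal observer: p('p' | cue x) under Gaussian category likelihoods."""
    def lik(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / sd
    lp, lb = lik(x, *cats["p"]), lik(x, *cats["b"])
    return lp / (lp + lb)

def adapt(cats, cat, tokens, prior_sd=10.0):
    """Ideal adapter: conjugate update of one category's mean after
    exposure (known category variance, normal-normal updating)."""
    mu0, sd = cats[cat]
    n, xbar = len(tokens), float(np.mean(tokens))
    w = (n / sd**2) / (n / sd**2 + 1 / prior_sd**2)  # weight on the data
    new = dict(cats)
    new[cat] = (w * xbar + (1 - w) * mu0, sd)
    return new

# Exposure to /p/ tokens with unusually long VOT shifts the /p/ mean up,
# so an ambiguous token (x = 35) is now less likely to be heard as /p/.
shifted = adapt(prior, "p", [70.0, 75.0, 72.0, 68.0])
```

The point of building such models is that they make quantitative predictions: the same exposure distribution fixes exactly how far the category boundary should move, which is what lets DL be tested against human responses rather than just eyeballed.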
30.09.2025 20:23

This has been a really eye-opening collaboration that made me realize how little I knew about the auditory system, the normalization of spectral information, & the consequences of making problematic assumptions about the perceptual basis of speech perception when building (psycho)linguistic models!
25.02.2025 15:04

This is the final paper from Anna Persson's thesis (www.researchgate.net/profile/Anna...) w/ Santiago Barreda (linguistics.ucdavis.edu/people/santi...).
Article & SI fully written in #rmarkdown. All data, experiment code, & analyses available on OSF: osf.io/zemwn/ #reproducibility

Excited to see this out in JASA @asa-news.bsky.social: doi.org/10.1121/10.0... It provides a large-scale evaluation of formant normalization accounts as a model of vowel perception. @uor-braincogsci.bsky.social
25.02.2025 15:02

Together w/ @wbushong.bsky.social's recent paper (bsky.app/profile/wbus...), this lays out the road ahead for careful research on information maintenance during speech perception. The discussion in Wednesday's paper identifies strong assumptions made in this line of work that might not be warranted.
25.02.2025 14:47

Data and code available on OSF: osf.io/cypg3/
25.02.2025 14:41

All experiment code, analyses, and trial-level data available on OSF: osf.io/6fng2/
25.02.2025 14:38

By comparing against ideal observer baselines, we identify a reliable, previously unrecognized pattern in listeners' responses that is unexpected under any existing theory. We present simulations suggesting that this pattern can emerge under ideal information maintenance w/ attentional lapses. 3/n
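A minimal sketch of what such an attentional-lapse mixture looks like (lapse rate, guessing rate, and posteriors all invented for illustration): on a lapse trial the listener guesses at some base rate; otherwise they respond from the ideal observer's posterior. The mixture compresses responses toward the guessing rate without shifting the category boundary:

```python
import numpy as np

def response_prob(p_ideal, lapse=0.1, guess=0.5):
    """p(respond 'category A') under a lapse-rate mixture:
    with prob `lapse` the listener guesses, else responds from
    the ideal-observer posterior `p_ideal`."""
    return lapse * guess + (1 - lapse) * np.asarray(p_ideal)

# Ideal posteriors at five points along a cue continuum
p_ideal = np.array([0.01, 0.25, 0.5, 0.75, 0.99])
p_resp = response_prob(p_ideal)
# Endpoints are pulled toward 0.5; the 50% crossover stays put.
```

This is why lapses can mimic apparent 'loss' of perceptual information: asymptotic response rates flatten even when the underlying posterior maintains the input perfectly.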
25.02.2025 14:38

We present Bayesian GLMMs, ideal observer analyses, two re-analyses of previous studies, and two new experiments. All data clearly reject the idea that uncertainty maintenance during speech perception is limited to ambiguous inputs or short-lived. 2/n
25.02.2025 14:36

Now out: exciting work w/ Klinton Bicknell, @wbushong.bsky.social, & Mike Tanenhaus: www.sciencedirect.com/science/arti... It's a massive tour de force, revisiting several misunderstood 'limitations' of information maintenance during spoken language understanding. @uor-braincogsci.bsky.social
25.02.2025 14:34

@uor-braincogsci.bsky.social
25.02.2025 14:24

We also revisit long-held assumptions about how we study the maintenance of perceptual information during spoken language understanding. We discuss why most evidence for such maintenance is actually compatible with simpler explanations. 2/2
18.02.2025 15:35

New work by @wbushong.bsky.social out in JEP:LMC: listeners might strategically moderate maintenance of perceptual information during spoken language understanding based on the expected informativity of subsequent context. 1/2
18.02.2025 15:35

SFB-funded Collaborative Research Centre "Prominence in Language" at U Cologne, Germany offers junior & senior research fellowships for 1-6 months between 04-12/2025 (1800-2500 Euro/month): sfb1252.uni-koeln.de/en/ (20 projects in prosody, morphosyntax & semantics, text & discourse structure)
16.12.2024 15:41

Select Institute for Collaborative Innovation as your application unit. Apply by 1/28/25: career.admo.um.edu.mo
16.12.2024 15:35

Join Andriy Myachykov in Macau =): U Macau invites applications for research assistant professors & postdoctoral fellows under the UM Talent Programme, aiming to attract high-calibre talent, including in neuroscience & cognitive science at the UM Center for Cognitive & Brain Sciences (ccbs.ici.um.edu.mo).
16.12.2024 15:35

New paper story time (now out in PNAS)! We developed a method that caused people to learn new categories of visual objects, not by teaching them what the categories were, but by changing how their brains worked when they looked at individual objects in those categories. www.pnas.org/doi/10.1073/...

Link to the whole thesis: www.diva-portal.org/smash/record...
07.12.2024 16:58

Article 3 (currently under review at JASA) evaluates over a dozen of the most popular normalization accounts against the perception of English vowels & finds that some of the computationally most parsimonious accounts predict human behavior best. osf.io/zemwn/
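For readers wondering what a "computationally parsimonious" account can look like: one classic single-parameter family, uniform scaling in log-frequency space (log-mean normalization in the spirit of Nearey), can be sketched with toy data. The formant values below are invented, and this is not the article's code:

```python
import numpy as np

# Toy formant data (F1, F2 in Hz) for two talkers saying the "same"
# three vowels; talker B's shorter vocal tract scales all formants
# up by a single multiplicative factor.
talker_a = np.array([[300.0, 2300.0],
                     [750.0, 1300.0],
                     [350.0,  900.0]])
talker_b = talker_a * 1.2  # uniform scaling: one talker parameter

def log_mean_normalize(formants_hz):
    """Single-parameter normalization: subtract the talker's grand
    mean log-formant from every log-formant value."""
    logf = np.log(formants_hz)
    return logf - logf.mean()

na = log_mean_normalize(talker_a)
nb = log_mean_normalize(talker_b)
# After normalization the two talkers' vowel spaces coincide:
# np.allclose(na, nb) -> True
```

The appeal of such accounts is that a single estimated number per talker suffices; uniform scaling in log space cancels exactly under mean subtraction, so the normalized vowel spaces line up without any vowel-specific correction.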
07.12.2024 16:57

Article 2, in Frontiers, develops a general computational framework to evaluate accounts of formant normalization and the predictions they make for perception: www.frontiersin.org/journals/psy... The article also discusses under-appreciated shortcomings of popular approaches to such comparisons.
07.12.2024 16:55