7/7 Check out our CogSci proceedings paper for more details, and stay tuned for updates! Thanks to all who provided their feedback :)
escholarship.org/uc/item/9kr1...
6/7 We release our model's code on GitHub: github.com/thomashikaru...
31.07.2025 17:55
5/7 The model also returns incremental surprisals (quantified as mean particle weight; here tested on sentences from Ryskin et al., 2021, @ryskin.bsky.social), which can be compared to a baseline LM. "Explainable errors" tend to be less surprising under our model than under the baseline.
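As a minimal sketch of the surprisal idea above (the function name is hypothetical, and the link between mean particle weight and surprisal is my assumption, not the authors' stated formula): if the mean particle weight at word t approximates p(word_t | preceding words) under the model, surprisal is its negative log.

```python
import math

def incremental_surprisal(mean_particle_weight):
    # Hypothetical sketch: treating the mean particle weight at word t as an
    # approximation of p(word_t | words so far), surprisal in bits is its
    # negative base-2 log. Lower surprisal = the word was more expected.
    return -math.log2(mean_particle_weight)
```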
4/7 The rich, interpretable output of the model includes posteriors over inferred errors at each word and over intended (latent) sentences. The model makes inferences that are consistent with the human noisy-channel inferences implied by Gibson et al., 2013.
3/7 We combine a generative model of noisy production (an LM prior plus a symbolic error model) with approximate, incremental Sequential Monte Carlo inference. This allows fine-grained control over the error types under consideration and over the inference dynamics.
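To make the combination above concrete, here is a toy sketch of one Sequential Monte Carlo step, not the authors' released implementation (see their GitHub repo for that): particles are hypothesized intended sentences, each extended and weighted by a stub LM prior times a symbolic error likelihood, then resampled. All names and the stub probabilities are illustrative assumptions.

```python
import random

def lm_prior(word, context):
    # Stub prior; a real model would query an LM's next-word probability.
    return 0.5

def error_likelihood(observed, intended):
    # Toy symbolic error model: the speaker usually produces the intended
    # word, with a small probability of a substitution error.
    return 0.9 if observed == intended else 0.1

def smc_step(particles, observed_word, vocab, n_particles=100):
    """One incremental SMC step: propose, weight, and resample particles.

    Each particle is a list of hypothesized intended words so far.
    Returns the resampled particles and the mean particle weight,
    which serves as an incremental score for the observed word.
    """
    extended, weights = [], []
    for sent in particles:
        intended = random.choice(vocab)  # naive proposal over the vocabulary
        w = lm_prior(intended, sent) * error_likelihood(observed_word, intended)
        extended.append(sent + [intended])
        weights.append(w)
    total = sum(weights)
    mean_weight = total / len(weights)
    probs = [w / total for w in weights]
    # Multinomial resampling keeps the particle set focused on likely hypotheses.
    new_particles = random.choices(extended, weights=probs, k=n_particles)
    return new_particles, mean_weight
```

A real system would use a smarter proposal distribution than uniform vocabulary sampling, but the propose-weight-resample loop is the core of incremental SMC inference.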
2/7 According to noisy-channel theory, humans interpret utterances non-literally, using both linguistic priors and error likelihoods. However, how this works at the algorithmic level is an open question, and one that implemented computational models can help us explore.
1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday 1pm PDT in Nob Hill A! I'll be talking about our work towards an implemented computational model of noisy-channel comprehension (with @postylem.bsky.social, Ted Gibson, and @rplevy.bsky.social).