
Thomas Hikaru Clark

@thomashikaru.bsky.social

MIT Brain and Cognitive Sciences

104 Followers  |  144 Following  |  7 Posts  |  Joined: 10.04.2025

Posts by Thomas Hikaru Clark (@thomashikaru.bsky.social)

A Model of Approximate and Incremental Noisy-Channel Language Processing Author(s): Clark, Thomas; Vigly, Jacob Hoover; Gibson, Edward; Levy, Roger | Abstract: How are comprehenders able to extract meaning from utterances in the presence of production errors? The noisy-cha...

7/7 Check out our CogSci proceedings paper for more details, and stay tuned for updates! Thanks to all who provided their feedback :)
escholarship.org/uc/item/9kr1...

31.07.2025 17:55 — 👍 2    🔁 0    💬 0    📌 0
GitHub - thomashikaru/noisy_channel_model

6/7 We release our model's code on GitHub: github.com/thomashikaru...

31.07.2025 17:55 — 👍 2    🔁 0    💬 1    📌 0
[Post image]

5/7 The model also returns incremental surprisals (quantified as the mean particle weight, here tested on sentences from Ryskin et al., 2021 @ryskin.bsky.social), which can be compared against a baseline LM. "Explainable errors" tend to be less surprising under our model than under the baseline.
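A minimal sketch of the idea in this post, with made-up numbers (the function name and weight values are illustrative, not from the released code): the mean unnormalized particle weight at a word estimates that word's predictive probability, so its negative log serves as an incremental surprisal estimate.

```python
import math

def incremental_surprisal(weights):
    """Surprisal (bits) estimated from unnormalized particle weights
    collected after observing one word."""
    mean_w = sum(weights) / len(weights)
    return -math.log2(mean_w)

# A word the model can explain via a likely production error keeps
# particle weights high, so it comes out less surprising than a word
# the model cannot explain (toy weight values).
explainable = incremental_surprisal([0.4, 0.5, 0.45, 0.5])
hard = incremental_surprisal([0.01, 0.02, 0.015, 0.01])
```

Under this scheme, a lower mean weight at a word directly translates into higher estimated surprisal for that word.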

31.07.2025 17:55 — 👍 2    🔁 0    💬 1    📌 0
[Post image]

4/7 The rich, interpretable output of the model includes posteriors over inferred errors at each word and over intended (latent) sentences. The model makes inferences that are consistent with the human noisy-channel inferences implied by Gibson et al., 2013.

31.07.2025 17:55 — 👍 2    🔁 0    💬 1    📌 0
[Post image]

3/7 We combine a generative model of noisy production (an LM prior plus a symbolic error model) with approximate, incremental Sequential Monte Carlo inference. This allows fine-grained control over both the error types under consideration and the inference dynamics.
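A toy sketch of the Sequential Monte Carlo setup described above, not the released implementation: the vocabulary, the uniform "LM" prior, and the substitution-only error model are all simplifying assumptions for illustration. Each particle holds a hypothesized intended word per observed word; importance weights combine prior and error likelihood, and particles are resampled after each word.

```python
import random

random.seed(0)

VOCAB = ["the", "ball", "was", "kicked", "by", "boy"]

def lm_prior(word):
    # Stand-in for a real incremental LM: uniform over the vocabulary.
    return 1.0 / len(VOCAB)

def error_lik(observed, intended):
    # Symbolic error model: faithful transmission is likely; the
    # remaining mass is spread over substitution errors.
    if observed == intended:
        return 0.9
    return 0.1 / (len(VOCAB) - 1)

def smc(observed_words, n_particles=200):
    """Sequential importance resampling over latent intended sentences."""
    particles = [[] for _ in range(n_particles)]
    for obs in observed_words:
        weights = []
        for p in particles:
            intended = random.choice(VOCAB)  # propose uniformly
            p.append(intended)
            # Importance weight: prior * likelihood / proposal. The
            # uniform prior and uniform proposal cancel here, leaving
            # the error likelihood.
            weights.append(lm_prior(intended) * error_lik(obs, intended)
                           / (1.0 / len(VOCAB)))
        total = sum(weights)
        probs = [w / total for w in weights]
        # Multinomial resampling: high-weight hypotheses are copied.
        idx = random.choices(range(n_particles), weights=probs,
                             k=n_particles)
        particles = [particles[i][:] for i in idx]
    return particles

hyps = smc(["the", "ball"])
faithful = sum(h == ["the", "ball"] for h in hyps)
```

After resampling, most surviving particles track the faithful (error-free) reading, while a minority carry alternative intended-sentence hypotheses that a richer error model could promote.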

31.07.2025 17:55 — 👍 2    🔁 0    💬 1    📌 0

2/7 According to noisy-channel theory, humans interpret utterances non-literally using both linguistic priors and error likelihoods. However, how this works at the algorithmic level is an open question, one that implemented computational models can help us explore.
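The noisy-channel posterior in this post can be illustrated with a tiny worked example (the sentences follow the classic Gibson et al., 2013 double-object item, but the prior and likelihood values are made-up numbers, not estimates from the paper): the comprehender weighs each candidate intended sentence by prior times error likelihood.

```python
# P(intended | perceived) is proportional to P(intended) * P(perceived | intended).
# Candidate intended sentences for the perceived string
# "The mother gave the candle the daughter.":
candidates = {
    # intended sentence: (prior, error likelihood) -- toy values
    "The mother gave the candle the daughter.": (1e-9, 0.9),     # literal, implausible
    "The mother gave the candle to the daughter.": (1e-6, 0.05), # plausible, one deletion
}

def noisy_channel_posterior(cands):
    unnorm = {s: prior * lik for s, (prior, lik) in cands.items()}
    z = sum(unnorm.values())
    return {s: w / z for s, w in unnorm.items()}

post = noisy_channel_posterior(candidates)
# The non-literal reading dominates: its prior advantage outweighs
# the cost of positing a deleted "to".
```

This is the qualitative pattern noisy-channel theory predicts: a small error-likelihood penalty is overwhelmed by a large prior-plausibility advantage.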

31.07.2025 17:55 — 👍 2    🔁 0    💬 1    📌 0
[Post image]

1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday 1pm PDT in Nob Hill A! I'll be talking about our work towards an implemented computational model of noisy-channel comprehension (with @postylem.bsky.social, Ted Gibson, and @rplevy.bsky.social).

31.07.2025 17:55 — 👍 18    🔁 7    💬 1    📌 0