TunePad (tunepad.com)! Developed by amazing researchers at Northwestern, extremely friendly and intuitive UI. I'm teaching TunePad to fifth-graders at local schools and most of them are loving it
22.02.2026 03:23
@anziw.bsky.social
linguistics phd student
one year ago i thought p-side was about predicates and s-side was about subjects :3
18.01.2026 20:05
Wow!! Congratulations!!
07.01.2026 23:28
Reposting because the link has expired:
PDF: drive.google.com/file/d/1t2EF... (if this doesn't work, lmk)
Publisher link: www.sciencedirect.com/science/arti...
so cool!!
14.11.2025 21:18
New Preprint: osf.io/eq2ra
Reading feels effortless, but it's actually quite complex under the hood. Most words are easy to process, but some words make us reread or linger. It turns out that LLMs can tell us why, but only in certain cases... (1/n)
Screenshot of a figure with two panels, labeled (a) and (b). The caption reads: "Figure 1: (a) Illustration of messages (left) and strings (right) in toy domain. Blue = grammatical strings. Red = ungrammatical strings. (b) Surprisal (negative log probability) assigned to toy strings by GPT-2."
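The quantity in panel (b), surprisal, is just the negative log probability a model assigns to a word or string. A minimal sketch, using made-up probabilities rather than real GPT-2 outputs:

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: the negative log probability of an event.
    Predictable words get low values; surprising words get high ones."""
    return -math.log2(prob)

# Hypothetical word probabilities, purely for illustration:
print(surprisal(0.5))   # a highly predictable word -> 1.0 bit
print(surprisal(0.01))  # an unexpected word -> ~6.64 bits
```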
New work to appear @ TACL!
Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.
Yet they often assign higher probability to ungrammatical strings than to grammatical strings.
How can both things be true? 🧵
I took Grusha and Forrest's version of NLP at Colgate (arxiv.org/abs/2408.05664) and as a current linguistics phd student still doing NLP research, I can say that this is THE undergrad course that has benefited me the most
10.11.2025 03:12
I've found it kind of a pain to work with resources like VerbNet, FrameNet, PropBank (frame files), and WordNet using existing tools. Maybe you have too. Here's a little package that handles data management, loading, and cross-referencing via either a CLI or a Python API.
27.09.2025 13:51
Brand new version of this paper (now a short book!) available at lingbuzz.net/lingbuzz/008...!
26.09.2025 18:12
favorite garden path sentence of the year: "It's better to be hurt by someone you know accidentally, than by a stranger on purpose" by Dwight Schrute
23.09.2025 17:14
probably? maybe you can have different policies each semester and test it out lol
02.09.2025 00:14
might depend more on participation policy
01.09.2025 19:13
A paper with Vic Ferreira and Norvin Richards is now out
(1) Speakers syntactically encode zero complementizers as cognitively active mental objects.
(2) No evidence that LLMs capture cross-constructional generalizations about null complementizers.
nam10.safelinks.protection.outlook.com?url=https%3A...
Work with kittens? Check out the National Kitten Coalition's new Kitten Resource Library! They're an org I like a lot!! library.kittencoalition.org
25.07.2025 23:50
"We introduce Collaborative Rational Speech Act (CRSA), an information-theoretic (IT) extension of RSA that models multi-turn dialog by optimizing a gain function adapted from rate-distortion theory."
arxiv.org/abs/2507.14063
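For context, the base RSA recursion that CRSA extends can be sketched as below. Everything here is a toy assumption (hypothetical two-referent lexicon, uniform priors, no utterance costs), and CRSA's multi-turn gain function is not shown:

```python
import math

def normalize(d):
    """Rescale a dict of nonnegative scores into a probability distribution."""
    z = sum(d.values())
    return {k: v / z for k, v in d.items()} if z else d

def rsa(lexicon, meanings, utterances, alpha=1.0):
    """Vanilla Rational Speech Act model.
    lexicon[u][m] = 1 if utterance u is literally true of meaning m."""
    # Literal listener: L0(m|u) proportional to lexicon[u][m], uniform prior
    L0 = {u: normalize({m: lexicon[u][m] for m in meanings}) for u in utterances}
    # Pragmatic speaker: S1(u|m) proportional to exp(alpha * log L0(m|u))
    S1 = {}
    for m in meanings:
        scores = {u: math.exp(alpha * math.log(L0[u][m])) if L0[u][m] > 0 else 0.0
                  for u in utterances}
        S1[m] = normalize(scores)
    # Pragmatic listener: L1(m|u) proportional to S1(u|m), uniform prior
    L1 = {u: normalize({m: S1[m][u] for m in meanings}) for u in utterances}
    return L1

# Classic reference-game toy: r1 wears glasses only; r2 wears glasses and a hat
meanings = ["r1", "r2"]
utterances = ["glasses", "hat"]
lexicon = {"glasses": {"r1": 1, "r2": 1}, "hat": {"r1": 0, "r2": 1}}
L1 = rsa(lexicon, meanings, utterances)
# "glasses" now favors r1: a speaker meaning r2 would rather have said "hat"
```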
Someone asked me today how to get better at scientific writing. I'm not the best person to ask because I find my own writing very inadequate! But the tips I thought of were:
1. Practice, and practice with co-authors who are better writers than you. Observe how they make edits and copy them.
(1/n)
early November is also the best season for crabs!!
19.05.2025 01:58
is computational psycholinguistics a poly-sci? #puns #linguistics
24.01.2025 14:54