
Qing Yao

@qyao.bsky.social

Linguistics PhD student at UT Austin

29 Followers  |  29 Following  |  15 Posts  |  Joined: 20.11.2024

Latest posts by qyao.bsky.social on Bluesky

LMs' dative alternation preferences come from both direct evidence and more general properties of language. They don't just memorize – they generalize! See the paper for details on animacy too (interestingly more complicated!)

31.03.2025 13:30 — 👍 4    🔁 0    💬 1    📌 0
LMs' length preference vs. perplexity on validation set. We see that models whose training set manipulation reduces exposure to short-first orderings are the ones which have weaker short-first preference.

Learned length preference changes with the input manipulation. That is, the more "long-first" we make the input, the weaker the short-first preference. We think this shows the dative preferences in models come not just from datives but from general properties of English.

31.03.2025 13:30 — 👍 6    🔁 0    💬 1    📌 0

For example, "The primates use tools to eat the green coconuts from the shop" becomes:
- Short-first: [tools] use [the primates] [[to] eat [[the] [green] coconuts [from the shop]]]
- Long-first: [[[from the shop] [the] coconuts [green]] eat [to]] use [the primates] [tools]

31.03.2025 13:30 — 👍 2    🔁 0    💬 1    📌 0
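The reordering above can be sketched as a recursive sort of constituents by length. This is an illustrative toy, not the paper's pipeline: the tree below is hand-built, and ties between equal-length constituents are broken arbitrarily, so the exact output strings can differ from the example in the post.

```python
# Toy sketch of the short-first / long-first manipulation (illustrative,
# not the paper's actual code). A constituent is a word (str) or a list
# of sub-constituents; at every node we sort the children by length in
# words, ascending (short-first) or descending (long-first).

def n_words(c):
    """Number of words spanned by a constituent."""
    return 1 if isinstance(c, str) else sum(n_words(x) for x in c)

def reorder(c, long_first=False):
    """Recursively sort every constituent's children by length.
    Ties keep their original order (Python's sort is stable)."""
    if isinstance(c, str):
        return c
    kids = [reorder(x, long_first) for x in c]
    return sorted(kids, key=n_words, reverse=long_first)

def flatten(c):
    return [c] if isinstance(c, str) else [w for x in c for w in flatten(x)]

# Hand-built bracketing of the example sentence.
tree = [["The", "primates"], "use", ["tools"],
        [["to"], "eat",
         [["the"], ["green"], "coconuts", ["from", ["the", "shop"]]]]]

short_first = " ".join(flatten(reorder(tree)))
long_first = " ".join(flatten(reorder(tree, long_first=True)))
print(short_first)
print(long_first)
```

The manipulation only permutes constituents, so both orders contain exactly the same words as the original sentence.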

We think it plausibly comes not from the datives alone but from general properties of English (which is "short-first"). To test that, we manipulate the global structure of the input, creating one corpus where every sentence is short-first and one where they're all long-first.

31.03.2025 13:30 — 👍 3    🔁 0    💬 1    📌 0
DO preference vs. length difference when we remove all datives (left) and all cases with two post-verbal arguments (right). The Pearson correlation r is now -0.24 for the no-datives condition and -0.22 for the no-two-post-verbal-arguments condition.

Now what if we get rid of datives, and further all constructions which have two post-verbal arguments? Now we see the length preference is back again. Yes, it's smaller (direct evidence matters), but why is it there? Where does it come from, if not the datives?

31.03.2025 13:30 — 👍 3    🔁 0    💬 1    📌 0
DO preference vs. length difference for the balanced and swapped-datives manipulations. Left: balanced, Pearson correlation r = -0.33; right: swapped-datives, Pearson correlation r = -0.03.

What if we modify the corpus so that for every DO there is a PO (balancing the direct evidence)? The preferences are still present! But what if we SWAP every dative in the input, so that every DO becomes a PO and every PO a DO? The preference essentially disappears (but does not flip!)

31.03.2025 13:30 — 👍 2    🔁 0    💬 1    📌 0
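The swapped-datives manipulation boils down to a minimal rewrite rule. A hypothetical sketch: it assumes each dative has already been parsed into (verb, recipient, theme), which is the hard part in practice and which the real pipeline gets from parsed corpora.

```python
# Hypothetical sketch of the DO <-> PO swap (illustrative only; assumes
# the dative is already parsed into verb, recipient, and theme).

def to_po(verb, recipient, theme):
    # DO "gave him the book" -> PO "gave the book to him"
    return f"{verb} {theme} to {recipient}"

def to_do(verb, recipient, theme):
    # PO "gave the book to him" -> DO "gave him the book"
    return f"{verb} {recipient} {theme}"

print(to_po("gave", "him", "the book that everyone was excited to read"))
print(to_do("gave", "him", "the book that everyone was excited to read"))
```

Applying `to_po` to every DO and `to_do` to every PO yields the swapped corpus; applying it to only enough items to equalize the counts yields the balanced one.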
Left: plot showing DO preference vs. human judgments, Pearson's r = 0.5; right: plot showing DO preference as a function of (log) length difference between the recipient and the theme, Pearson's r = -0.43, where the negative sign indicates short-first is preferred.

To test this, we train small LMs on manipulated datasets, varying direct (dative) and indirect (non-dative) evidence, and measure the change in their preferences. First, we see human-like preferences from a model trained on our default BabyLM corpus.

31.03.2025 13:30 — 👍 2    🔁 0    💬 1    📌 0
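A "DO preference" score of the kind plotted above is commonly operationalized as the log-probability difference between the two variants of the same dative. The sketch below uses a stand-in toy unigram scorer in place of a trained LM; the scorer and function names are illustrative assumptions, not the paper's code.

```python
import math

# Sketch: DO preference = log P(DO variant) - log P(PO variant).
# A positive score means the model prefers the double-object order.
# score_sentence is a TOY unigram model standing in for a trained LM;
# in practice you would sum the LM's token log-probabilities instead.

TOY_COUNTS = {"she": 5, "gave": 5, "him": 3, "it": 3, "the": 8,
              "book": 2, "to": 6, "boy": 2}
TOTAL = sum(TOY_COUNTS.values())

def score_sentence(sentence):
    """Toy unigram log-probability (unknown words get count 1)."""
    return sum(math.log(TOY_COUNTS.get(w, 1) / TOTAL)
               for w in sentence.lower().split())

def do_preference(do_sentence, po_sentence):
    return score_sentence(do_sentence) - score_sentence(po_sentence)

pref = do_preference("She gave him the book", "She gave the book to him")
print(f"DO preference: {pref:.3f}")
```

Averaging this score over a test set of dative pairs, or regressing it on the recipient-theme length difference, gives the quantities shown in the plots.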

The English dative preferences come from more general features of the language: short constituents tend to appear earlier across the board, not just in the dative. We hypothesize that LMs rely on direct evidence from datives but also on general word-order preferences (e.g. "easy first") from non-datives.

31.03.2025 13:30 — 👍 2    🔁 0    💬 1    📌 0
Examples of double-object (DO) and prepositional-object (PO) datives with short-first and long-first word orders:
DO (long-first): She gave the boy who signed up for class and was excited it.
PO (short-first): She gave it to the boy who signed up for class and was excited.
DO (short-first): She gave him the book that everyone was excited to read.
PO (long-first): She gave the book that everyone was excited to read to him.

LMs learn argument-based preferences for dative constructions (preferring recipient first when it's shorter), consistent with humans. Is this from memorizing preferences in training? New paper w/ @kanishka.bsky.social, @weissweiler.bsky.social, @kmahowald.bsky.social

arxiv.org/abs/2503.20850

31.03.2025 13:30 — 👍 18    🔁 8    💬 1    📌 6

