
Qing Yao

@qyao.bsky.social

Linguistics PhD student at UT Austin

37 Followers  |  34 Following  |  15 Posts  |  Joined: 20.11.2024

Latest posts by qyao.bsky.social on Bluesky

UT Austin Computational Linguistics Research Group – Humans processing computers processing humans processing language

UT Austin Linguistics is hiring in computational linguistics!

Assistant or Associate Professor.

We have a thriving group (sites.utexas.edu/compling/) and a long, proud history in the space. (For instance, fun fact: Jeff Elman was a UT Austin Linguistics Ph.D.)

faculty.utexas.edu/career/170793

🤘

07.10.2025 20:53 · 👍 28  🔁 17  💬 1  📌 4

Excited to present this at COLM tomorrow! (Tuesday, 11:00 AM poster session)

06.10.2025 15:21 · 👍 3  🔁 2  💬 0  📌 0

Heading to #COLM2025 to present my first paper w/ @jennhu.bsky.social @kmahowald.bsky.social !

When: Tuesday, 11 AM – 1 PM
Where: Poster #75

Happy to chat about my work and topics in computational linguistics & cogsci!

Also, I'm on the PhD application journey this cycle!

Paper info 👇:

06.10.2025 16:05 · 👍 7  🔁 3  💬 0  📌 0
Language Models Fail to Introspect About Their Knowledge of Language: There has been recent interest in whether large language models (LLMs) can introspect about their own internal states. Such abilities would make LLMs more interpretable, and also validate the use of s...

I’m at #COLM2025 from Wed with:

@siyuansong.bsky.social Tue am introspection arxiv.org/abs/2503.07513

@qyao.bsky.social Wed am controlled rearing: arxiv.org/abs/2503.20850

@sashaboguraev.bsky.social INTERPLAY ling interp: arxiv.org/abs/2505.16002

I’ll talk at INTERPLAY too. Come say hi!

06.10.2025 15:57 · 👍 20  🔁 6  💬 1  📌 0

I will be giving a short talk on this work at the COLM Interplay workshop on Friday (also to appear at EMNLP)!

Will be in Montreal all week and excited to chat about LM interpretability + its interaction with human cognition and ling theory.

06.10.2025 12:05 · 👍 8  🔁 5  💬 0  📌 0
Picture of the UT Tower with "UT Austin Computational Linguistics" written in bigger font, and "Humans processing computers processing humans processing language" in smaller font

The compling group at UT Austin (sites.utexas.edu/compling/) is looking for PhD students!

Come join me, @kmahowald.bsky.social, and @jessyjli.bsky.social as we tackle interesting research questions at the intersection of ling, cogsci, and AI!

Some topics I am particularly interested in:

30.09.2025 16:17 · 👍 18  🔁 10  💬 3  📌 2

LMs’ dative alternation preferences come from both direct evidence and more general properties of language. They don’t just memorize, they generalize! See the paper for details on animacy too (interestingly more complicated!)

31.03.2025 13:30 · 👍 4  🔁 0  💬 1  📌 0
LMs’ length preference vs. perplexity on the validation set. We see that models whose training-set manipulation reduces exposure to short-first orderings are the ones with a weaker short-first preference.

Learned length preference changes with the input manipulation. That is, the more “long-first” we make the input, the weaker the short-first preference. We think this shows the dative preferences in models come not just from datives but from general properties of English.

31.03.2025 13:30 · 👍 6  🔁 0  💬 1  📌 0

For example, “The primates use tools to eat the green coconuts from the shop” becomes:
- Short-first: [tools] use [the primates] [[to] eat [[the] [green] coconuts [from the shop]]]
- Long-first: [[[from the shop] [the] coconuts [green]] eat [to]] use [the primates] [tools]

31.03.2025 13:30 · 👍 2  🔁 0  💬 1  📌 0
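
To make that reordering concrete, here is a minimal sketch in Python. It is illustrative only: the nltk-based helper and the bracketing of the parse are assumptions, and the paper's actual procedure may treat heads differently.

    from nltk import Tree

    def reorder(tree, longest_first=False):
        # Leaves are plain strings (single words); return them unchanged.
        if isinstance(tree, str):
            return tree
        # Recursively reorder each child, then sort siblings by word count.
        children = [reorder(child, longest_first) for child in tree]
        children.sort(
            key=lambda c: 1 if isinstance(c, str) else len(c.leaves()),
            reverse=longest_first,
        )
        return Tree(tree.label(), children)

    # Assumed constituency parse of the example sentence.
    parse = Tree.fromstring(
        "(S (NP (DT the) (NNS primates))"
        " (VP (VBP use) (NP (NNS tools))"
        " (S (VP (TO to) (VP (VB eat)"
        " (NP (DT the) (JJ green) (NNS coconuts)"
        " (PP (IN from) (NP (DT the) (NN shop)))))))))"
    )
    print(" ".join(reorder(parse).leaves()))                      # short-first
    print(" ".join(reorder(parse, longest_first=True).leaves()))  # long-first

Note that the short-first output of this toy version leaves the English sentence almost untouched, which is exactly the point: English word order is already largely short-first.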

We think it plausibly comes not from the datives alone but from general properties of English (which is “short-first”). To test that, we manipulate the global structure of the input, creating a corpus where every sentence is short-first and one where they’re all long-first.

31.03.2025 13:30 · 👍 3  🔁 0  💬 1  📌 0
DO preference vs. length difference when we remove all datives (left) and all cases with two post-verbal arguments (right). The Pearson correlation r is now -0.24 for the no-datives condition and -0.22 for the no-two-post-verbal-arguments condition.

Now what if we get rid of datives, and further of all constructions which have two post-verbal arguments? We see the length preference is back again. Yes, it’s smaller (direct evidence matters), but why is it there? Where does it come from, if not the datives?

31.03.2025 13:30 · 👍 3  🔁 0  💬 1  📌 0
DO preference vs. length difference for the balanced and swapped-datives manipulations. Left: balanced, Pearson correlation r = -0.33; right: swapped-datives, Pearson correlation r = -0.03.

What if we modify the corpus such that for every DO there is a PO (balance direct evidence)? The preferences are still present! But what if now we SWAP every dative in the input so that every DO is now a PO, every PO a DO? The preference essentially disappears (but not flipped!)

31.03.2025 13:30 · 👍 2  🔁 0  💬 1  📌 0
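
As a toy illustration of the swap (the templates and argument splits below are assumed; the actual manipulation rewrites parsed corpus sentences):

    def do_to_po(verb, recipient, theme):
        # double object "V recipient theme" -> prepositional "V theme to recipient"
        return f"{verb} {theme} to {recipient}"

    def po_to_do(verb, theme, recipient):
        # prepositional "V theme to recipient" -> double object "V recipient theme"
        return f"{verb} {recipient} {theme}"

    print(do_to_po("gave", "him", "the book"))   # gave the book to him
    print(po_to_do("gave", "the book", "him"))   # gave him the book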
Left: plot showing DO preference vs. human judgments, Pearson’s r = 0.5; right: plot showing DO preference as a function of (log) length difference between the recipient and the theme, Pearson’s r = -0.43, where the negative sign indicates short-first is preferred.

To test this, we train small LMs on manipulated datasets where we vary direct (datives) and indirect (non-datives) evidence and test the change in their preferences. First, we see that we get human-like preferences from a model trained on our default BabyLM corpus.

31.03.2025 13:30 · 👍 2  🔁 0  💬 1  📌 0
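
A rough sketch of how such a preference can be scored as a log-probability difference over a DO/PO minimal pair (gpt2 here is a stand-in for the small BabyLM-trained models; the paper's exact scoring may differ):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")   # stand-in model
    lm = AutoModelForCausalLM.from_pretrained("gpt2")
    lm.eval()

    def logprob(sentence):
        # Sum of per-token log-probabilities under the causal LM.
        ids = tok(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = lm(ids).logits
        logps = torch.log_softmax(logits[0, :-1], dim=-1)
        return logps.gather(1, ids[0, 1:, None]).sum().item()

    do = "She gave him the book that everyone was excited to read."
    po = "She gave the book that everyone was excited to read to him."
    print("DO preference:", logprob(do) - logprob(po))  # > 0 means DO preferred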

The English dative preferences come from more general features of the language: short constituents tend to appear earlier all over, not just in the dative. We hypothesize LMs rely not only on direct evidence from datives but also on general word order preferences (e.g. “easy first”) from non-datives.

31.03.2025 13:30 · 👍 2  🔁 0  💬 1  📌 0
Examples of double-object (DO) and prepositional-object (PO) datives with short-first and long-first word orders:
DO (long-first): She gave the boy who signed up for class and was excited it.
PO (short-first): She gave it to the boy who signed up for class and was excited.
DO (short-first): She gave him the book that everyone was excited to read.
PO (long-first): She gave the book that everyone was excited to read to him.

LMs learn argument-based preferences for dative constructions (preferring recipient first when it’s shorter), consistent with humans. Is this from memorizing preferences in training? New paper w/ @kanishka.bsky.social, @weissweiler.bsky.social, @kmahowald.bsky.social

arxiv.org/abs/2503.20850

31.03.2025 13:30 · 👍 18  🔁 8  💬 1  📌 7
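
The r values in the plots above are plain Pearson correlations between per-item DO preference and the (log) recipient-theme length difference. A sketch on made-up data (the -0.4 slope and the noise below are invented for illustration):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    # Invented data: log(length(recipient) / length(theme)) per test item.
    log_len_diff = rng.normal(size=200)
    # Invented preference scores with a short-first bias baked in.
    do_pref = -0.4 * log_len_diff + rng.normal(scale=1.0, size=200)

    r, p = pearsonr(log_len_diff, do_pref)
    print(f"Pearson r = {r:.2f}")  # negative r indicates a short-first preference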

