Kyle Mahowald

@kmahowald.bsky.social

UT Austin linguist http://mahowak.github.io/. computational linguistics, cognition, psycholinguistics, NLP, crosswords. occasionally hockey?

2,950 Followers  |  517 Following  |  83 Posts  |  Joined: 12.07.2023

Latest posts by kmahowald.bsky.social on Bluesky

Right, “good way to solve problems” as in object permanence, color properties, etc. that could be said to be useful in general for any agent who has goals to achieve in an environment. Not just useful for humans.

11.10.2025 20:10 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Imo work in Bayesian cognition, rational analysis, etc. suggests that at least some concepts humans have exist because they are good ways to solve those problems in general. That’s maybe a point for “same concepts”. But I guess if the resources and constraints are very different, all bets are off.…

11.10.2025 18:51 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Curious what people think: if (when?) ‘superhuman AI’ arrives, will the building blocks of its performance be human-recognizable concepts that have been applied and combined in novel ways to achieve ‘superhuman’ performance? Or will it be completely uninterpretable?

11.10.2025 18:17 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 3    πŸ“Œ 0

Come join us at the city of ACL!

Very happy to chat about my experience as a new faculty member at UT Ling. Come find me at #COLM2025 if you’re interested!!

07.10.2025 23:28 β€” πŸ‘ 9    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

We’re hiring faculty as well! Happy to talk about it at COLM!

08.10.2025 01:17 β€” πŸ‘ 9    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

Thanks, didn't know the history of his later life. Deleted and re-posted to omit.

07.10.2025 20:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Austin is a lovely city, and the department is wonderful and supportive. I've had a great experience here.

As you can see in the ad, the scope of what we are looking for is broad.

Happy to discuss this position or Ph.D. positions at #COLM2025 or offline!

07.10.2025 20:53 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
UT Austin Computational Linguistics Research Group – Humans processing computers processing humans processing language

UT Austin Linguistics is hiring in computational linguistics!

Asst or Assoc.

We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)

faculty.utexas.edu/career/170793

🀘

07.10.2025 20:53 β€” πŸ‘ 34    πŸ” 22    πŸ’¬ 1    πŸ“Œ 4

Austin is a lovely city, and the department is wonderful and supportive. I've had a great experience.

As you can see in the ad, the scope of what we construe as computational linguistics is broad.

Happy to chat at #COLM2025 or offline about this faculty position and/or Ph.D. positions!

07.10.2025 20:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I will be giving a short talk on this work at the COLM Interplay workshop on Friday (also to appear at EMNLP)!

Will be in Montreal all week and excited to chat about LM interpretability + its interaction with human cognition and ling theory.

06.10.2025 12:05 β€” πŸ‘ 8    πŸ” 5    πŸ’¬ 0    πŸ“Œ 0
On Language Models’ Sensitivity to Suspicious Coincidences

And also @sriramp05.bsky.social at the PragLM Workshop! arxiv.org/html/2504.09...

06.10.2025 16:21 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Heading to #COLM2025 to present my first paper w/ @jennhu.bsky.social @kmahowald.bsky.social !

When: Tuesday, 11 AM – 1 PM
Where: Poster #75

Happy to chat about my work and topics in computational linguistics & cogsci!

Also, I'm on the PhD application journey this cycle!

Paper info πŸ‘‡:

06.10.2025 16:05 β€” πŸ‘ 7    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
Language Models Fail to Introspect About Their Knowledge of Language There has been recent interest in whether large language models (LLMs) can introspect about their own internal states. Such abilities would make LLMs more interpretable, and also validate the use of s...

I’m at #COLM2025 from Wed with:

@siyuansong.bsky.social Tue am introspection arxiv.org/abs/2503.07513

@qyao.bsky.social Wed am controlled rearing: arxiv.org/abs/2503.20850

@sashaboguraev.bsky.social INTERPLAY ling interp: arxiv.org/abs/2505.16002

I’ll talk at INTERPLAY too. Come say hi!

06.10.2025 15:57 β€” πŸ‘ 20    πŸ” 6    πŸ’¬ 1    πŸ“Œ 0
Both Direct and Indirect Evidence Contribute to Dative Alternation Preferences in Language Models Language models (LMs) tend to show human-like preferences on a number of syntactic phenomena, but the extent to which these are attributable to direct exposure to the phenomena or more general propert...

Traveling to my first @colmweb.org🍁

Not presenting anything but here are two posters you should visit:

1. @qyao.bsky.social on Controlled rearing for direct and indirect evidence for datives (w/ me, @weissweiler.bsky.social and @kmahowald.bsky.social), W morning

Paper: arxiv.org/abs/2503.20850

06.10.2025 15:22 β€” πŸ‘ 13    πŸ” 5    πŸ’¬ 1    πŸ“Œ 0

Do you want to use AI models to understand human language?

Are you fascinated by whether linguistic representations are lurking in LLMs?

Are you in need of a richer model of spatial words across languages?

Consider UT Austin for all your Computational Linguistics Ph.D. needs!

mahowak.github.io

30.09.2025 17:26 β€” πŸ‘ 6    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
[Images: infant data from experiment 1; conceptual schema for different habituation models; title page; results from experiment 2 with adults]

Ever wonder how habituation works? Here's our attempt to understand:

A stimulus-computable rational model of visual habituation in infants and adults doi.org/10.7554/eLif...

This is the thesis of two wonderful students: @anjiecao.bsky.social @galraz.bsky.social, w/ @rebeccasaxe.bsky.social

29.09.2025 23:38 β€” πŸ‘ 69    πŸ” 27    πŸ’¬ 1    πŸ“Œ 2

At UT we just got to hear about this in a zoom talk from @sfeucht.bsky.social. I echo the endorsement:
cool ideas about representations in llms with linguistic relevance!

27.09.2025 23:00 β€” πŸ‘ 14    πŸ” 0    πŸ’¬ 1    πŸ“Œ 1

Can AI aid scientists amidst their own workflows, when they do not know step-by-step workflows and may not know, in advance, the kinds of scientific utility a visualization would bring?

Check out @sebajoe.bsky.social’s feature on ✨AstroVisBench:

25.09.2025 20:52 β€” πŸ‘ 8    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
Simon Goldstein & Harvey Lederman, What Does ChatGPT Want? An Interpretationist Guide - PhilPapers This paper investigates LLMs from the perspective of interpretationism, a theory of belief and desire in the philosophy of mind. We argue for three conclusions. First, the right object of study ...

Simon Goldstein and I have a new paper, β€œWhat does ChatGPT want? An interpretationist guide”.

The paper argues for three main claims.

philpapers.org/rec/GOLWDC-2 1/7

24.09.2025 12:37 β€” πŸ‘ 24    πŸ” 6    πŸ’¬ 2    πŸ“Œ 5
How Linguistics Learned to Stop Worrying and Love the Language Models Language models can produce fluent, grammatical text. Nonetheless, some maintain that language models don't really learn language and also that, even if they did, that would not be informative for the...

The arxiv version is here! arxiv.org/abs/2501.17047

16.09.2025 17:18 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
How Linguistics Learned to Stop Worrying and Love the Language Models How Linguistics Learned to Stop Worrying and Love the Language Models

πŸ“£@futrell.bsky.social and I have a BBS target article with an optimistic take on LLMs + linguistics. Commentary proposals (just need a few hundred words) are OPEN until Oct 8. If we are too optimistic for you (or not optimistic enough!) or you have anything to say: www.cambridge.org/core/journal...

15.09.2025 15:46 β€” πŸ‘ 50    πŸ” 10    πŸ’¬ 4    πŸ“Œ 3

Provocative piece and more interesting than most that have been written about this topic. I greatly encourage people to weigh in!

My own perspective is that while there is utility to LMs, the scientific insights are greatly overstated.

15.09.2025 16:02 β€” πŸ‘ 5    πŸ” 1    πŸ’¬ 1    πŸ“Œ 1
Prophetic perfect tense - Wikipedia

Yes, after some discussion, we decided to stick with the past tense like in the movie. Richard says it's an example of the prophetic perfect tense en.wikipedia.org/wiki/Prophet....

15.09.2025 16:17 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

BBS also values publishing commentaries not just from the most relevant subarea of the article but from a wide variety of areas. So also consider submitting if you're further afield in some way!

15.09.2025 15:46 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The accepted manuscript is here: www.cambridge.org/core/service...

Have already heard plenty of spirited and useful disagreement on the piece. If that's you, especially consider submitting something! (Or if you want to say how much you agree with us, that's of course welcome too.)

15.09.2025 15:46 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Congrats to Leonie on the new gig! Surely though she will miss our Texas summers.

15.09.2025 15:29 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Can AI introspect? Surprisingly tricky to define what that means! And also interesting to test. New work from @siyuansong.bsky.social, @harveylederman.bsky.social, @jennhu.bsky.social and me on introspection in LLMs. See paper and thread for a definition and some experiments!

26.08.2025 17:39 β€” πŸ‘ 11    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Composition with Grid A few months into the pandemic, my wife and I adopted a new pastime: we would complete the New York Times crossword puzzle every day.

Lovely write up by @ksetiya.bsky.social on @rkubala.bsky.social’s
piece on the art of crosswords! Come for Robbie, stay for Sondheim crossword quotes. ksetiya.substack.com/p/compositio...

23.08.2025 18:45 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Art, Understanding, and Mystery Apparent orthodoxy holds that artistic understanding is finally valuable. Artistic understandingβ€”grasping, as such, the features of an artwork that make it aesthetically or artistically good or badβ€”is...

Happy to share the published version of "Art, Understanding, and Mystery"! I often hear some version of the thought that it's bad to understand artworks; this paper attempts to make that claim precise and show one way to defend artistic understanding! journals.publishing.umich.edu/ergo/article...

06.08.2025 21:12 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0