Art, Understanding, and Mystery
Apparent orthodoxy holds that artistic understanding is finally valuable. Artistic understanding (grasping, as such, the features of an artwork that make it aesthetically or artistically good or bad) is...
Happy to share the published version of "Art, Understanding, and Mystery"! I often hear some version of the thought that it's bad to understand artworks; this paper attempts to make that claim precise and show one way to defend artistic understanding! journals.publishing.umich.edu/ergo/article...
06.08.2025 21:12 · 3 likes · 2 reposts · 0 replies · 0 quotes
Is there any work on conditionals like '[Larry David is making a new show with the Obamas.] If Susie Essman is involved, hooray!'? It seems really hard to make sense of it on most of the approaches to conditionals/expressives that I'm aware of, but I'm curious if people know stuff or have thoughts.
17.07.2025 18:12 · 6 likes · 2 reposts · 6 replies · 0 quotes
Check it out if you're at SCiL! Fun tool for easily getting semantic projections from word embeddings into interpretable space.
17.07.2025 18:16 · 14 likes · 0 reposts · 0 replies · 0 quotes
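For intuition, here is a minimal sketch of semantic projection in general (not the tool's actual API; the pole words and random vectors are illustrative stand-ins for real embeddings): build an interpretable axis from two sets of seed words, then take each target word's scalar projection onto it.

```python
import numpy as np

def semantic_projection(emb, targets, pole_a, pole_b):
    """Score targets along an interpretable axis (e.g., size) defined as
    the difference between the mean vectors of two seed-word sets."""
    axis = (np.mean([emb[w] for w in pole_b], axis=0)
            - np.mean([emb[w] for w in pole_a], axis=0))
    axis /= np.linalg.norm(axis)              # unit-length axis
    return {w: float(emb[w] @ axis) for w in targets}

# Toy demo: random vectors stand in for trained word embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50)
       for w in ["mouse", "elephant", "tiny", "small", "large", "huge"]}
scores = semantic_projection(emb, ["mouse", "elephant"],
                             pole_a=["tiny", "small"], pole_b=["large", "huge"])
print(scores)  # higher score = closer to the "large" pole
```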
Experimentology cover: title and curves for distributions.
Experimentology is out today!!! A group of us wrote a free online textbook for experimental methods, available at experimentology.io - the idea was to integrate open science into all aspects of the experimental workflow from planning to design, analysis, and writing.
01.07.2025 18:25 · 525 likes · 226 reposts · 10 replies · 15 quotes
Some attitudes we usually do not have
I present a new attitude puzzle involving disjunction. Specifically, though it can sound strange to ascribe the belief that $\phi$ or $\psi$ when $\ulcorner \phi \urcorner$ and $\ulcorner \psi \urcorner$...
My paper just came out in PPR today, here: onlinelibrary.wiley.com/doi/10.1111/.... It's about interesting relationships that our attitudes bear to the world. I argue that belief is very different from other attitudes, and this difference follows from its relationship to the truth of token contents.
23.06.2025 15:15 · 35 likes · 8 reposts · 2 replies · 1 quote
So excited to welcome @kanishka.bsky.social (back) to UT!
02.06.2025 15:48 · 10 likes · 0 reposts · 1 reply · 0 quotes
Picture of the UT Tower taken by me on my first day at UT as a postdoc in 2023!
News
I will return to UT Austin as an Assistant Professor of Linguistics this fall, and join its vibrant community of Computational Linguists, NLPers, and Cognitive Scientists!
Excited to develop ideas about linguistic and conceptual generalization (recruitment details soon!)
02.06.2025 13:18 · 65 likes · 7 reposts · 12 replies · 2 quotes
No. We are opting for simplicity. It doesn't help to pile on targeted interventions, even if each makes a lot of sense on its own. The key to our process is the ACs, so we just want to help them.
31.05.2025 20:23 · 1 like · 1 repost · 1 reply · 0 quotes
Causal Interventions Reveal Shared Structure Across English Filler-Gap Constructions
Large Language Models (LLMs) have emerged as powerful sources of evidence for linguists seeking to develop theories of syntax. In this paper, we argue that causal interpretability methods, applied to ...
We believe this work shows how mechanistic analyses can provide novel insights into syntactic structures, making good on the promise that studying LLMs can help us better understand linguistics by helping us develop linguistically interesting hypotheses!
Paper: arxiv.org/abs/2505.16002
27.05.2025 14:32 · 6 likes · 1 repost · 0 replies · 0 quotes
Do the causal mechanisms underlying filler-gap processing in models transfer across constructions? We find yes ... but with some wrinkles. Check out the paper_t that Sasha wrote ___t!
27.05.2025 14:59 · 7 likes · 0 reposts · 0 replies · 0 quotes
A key hypothesis in the history of linguistics is that different constructions share underlying structure. We take advantage of recent advances in mechanistic interpretability to test this hypothesis in Language Models.
New work with @kmahowald.bsky.social and @cgpotts.bsky.social!
🧵!
27.05.2025 14:32 · 20 likes · 5 reposts · 1 reply · 1 quote
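For readers who want the flavor of a causal intervention on model internals, here is a hedged, generic activation-patching sketch (the paper's actual method is in the arXiv link; GPT-2, the layer, the position, and the sentences below are arbitrary illustrative choices): cache a hidden state from a source run and splice it into a base run.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

base = tok("The paper that Sasha wrote", return_tensors="pt")
source = tok("The paper that Sasha cited", return_tensors="pt")
LAYER, POS = 6, -1  # illustrative: which block and token position to patch

cache = {}
def save_hook(module, inputs, output):
    # Stash the source run's hidden state at the chosen position.
    cache["h"] = output[0][:, POS, :].detach().clone()

def patch_hook(module, inputs, output):
    # Overwrite the base run's hidden state with the cached one.
    h = output[0].clone()
    h[:, POS, :] = cache["h"]
    return (h,) + output[1:]

with torch.no_grad():
    handle = model.transformer.h[LAYER].register_forward_hook(save_hook)
    model(**source)              # run 1: cache the source activation
    handle.remove()
    handle = model.transformer.h[LAYER].register_forward_hook(patch_hook)
    patched = model(**base)      # run 2: base input, source activation
    handle.remove()
    clean = model(**base)        # run 3: unpatched baseline

# How much the intervention moved the next-token predictions:
print((patched.logits[0, -1] - clean.logits[0, -1]).abs().max())
```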
Come by our panel at APS to share your thoughts, and ask us all the hard stuff!
24.05.2025 13:10 · 5 likes · 1 repost · 0 replies · 0 quotes
2025 Preliminary Program
This preliminary program does not include all sessions and will be updated as additional program information is received. Please check back for the latest program updates. You may also view program in...
At APS in DC, thrilled to be moderating a panel on statistics and abstraction in cognition with 3 stars: Jay McClelland, Tom Griffiths @cocoscilab.bsky.social, and @adinawilliams.bsky.social at 1pm tmrw. Bayes. Neural nets. Linguistics...we'll have it all! www.psychologicalscience.org/conventions/...
23.05.2025 23:04 · 10 likes · 3 reposts · 0 replies · 1 quote
Delighted to have Elias joining the UT NLP community!
05.05.2025 21:09 · 11 likes · 0 reposts · 1 reply · 0 quotes
What do "Maui, Sicily, Thailand" have in common? Ok, "places". But I say "White Lotus locales": it would be quite a coincidence if I hit on all 3 by chance! We ask how LMs do at this kind of inference.
Also fun to do a study on "the number game", the first Bayesian cogsci model I learned in grad school!
22.04.2025 01:26 · 5 likes · 0 reposts · 0 replies · 0 quotes
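The "coincidence" intuition is the size principle from Bayesian concept learning: under strong sampling, a hypothesis h assigns each positive example likelihood 1/|h|, so small hypotheses that fit the data win quickly. A toy number-game sketch (hypothesis space, prior, and data are illustrative):

```python
# Toy "number game": infer which concept generated the observed examples.
hypotheses = {
    "even":            {n for n in range(1, 101) if n % 2 == 0},
    "odd":             {n for n in range(1, 101) if n % 2 == 1},
    "powers of 2":     {2**k for k in range(1, 7)},
    "multiples of 10": set(range(10, 101, 10)),
    "all numbers":     set(range(1, 101)),
}
data = [2, 8, 64]  # observed positive examples

posterior = {}
for name, h in hypotheses.items():
    # Size principle: likelihood (1/|h|)^n if the data fit, else 0
    # (uniform prior over hypotheses assumed for simplicity).
    posterior[name] = (1 / len(h)) ** len(data) if all(x in h for x in data) else 0.0
Z = sum(posterior.values())
posterior = {name: p / Z for name, p in posterior.items()}
print(posterior)  # "powers of 2" dominates: the tightest concept that fits
```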
I might be able to hire a postdoc for this fall in computational linguistics at UT Austin. Topics in the general LLM + cognitive space (particularly reasoning, chain of thought, LLMs + code) and LLM + linguistic space. If this could be of interest, feel free to get in touch!
21.04.2025 15:56 · 59 likes · 31 reposts · 0 replies · 1 quote
Writing my first post here to announce that I've accepted an assistant professor job at TTIC! I'll be starting in Fall 2026, and recruiting students this upcoming cycle.
Until then, I'll be wrapping up the PhD at Berkeley, and this summer I'll join NYU as a CDS Faculty Fellow.
15.04.2025 03:34 · 41 likes · 2 reposts · 3 replies · 2 quotes
APA PsycNet
PINEAPPLE, LIGHT, HAPPY, AVALANCHE, BURDEN
Some of these words are consistently remembered better than others. Why is that?
In our paper, just published in J. Exp. Psychol., we provide a simple Bayesian account and show that it explains >80% of variance in word memorability: tinyurl.com/yf3md5aj
10.04.2025 14:38 · 40 likes · 15 reposts · 1 reply · 0 quotes
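The paper's actual account is behind the link; as a hedged illustration of how a Bayesian ideal-observer view can make distinctive words more memorable, treat recognition as the odds that a probe matches a stored trace versus merely resembling the background (the Gaussian traces and parameters below are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stored traces: a dense cluster of mutually similar words plus one outlier.
cluster = rng.normal(loc=0.0, scale=0.3, size=(500, 20))
outlier = rng.normal(loc=4.0, scale=0.3, size=(1, 20))
memory = np.vstack([cluster, outlier])

def recognition_odds(probe, noise=1.0):
    # Gaussian similarity of the probe to every stored trace.
    sims = np.exp(-np.sum((memory - probe) ** 2, axis=1) / (2 * noise**2))
    # Evidence "I studied this" vs. background familiarity with everything.
    return sims.max() / sims.mean()

print(recognition_odds(cluster[0]))   # typical word: many near neighbors, low odds
print(recognition_odds(outlier[0]))   # distinctive word: stands out, high odds
```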
@kmahowald.bsky.social with a beautiful high-tech illustration 🎨 while describing @qyao.bsky.social's latest paper at the HSP online seminar series!
Paper: arxiv.org/abs/2503.20850
04.04.2025 19:34 · 4 likes · 1 repost · 1 reply · 0 quotes
Will be talking about this work (and more) at 2 ET/11 PT in the HSP talk series on Computational Language Models and Psycholinguistics! www.hspsociety.org
04.04.2025 16:01 · 4 likes · 0 reposts · 0 replies · 0 quotes
If you give a mouse a cookie....does an LM learn something different than if you "give a cookie to a mouse"? Or if you don't give anyone anything? Or if you do other weird stuff to the input? New paper on manipulating ling input and training small LMs to study direct vs indirect evidence.
31.03.2025 13:44 · 22 likes · 4 reposts · 1 reply · 0 quotes
Examples from direct object (DO) and prepositional object (PO) datives with short-first and long-first word orders:
DO (long-first): She gave the boy who signed up for class and was excited it.
PO (short-first): She gave it to the boy who signed up for class and was excited.
DO (short-first): She gave him the book that everyone was excited to read.
PO (long-first): She gave the book that everyone was excited to read to him.
LMs learn argument-based preferences for dative constructions (preferring recipient first when it's shorter), consistent with humans. Is this from memorizing preferences in training? New paper w/ @kanishka.bsky.social, @weissweiler.bsky.social, @kmahowald.bsky.social
arxiv.org/abs/2503.20850
31.03.2025 13:30 · 18 likes · 8 reposts · 1 reply · 6 quotes
Cool mix of sources for where dative alternation preferences in models come from: an effect of a global preference for "short before long" in English (which we test by scrambling sentences using dep parses). I'm excited about this paradigm, and 1st-year UT ling student @qyao.bsky.social did amazing work on this!
31.03.2025 13:44 · 3 likes · 0 reposts · 0 replies · 0 quotes
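A hedged sketch of how such a preference can be read off a model (not the paper's pipeline; GPT-2 and the example sentences above are illustrative): compare summed token log-probabilities across the dative variants.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probability of each token given its prefix, summed over the sentence.
    logps = torch.log_softmax(logits[:, :-1], dim=-1)
    return logps.gather(2, ids[:, 1:, None]).sum().item()

variants = {
    "DO (long-first)": "She gave the boy who signed up for class and was excited it.",
    "PO (short-first)": "She gave it to the boy who signed up for class and was excited.",
}
for label, s in variants.items():
    # "Short before long" predicts the PO variant scores higher here.
    print(f"{label}: {sentence_logprob(s):.2f}")
```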
Check out our new work on introspection in LLMs!
TL;DR we find no evidence that LLMs have privileged access to their own knowledge.
Beyond the study of LLM introspection, our findings inform an ongoing debate in linguistics research: prompting (e.g., grammaticality judgments) =/= probability measurement!
12.03.2025 17:43 · 47 likes · 7 reposts · 0 replies · 1 quote
And keep your eye out for @siyuansong.bsky.social, a star UT undergrad who I suspect you will see more from!
12.03.2025 14:35 · 4 likes · 1 repost · 0 replies · 0 quotes
If I ask model A βis this sentence grammaticalβ and it says yes, does that mean model A is more likely to produce that sentence than model B? Check out our new paper on whether models introspect about knowledge of language.
12.03.2025 14:35 · 12 likes · 1 repost · 1 reply · 0 quotes
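A minimal sketch of the contrast being tested (model, minimal pair, and prompt wording are illustrative assumptions, not the paper's setup): measure the probabilities of a minimal pair directly, then ask the same model for a metalinguistic judgment and see whether the two answers line up.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def logprob(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logps = torch.log_softmax(model(ids).logits[:, :-1], dim=-1)
    return logps.gather(2, ids[:, 1:, None]).sum().item()

good = "The keys to the cabinet are here."
bad = "The keys to the cabinet is here."
print("probability measurement:", logprob(good) > logprob(bad))

# Metalinguistic route: does the model *say* the bad sentence is grammatical?
prompt = f'Is this sentence grammatical? "{bad}" Answer yes or no:'
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    next_logits = model(ids).logits[0, -1]
yes_id, no_id = tok(" yes").input_ids[0], tok(" no").input_ids[0]
print("prompted judgment:", "yes" if next_logits[yes_id] > next_logits[no_id] else "no")
```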
I'm excited to announce two papers of ours that will be presented this summer at @naaclmeeting.bsky.social and @iclr-conf.bsky.social!
🧵
11.03.2025 22:03 · 10 likes · 3 reposts · 1 reply · 0 quotes
Philosopher at University of Michigan. https://brian.weatherson.org/
Assistant Professor at @cs.ubc.ca and @vectorinstitute.ai working on Natural Language Processing. Book: https://lostinautomatictranslation.com/
Incoming PhD student with @LeonardLearnLab | Previously @ Wellesley, Princeton | she/her 🏳️‍🌈 | (Arielle pronounced R.E.L.)
Assistant Professor of Cognitive AI @UvA Amsterdam
language and vision in brains & machines
cognitive science 🤝 AI 🤝 cognitive neuroscience
michaheilbron.github.io
UC Davis computational psycholinguist (she)
Psycholinguist investigating bilingual language use and comprehension. Assoc Professor at University of Florida on leave. Current Program Director in the SBE Directorate at NSF. Views are my own.
Semanticist, linguist, Associate Professor at Boston University, nothing more, not obsessed with lindy hop or anything fun like that
Linguistics PhD student at UT Austin
Undergraduate at UT Austin majoring in CS & Math
linguist, experimental work on meaning (lexical semantics), language use, representation, learning, constructionist usage-based approach, Princeton U https://adele.scholar.princeton.edu/publications/topic
CS / Psych / Neuro Prof @ Stanford. Interested in NeuroAI and Bach. And Bonsai.
Musician / Writer
Dada Drummer Almanach https://dadadrummer.substack.com/
Damon & Naomi https://damonandnaomi.bandcamp.com/
and yeah I was in Galaxie 500
Husband, dad, veteran, writer, and proud Midwesterner. 19th US Secretary of Transportation and former Mayor of South Bend.
Research editor @nytmag, nonfiction co-chair @bkbf, etc.
Philosophy of AI and Mind, but with a historical bent. NYU.
My dog is better than your dog.
https://www.jacob-browning.com/
Philosopher interested in language, belief, and learning
University College London
http://danielrothschild.com
NYT columnist. Signal: carlzimmer.31
Newsletter: https://buttondown.com/carlzimmer/
Web: http://carlzimmer.com
[This account includes a tweet archive]