I will be giving a short talk on this work at the COLM Interplay workshop on Friday (also to appear at EMNLP)!
Will be in Montreal all week and excited to chat about LM interpretability + its interaction with human cognition and ling theory.
@kanishka.bsky.social
Assistant Professor of Linguistics and Harrington Fellow at UT Austin. Works on computational understanding of language, concepts, and generalization. Website: https://kanishka.website
On my way to #COLM2025!
Check out jessyli.com/colm2025
QUDsim: Discourse templates in LLM stories arxiv.org/abs/2504.09373
EvalAgent: retrieval-based eval targeting implicit criteria arxiv.org/abs/2504.15219
RoboInstruct: code generation for robotics with simulators arxiv.org/abs/2405.20179
I'm at #COLM2025 from Wed with:
@siyuansong.bsky.social Tue am introspection: arxiv.org/abs/2503.07513
@qyao.bsky.social Wed am controlled rearing: arxiv.org/abs/2503.20850
@sashaboguraev.bsky.social INTERPLAY ling interp: arxiv.org/abs/2505.16002
I'll talk at INTERPLAY too. Come say hi!
Heading to #COLM2025 to present my first paper w/ @jennhu.bsky.social @kmahowald.bsky.social!
When: Tuesday, 11 AM–1 PM
Where: Poster #75
Happy to chat about my work and topics in computational linguistics & cogsci!
Also, I'm on the PhD application journey this cycle!
Paper info:
Also, I'm on the lookout for my first PhD student! If you'd like to be the one, please reach out to me (DMs/email open) and we can chat!!
@jessyjli.bsky.social and @kmahowald.bsky.social are also hiring students, and we're all eager to co-advise!
I'll also be moderating a roundtable at the INTERPLAY workshop on Oct 10. Excited to discuss behavior, representations, and a third secret thing with folks!
06.10.2025 15:22
2. @sriramp05.bsky.social on LMs and Suspicious Coincidences at the PragLM workshop poster session on Friday (work w/ me, @kmahowald.bsky.social, and @eunsol.bsky.social)
Paper: arxiv.org/abs/2504.09387
Traveling to my first @colmweb.org!
Not presenting anything, but here are two posters you should visit:
1. @qyao.bsky.social on Controlled rearing for direct and indirect evidence for datives (w/ me, @weissweiler.bsky.social and @kmahowald.bsky.social), W morning
Paper: arxiv.org/abs/2503.20850
All of us (@kanishka.bsky.social @kmahowald.bsky.social and me) are looking for PhD students this cycle! If computational linguistics/NLP is your passion, join us at UT Austin!
For my areas see jessyli.com
We'll all be attending #COLM2025 -- come say hi if you are interested in working with us!!
Separate tweet incoming for COLM papers!
Understanding how internal representations drive conceptual behavior in LMs
E.g., arxiv.org/abs/2410.22590
Role of language (vs. other modalities) in learning meaning-sensitivities
E.g., arxiv.org/abs/2507.13328
Using neural networks to generate experimental hypotheses about language acquisition in scenarios where hypothesis-spaces are intractable
E.g., arxiv.org/abs/2408.05086
"Controlled Rearing" of LMs to understand the role of input in acquiring linguistic generalization
E.g., arxiv.org/abs/2403.19827, arxiv.org/abs/2503.20850
Picture of the UT Tower with "UT Austin Computational Linguistics" written in bigger font, and "Humans processing computers processing humans processing language" in smaller font
The compling group at UT Austin (sites.utexas.edu/compling/) is looking for PhD students!
Come join me, @kmahowald.bsky.social, and @jessyjli.bsky.social as we tackle interesting research questions at the intersection of ling, cogsci, and ai!
Some topics I am particularly interested in:
Sigmoid function. Non-linearities in neural networks allow them to behave in both distributed and near-symbolic fashions.
New paper! I argue that LLMs represent a synthesis between distributed and symbolic approaches to language, because, when exposed to language, they develop highly symbolic representations and processing mechanisms in addition to distributed ones.
arxiv.org/abs/2502.11856
Hehe but really: unifying all of them + easy access = yes plssss
27.09.2025 15:01
Friendship ended with minicons; glazing is my new fav package!
27.09.2025 14:47
God's work
27.09.2025 14:46
I've found it kind of a pain to work with resources like VerbNet, FrameNet, PropBank (frame files), and WordNet using existing tools. Maybe you have too. Here's a little package that handles data management, loading, and cross-referencing via either a CLI or a Python API.
27.09.2025 13:51
Would you say it's a dead area right now? (Ignoring the podcasts)
26.09.2025 03:14
Love to start the day by mistakenly stumbling onto hate speech against South Asians, ty internet
21.09.2025 15:09
Accepted at #NeurIPS2025! So proud of Yulu and Dheeraj for leading this! Be on the lookout for more "nuanced yes/no" work from them in the future
18.09.2025 16:12
Abstract deadline changed to *December 1, 2025*
07.09.2025 21:48
@futrell.bsky.social and I have a BBS target article with an optimistic take on LLMs + linguistics. Commentary proposals (just need a few hundred words) are OPEN until Oct 8. If we are too optimistic for you (or not optimistic enough!) or you have anything to say: www.cambridge.org/core/journal...
15.09.2025 15:46
Happy and proud to see @rjantonello.bsky.social's work awarded by SNL!
13.09.2025 21:47
Density plot with X axis being probability of text being synthetic from an AI detector model. Plots show that GPT4.1 outputs are assigned high probability of being AI text, but GPT5 outputs are assigned low probability of being AI text.
Exhibit N on how synthetic text/AI detectors just don't work reliably. Generating some (long) sentences from GPT4.1 and GPT5 with the same prompt, the top open-source model on the RAID benchmark classifies most GPT4.1 outputs as synthetic and most GPT5 outputs as not synthetic.
10.09.2025 20:05
Title: The Cats of Cogsci. Two cats, Coco and Loki, with Northwestern Cognitive Science logo in the background. Coco is sitting on some books and Loki is holding an apple. Both are wearing glasses b/c they are academics.
Loki the cat has his paw on a laptop; text: "Remember to add Cog Sci 110 to your shopping cart now in Caesar so that you're ready to enroll come September 12th!"
Super happy with Cogsci program assistant Chris Kent's work for our college Instagram feed. Glad I could get our Loki featured to advertise my class.
06.09.2025 20:15
Excited to speak alongside such an illustrious set of speakers!
24.08.2025 14:49
Lovely write-up by @ksetiya.bsky.social on @rkubala.bsky.social's piece on the art of crosswords! Come for Robbie, stay for Sondheim crossword quotes. ksetiya.substack.com/p/compositio...