
Ada

@adadtur.bsky.social

she/her | McGillNLP & Mila | occasionally live on CKUT 90.3 FM :-) | adadtur.github.io

561 Followers  |  809 Following  |  11 Posts  |  Joined: 21.10.2024

Posts by Ada (@adadtur.bsky.social)

Takeaway: reasoning LLMs are getting better and better on math and code, i.e. deterministic reasoning tasks. But we should also evaluate them on open-ended, inherently uncertain everyday reasoning! (9/10)

04.03.2026 16:13 — 👍 4    🔁 1    💬 1    📌 0

🚨New Paper!🚨 How do reasoning LLMs handle inferences that have no deterministic answer? We find that they diverge from humans in some significant ways, and fail to reflect human uncertainty… 🧵(1/10)

04.03.2026 16:13 — 👍 27    🔁 11    💬 2    📌 1

Beyond our controlled setup, we also show that LatentLens works much better than baselines on the off-the-shelf Qwen2-VL-7B-Instruct.

11.02.2026 14:12 — 👍 3    🔁 1    💬 1    📌 0

Building a VLM can be surprisingly simple: keep both the LLM and the vision encoder frozen, and train only a small MLP that projects visual features into the LLM embedding space as prefix tokens. That's it 😮

But how and why does that work? How do visual tokens relate to language, i.e. do they have interpretable nearest neighbors (NNs)?
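The frozen-encoder-plus-projector recipe above can be sketched in a few lines. This is a minimal PyTorch sketch under assumed dimensions: the `VisualProjector` name, the 1024/4096 feature sizes, and the two-layer MLP shape are illustrative placeholders, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class VisualProjector(nn.Module):
    """Maps frozen vision-encoder patch features into the LLM's
    token-embedding space so they can be prepended as prefix tokens.
    Only this small MLP is trained; the LLM and encoder stay frozen."""

    def __init__(self, vision_dim: int, llm_dim: int, hidden_dim: int = 2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # (batch, num_patches, vision_dim) -> (batch, num_patches, llm_dim)
        return self.mlp(patch_features)

# Toy usage: 196 ViT patches of dim 1024 projected into a 4096-dim
# LLM embedding space, then concatenated in front of the text embeddings.
projector = VisualProjector(vision_dim=1024, llm_dim=4096)
patches = torch.randn(1, 196, 1024)      # stand-in for frozen encoder output
text_embeds = torch.randn(1, 32, 4096)   # stand-in for frozen LLM embeddings
prefix = projector(patches)
llm_input = torch.cat([prefix, text_embeds], dim=1)
print(llm_input.shape)                   # torch.Size([1, 228, 4096])
```

The resulting sequence of 196 visual prefix tokens plus 32 text tokens is what the frozen LLM actually consumes.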

11.02.2026 14:12 — 👍 4    🔁 1    💬 1    📌 0
GitHub - McGill-NLP/latentlens: Code and data for the paper "LatentLens: Revealing Highly Interpretable Visual Tokens in LLMs"

Paper: arxiv.org/abs/2602.00462
Code: github.com/McGill-NLP/...
Demo: tinyurl.com/ce57mn4v

Couldn't have imagined better collaborators to wrap up the PhD: Shravan Nayak @oscmansan.bsky.social @vaibhavadlakha.bsky.social
@delliott.bsky.social @sivareddyg.bsky.social @mariusmosbach.bsky.social

11.02.2026 14:12 — 👍 3    🔁 1    💬 1    📌 0

🚨New paper

Are visual tokens going into an LLM interpretable 🤔

Existing methods (e.g. logit lens) and assumptions would lead you to think "not much"...

We propose LatentLens and show that most visual tokens are interpretable across *all* layers 💡

Details 🧵
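The logit lens the post contrasts against decodes an intermediate hidden state by projecting it through the model's unembedding matrix and reading off the top vocabulary entries. Here is a toy sketch of that baseline only (not LatentLens): random matrices stand in for real model weights, `logit_lens` is a hypothetical helper name, and a real implementation would typically also apply the final layer norm first.

```python
import numpy as np

def logit_lens(hidden_state: np.ndarray, unembed: np.ndarray, k: int = 5) -> np.ndarray:
    """Project a hidden state through the unembedding matrix and
    return the indices of the top-k vocabulary entries."""
    logits = hidden_state @ unembed        # (vocab_size,)
    return np.argsort(logits)[::-1][:k]    # highest-logit token ids first

# Toy example: hidden dim 8, vocabulary of 10 "tokens".
rng = np.random.default_rng(0)
unembed = rng.normal(size=(8, 10))   # stand-in for the model's unembedding
h = rng.normal(size=(8,))            # stand-in for a layer-l hidden state
top = logit_lens(h, unembed)
```

Running this at every layer for a visual token gives the layer-by-layer "readout" that, per the post, often looks uninterpretable under this baseline.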

11.02.2026 14:12 — 👍 28    🔁 6    💬 1    📌 6

"Not only is the ratio of AI's resource rapacity to its productive utility indefensibly and irremediably skewed, AI-made material is itself a waste product: flimsy, shoddy, disposable, a single-use plastic of the mind."

>>

16.09.2025 20:02 — 👍 50    🔁 11    💬 2    📌 1

enshittification | noun | when a digital platform is made worse for users, in order to increase profits

03.09.2025 20:22 — 👍 29162    🔁 8575    💬 507    📌 649
Windows Notepad, the native simple text editor, now has formatting options and a Copilot button.

Look what they did to Notepad. Shut the fuck up. This is Notepad. You are not welcome here. Oh yeah "Let me use Copilot for Notepad". "I'm going to sign into my account for Notepad". What the fuck are you talking about. It's Notepad.

27.08.2025 01:41 — 👍 17434    🔁 4579    💬 446    📌 496

Our new paper in #PNAS (bit.ly/4fcWfma) presents a surprising finding: when words change meaning, older speakers rapidly adopt the new usage; inter-generational differences are often minor.

w/ Michelle Yang, @sivareddyg.bsky.social, @msonderegger.bsky.social and @dallascard.bsky.social 👇(1/12)

29.07.2025 12:05 — 👍 34    🔁 17    💬 3    📌 2

Thrilled to announce our new survey that explores the exciting possibilities and troubling risks of computational persuasion in the era of LLMs 🤖💬
📄 Arxiv: arxiv.org/pdf/2505.07775
💻 GitHub: github.com/beyzabozdag/...

13.05.2025 20:12 — 👍 8    🔁 5    💬 1    📌 0
02 | Gauthier Gidel: Bridging Theory and Deep Learning, Vibes at Mila, and the Effects of AI on Art · Behind the Research of AI · Episode

Started a new podcast with @tomvergara.bsky.social !

Behind the Research of AI:
We look behind the scenes, beyond the polished papers 🧐🧪

If this sounds fun, check out our first "official" episode with the awesome Gauthier Gidel from @mila-quebec.bsky.social:

open.spotify.com/episode/7oTc...

25.06.2025 15:54 — 👍 17    🔁 6    💬 1    📌 0

Zohran Mamdani, a 33-year-old state assemblyman, declared victory in New York City's Democratic mayoral primary after Andrew Cuomo conceded the race.

"Tonight we made history," Mamdani said, addressing his supporters. wapo.st/44yMVoI

25.06.2025 12:30 — 👍 4417    🔁 517    💬 141    📌 49

Mahmoud Khalil is finally home with his beautiful wife and newborn son.

Each one of the 104 days he spent detained was a grave injustice.

From the moment of his detention, @ccrjustice.org + @aclu.org engaged my office as we worked closely to help secure his release. They did remarkable work here.

21.06.2025 20:53 — 👍 21921    🔁 3038    💬 280    📌 62
A Shortcut-aware Video-QA Benchmark for Physical Understanding via Minimal Video Pairs. Existing benchmarks for assessing the spatio-temporal understanding and reasoning abilities of video language models are susceptible to score inflation due to the presence of shortcut solutions based ...

The facts:

We release MVPBench, around 55K videos (grouped as *minimal video pairs*) from diverse physical-understanding sources

Arxiv: arxiv.org/abs/2506.09987

Huggingface: huggingface.co/datasets/fac...

GitHub: github.com/facebookrese...

Leaderboard: huggingface.co/spaces/faceb...

13.06.2025 14:47 — 👍 3    🔁 1    💬 1    📌 0

Excited to share the results of my recent internship!

We ask 🤔
What subtle shortcuts are VideoLLMs taking on spatio-temporal questions?

And how can we instead curate shortcut-robust examples at large scale?

We release: MVPBench

Details 👇🔬

13.06.2025 14:47 — 👍 16    🔁 5    💬 1    📌 0

Congrats!

30.05.2025 18:20 — 👍 2    🔁 0    💬 0    📌 0

Today, I was denied access to seeing my constituent, Mr. Kilmar Abrego Garcia. If there is nothing to hide, cut the crap. Let his lawyer and I check on him.

26.05.2025 19:32 — 👍 38960    🔁 10741    💬 733    📌 353
Live updates: Trump administration revokes Harvard's ability to enroll foreign students. Get the latest news on President Donald Trump's return to the White House and the Republican-led Congress.

Breaking news: The Trump administration revoked Harvard's ability to enroll foreign students, saying it allowed anti-American agitators.

Existing foreign students must transfer or risk losing their legal status, DHS said.

22.05.2025 18:34 — 👍 216    🔁 131    💬 82    📌 128

when in albuquerque…

07.05.2025 06:00 — 👍 4    🔁 0    💬 0    📌 0

We won a Senior Area Chair Award at NAACL!! Many thanks again to my amazing coauthors Gaurav Kamath and @sivareddyg.bsky.social :-)

03.05.2025 15:50 — 👍 13    🔁 2    💬 0    📌 0

Check out Gaurav's video on their #NAACL paper and find @adadtur.bsky.social at the conference 👇

02.05.2025 01:41 — 👍 11    🔁 1    💬 0    📌 0

Great work from labmates on LLMs vs humans regarding linguistic preferences: you know when a sentence kind of feels off, e.g. "I met at the park the man". In what ways do LLMs follow these human intuitions?

01.05.2025 15:04 — 👍 7    🔁 3    💬 0    📌 0
Language Models Largely Exhibit Human-like Constituent Ordering Preferences. Though English sentences are typically inflexible vis-à-vis word order, constituents often show far more variability in ordering. One prominent theory presents the notion that constituent ordering is ...

Ada is an undergrad and will soon be looking for PhD positions. Gaurav is a PhD student looking for intellectually stimulating internships/visiting positions. They did most of the work without much of my help. Highly recommend them. Please reach out to them if you have any positions.

01.05.2025 15:14 — 👍 6    🔁 2    💬 1    📌 0

Incredibly proud of my students @adadtur.bsky.social and Gaurav Kamath for winning a SAC award at #NAACL2025 for their work on assessing how LLMs model constituent shifts.

01.05.2025 15:11 — 👍 17    🔁 5    💬 1    📌 0

Congratulations to Mila members @adadtur.bsky.social , Gaurav Kamath and @sivareddyg.bsky.social for their SAC award at NAACL! Check out Ada's talk in Session I: Oral/Poster 6. Paper: arxiv.org/abs/2502.05670

01.05.2025 14:30 — 👍 13    🔁 7    💬 0    📌 3

I filmed this yesterday on my way to Louisiana, where my constituent Rümeysa Öztürk is being wrongfully held by ICE. I'm there now demanding her release. More to come.

22.04.2025 21:36 — 👍 30276    🔁 6195    💬 933    📌 443
A circular diagram with a blue whale icon at the center, showing nine interconnected research areas around LLM reasoning as colored rectangular boxes arranged in a circular pattern: §3 Analysis of Reasoning Chains (central cloud), §4 Scaling of Thoughts (thought length and performance metrics), §5 Long Context Evaluation (information recall), §6 Faithfulness to Context (question-answering accuracy), §7 Safety Evaluation (harmful content generation and jailbreak resistance), §8 Language & Culture (moral reasoning and language effects), §9 Relation to Human Processing (comparing cognitive processes), §10 Visual Reasoning (ASCII generation capabilities), and §11 Following Token Budget (direct prompting techniques). Arrows connect the sections in a clockwise flow, suggesting an iterative research methodology.

Models like DeepSeek-R1 🐋 mark a fundamental shift in how LLMs approach complex problems. In our preprint on R1 Thoughtology, we study R1's reasoning chains across a variety of tasks, investigating its capabilities, limitations, and behaviour.
🔗: mcgill-nlp.github.io/thoughtology/

01.04.2025 20:06 — 👍 52    🔁 16    💬 1    📌 10

Not sure if this has been shared here yet, but this is video of Rumeysa Ozturk's arrest posted by WCVB. It's terrifying.

26.03.2025 16:43 — 👍 4598    🔁 2301    💬 49    📌 1168