A blizzard is raging through Montreal when your friend says "Looks like Florida out there!" Humans easily interpret irony, while LLMs struggle with it. We propose a rhetorical-strategy-aware probabilistic framework as a solution.
Paper: arxiv.org/abs/2506.09301 to appear @ #ACL2025 (Main)
26.06.2025 15:52
"Build the web for agents, not agents for the web"
This position paper argues that rather than forcing web agents to adapt to UIs designed for humans, we should develop a new interface optimized for web agents, which we call Agentic Web Interface (AWI).
arxiv.org/abs/2506.10953
14.06.2025 04:17
Excited to share the results of my recent internship!
We ask 🤔
What subtle shortcuts are VideoLLMs taking on spatio-temporal questions?
And how can we instead curate shortcut-robust examples at a large scale?
We release: MVPBench
Details 👇
13.06.2025 14:47
Exciting work on hallucinations from @ziling-cheng.bsky.social
06.06.2025 18:15
Incredibly proud of my students @adadtur.bsky.social and Gaurav Kamath for winning a SAC award at #NAACL2025 for their work on assessing how LLMs model constituent shifts.
01.05.2025 15:11
Congratulations to Mila members @adadtur.bsky.social, Gaurav Kamath and @sivareddyg.bsky.social for their SAC award at NAACL! Check out Ada's talk in Session I: Oral/Poster 6. Paper: arxiv.org/abs/2502.05670
01.05.2025 14:30
Presenting ✨ CHASE: Generating Challenging Synthetic Data for Evaluation ✨
Work w/ fantastic advisors Dima Bahdanau and @sivareddyg.bsky.social
Thread 🧵:
21.02.2025 16:28
Overview figure for the paper, showing the creation of constituent movement data, along with the three-step experimental setup: "Model Shifting Preference", "Motivating Factors of Model Preference", "Human-Model Preference Correlation"
Super excited to finally announce our NAACL 2025 main conference paper "Language Models Largely Exhibit Human-like Constituent Ordering Preferences"!
We examine constituent ordering preferences between humans and LLMs; we present two main findings… 🧵
19.02.2025 19:31
At McGill we have an NLP lab that works on a lot of things, from human-AI collaboration to evaluation to low-resource NLP (me).
@emnlpmeeting.bsky.social just happened in Miami, and my colleagues presented six papers there:
24.11.2024 16:31
Thank you for trying again! I don't have a solution to the search issue yet and might contact support soon. Will let you know once we're indexed!
24.11.2024 13:12
It turns out we had even more papers at EMNLP!
Let's complete the list with three more 🧵
24.11.2024 02:17
Our lab members recently presented 3 papers at @emnlpmeeting.bsky.social in Miami ☀️
From interpretability to bias/fairness and cultural understanding -> 🧵
23.11.2024 20:35
Hello 👋 could you add us? Great initiative!
22.11.2024 19:55
MSc student @mila-quebec.bsky.social @mcgill-nlp.bsky.social
Research Fellow @ RBC Borealis
Model analysis, interpretability, reasoning and hallucination
Studying model behaviours to make them better :))
Looking for Fall '26 PhD
Curious about human ways of thinking.
Researcher and teacher at Charles University, Prague.
https://kasnerz.github.io
Research Scientist at Ai2, PhD in NLP from UofA.
Ex: GoogleDeepMind, MSFTResearch, MilaQuebec
https://nouhadziri.github.io/
Research Scientist at Google DeepMind
https://e-bug.github.io
PhD student at University of Montreal // Mila ··· mechanistic understanding of LLMs + Human-AI collaboration for science ··· http://mirandrom.github.io
Assistant Professor @BrockU CS Department. Alum @Mila/McGill
NLP, AI for Social Good, Interpretability, AI & Society
aemami.ca
Assistant professor in Natural Language Processing at the University of Edinburgh and visiting professor at NVIDIA | A Kleene star shines on the hour of our meeting.
Working on RL training of LLMs @Mila_Quebec.
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
PhD student at Johns Hopkins University
Alumni from McGill University & MILA
Working on NLP Evaluation, Responsible AI, Human-AI interaction
she/her 🇨🇦
Interp & analysis in NLP
Mostly 🇦🇷, slightly 🇨🇱
PhD Student at Mila and McGill University. I work on fairness and privacy in large language models and responsible AI more broadly!
MIT media lab // researching fairness, equity, & pluralistic alignment in LLMs
previously @ mila / mcgill
i like language and dogs and plants and ultimate frisbee and baking and sunsets
https://elinorp-d.github.io
PhD fellow in XAI, IR & NLP
Mila - Quebec AI Institute | University of Copenhagen
#NLProc #ML #XAI
Recreational sufferer
Hello! I'm Cesare (pronounced Chez-array). I'm a PhD student at McGill/Mila working in NLP/computational pragmatics.
@mcgill-nlp.bsky.social
@mila-quebec.bsky.social
https://cesare-spinoso.github.io/
Computer Science PhD Student at McGill University in Montreal
Currently researching task-oriented multi-agent cooperation. Interested in the human consequences of modern recommendation and information seeking systems.
Indigenous language technology. PhD candidate at McGill University in Montreal. Ngāpuhi Nui Tonu.
PhD student at Mila under Chris Pal. NLP researcher working on real-world applications of LLMs.