
Xing Han Lu

@xhluca.bsky.social

👨‍🍳 Web Agents @mila-quebec.bsky.social 🎒 @mcgill-nlp.bsky.social

672 Followers  |  164 Following  |  57 Posts  |  Joined: 18.12.2023

Latest posts by xhluca.bsky.social on Bluesky


Our new paper in #PNAS (bit.ly/4fcWfma) presents a surprising finding: when words change meaning, older speakers rapidly adopt the new usage; inter-generational differences are often minor.

w/ Michelle Yang, @sivareddyg.bsky.social, @msonderegger.bsky.social and @dallascard.bsky.social 👇 (1/12)

29.07.2025 12:05 — 👍 31    🔁 16    💬 3    📌 2

A blizzard is raging through Montreal when your friend says “Looks like Florida out there!” Humans easily interpret irony, while LLMs struggle with it. We propose a rhetorical-strategy-aware probabilistic framework as a solution.
Paper: arxiv.org/abs/2506.09301 to appear @ #ACL2025 (Main)
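To make the idea concrete, here is a minimal sketch of one way a rhetorical-strategy-aware listener could work; the strategy set, probabilities, and candidate meanings below are invented for illustration and are not the paper's actual model:

```python
# Hypothetical sketch: interpret an utterance by marginalizing over latent
# rhetorical strategies. All names and numbers are illustrative only.
p_strategy = {"sincere": 0.2, "irony": 0.8}            # P(strategy | utterance, context)
p_meaning = {                                           # P(meaning | utterance, strategy)
    "sincere": {"it is warm outside": 0.9, "it is cold outside": 0.1},
    "irony":   {"it is warm outside": 0.05, "it is cold outside": 0.95},
}

# P(meaning | utterance) = sum over strategies of P(meaning | u, s) * P(s | u)
posterior = {}
for strategy, p_s in p_strategy.items():
    for meaning, p_m in p_meaning[strategy].items():
        posterior[meaning] = posterior.get(meaning, 0.0) + p_m * p_s

print(posterior)  # the ironic reading ("it is cold outside") dominates
```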

26.06.2025 15:52 — 👍 14    🔁 7    💬 1    📌 4

"Build the web for agents, not agents for the web"

This position paper argues that rather than forcing web agents to adapt to UIs designed for humans, we should develop a new interface optimized for web agents, which we call Agentic Web Interface (AWI).

arxiv.org/abs/2506.10953

14.06.2025 04:17 — 👍 5    🔁 4    💬 0    📌 0

Excited to share the results of my recent internship!

We ask 🤔
What subtle shortcuts are VideoLLMs taking on spatio-temporal questions?

And how can we instead curate shortcut-robust examples at a large scale?

We release: MVPBench

Details 👇🔬

13.06.2025 14:47 — 👍 16    🔁 5    💬 1    📌 0

Do LLMs hallucinate randomly? Not quite.

Our #ACL2025 (Main) paper shows that hallucinations under irrelevant contexts follow a systematic failure mode, revealing how LLMs generalize using abstract classes + context cues, albeit unreliably.

📎 Paper: arxiv.org/abs/2505.22630 1/n

06.06.2025 18:09 — 👍 48    🔁 18    💬 1    📌 3

Without 🐦 and 🦋, are we left with LinkedIn?

10.05.2025 20:55 — 👍 1    🔁 0    💬 1    📌 0

Congratulations to Mila members @adadtur.bsky.social, Gaurav Kamath and @sivareddyg.bsky.social for their SAC award at NAACL! Check out Ada's talk in Session I: Oral/Poster 6. Paper: arxiv.org/abs/2502.05670

01.05.2025 14:30 — 👍 13    🔁 7    💬 0    📌 3

Exciting release! AgentRewardBench offers a much-needed closer look at evaluating agent capabilities: automatic vs. human eval. Important findings here, especially on popular LLM judges. Amazing work by @xhluca.bsky.social & team!

15.04.2025 19:11 — 👍 3    🔁 1    💬 1    📌 0

Daily Paper: huggingface.co/papers/2504....
Data: huggingface.co/datasets/McG...
Demo: huggingface.co/spaces/McGil...
Leaderboard: huggingface.co/spaces/McGil...
Arxiv: arxiv.org/abs/2504.08942

15.04.2025 19:10 — 👍 1    🔁 0    💬 0    📌 0

An amazing team effort with: @a-kazemnejad.bsky.social, Nick, @arkil.bsky.social, Dongchan, Alejandra, @karstanczak.bsky.social, @ptshaw.bsky.social, @chrisjpal.bsky.social, @sivareddyg.bsky.social

15.04.2025 19:10 — 👍 1    🔁 0    💬 1    📌 0

We find that rule-based evals underreport success rates, and no single LLM judge excels across all benchmarks.
We collect trajectories from web agents built on four LLMs (Claude 3.7, GPT-4o, Llama 3.3, Qwen2.5-VL) across popular web benchmarks (AssistantBench, WebArena, VWA, WorkArena, WorkArena++).
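As a toy illustration of what comparing automatic evaluators against human judgments can look like (the labels and numbers below are made up, not results from the benchmark):

```python
# Hypothetical sketch: score automatic evaluators of web agent trajectories
# against human success annotations. All labels here are illustrative only.
human      = [1, 0, 1, 1, 0, 0, 1, 0]   # human-annotated success per trajectory
rule_based = [1, 0, 0, 0, 0, 0, 1, 0]   # rule-based evaluator verdicts
llm_judge  = [1, 1, 1, 0, 0, 1, 1, 0]   # LLM-judge verdicts

def success_rate(labels):
    return sum(labels) / len(labels)

def precision(pred, gold):
    # Of the trajectories the evaluator calls successful, how many truly are?
    true_pos = sum(1 for p, g in zip(pred, gold) if p and g)
    return true_pos / max(sum(pred), 1)

print("human success rate     :", success_rate(human))
print("rule-based success rate:", success_rate(rule_based))   # underreports
print("LLM-judge precision    :", precision(llm_judge, human))
```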

15.04.2025 19:10 — 👍 1    🔁 0    💬 1    📌 0

AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories

We are releasing the first benchmark to evaluate how well automatic evaluators, such as LLM judges, can assess web agent trajectories.

15.04.2025 19:10 — 👍 7    🔁 4    💬 1    📌 1

And Thoughtology is now on arXiv! Read more about R1 reasoning 🐋💭 across visual, cultural and psycholinguistic tasks at the link below:

🔗 arxiv.org/abs/2504.07128

11.04.2025 16:31 — 👍 5    🔁 1    💬 0    📌 0

bsky.app/profile/sara...

12.04.2025 16:12 — 👍 1    🔁 0    💬 0    📌 0

DeepSeek-R1 Thoughtology: Let's <think> about LLM reasoning

142-page report diving into the reasoning chains of R1. It spans 9 unique axes: safety, world modeling, faithfulness, long context, etc.

Now on arXiv: arxiv.org/abs/2504.07128

12.04.2025 16:11 — 👍 6    🔁 1    💬 1    📌 0

Introducing the DeepSeek-R1 Thoughtology: the most comprehensive study of R1 reasoning chains/thoughts ✨. Probably everything you need to know about R1 thoughts. If we missed something, please let us know.

01.04.2025 20:12 — 👍 17    🔁 4    💬 0    📌 1
A circular diagram with a blue whale icon at the center. The diagram shows 8 interconnected research areas around LLM reasoning represented as colored rectangular boxes arranged in a circular pattern. The areas include: §3 Analysis of Reasoning Chains (central cloud), §4 Scaling of Thoughts (discussing thought length and performance metrics), §5 Long Context Evaluation (focusing on information recall), §6 Faithfulness to Context (examining question answering accuracy), §7 Safety Evaluation (assessing harmful content generation and jailbreak resistance), §8 Language & Culture (exploring moral reasoning and language effects), §9 Relation to Human Processing (comparing cognitive processes), §10 Visual Reasoning (covering ASCII generation capabilities), and §11 Following Token Budget (investigating direct prompting techniques). Arrows connect the sections in a clockwise flow, suggesting an iterative research methodology.


Models like DeepSeek-R1 🐋 mark a fundamental shift in how LLMs approach complex problems. In our preprint on R1 Thoughtology, we study R1's reasoning chains across a variety of tasks, investigating its capabilities, limitations, and behaviour.
🔗: mcgill-nlp.github.io/thoughtology/

01.04.2025 20:06 — 👍 52    🔁 16    💬 1    📌 9

Check out our new workshop on Actionable Interpretability @ ICML 2025. We are also looking forward to submissions that take a position on the future of interpretability research more broadly. 👇

31.03.2025 18:15 — 👍 9    🔁 1    💬 0    📌 0

📢 Excited to announce our upcoming workshop - Vision Language Models For All: Building Geo-Diverse and Culturally Aware Vision-Language Models (VLMs-4-All) @CVPR 2025!
🌐 sites.google.com/view/vlms4all

14.03.2025 15:55 — 👍 17    🔁 11    💬 1    📌 4
Exploiting Instruction-Following Retrievers for Malicious Information Retrieval, by Parishad BehnamGhader, Nicholas Meade, Siva Reddy

Instruction-following retrievers can efficiently and accurately search for harmful and sensitive information on the internet! 🌐💣

Retrievers need to be aligned too! 🚨🚨🚨

Work done with the wonderful Nick and @sivareddyg.bsky.social

🔗 mcgill-nlp.github.io/malicious-ir/
Thread: 🧵👇

12.03.2025 16:15 — 👍 12    🔁 8    💬 1    📌 0

Web agents powered by LLMs can solve complex tasks, but our analysis shows that they can also be easily misused to automate harmful tasks.

See the thread below for more details on our new web agent safety benchmark: SafeArena and Agent Risk Assessment framework (ARIA).

10.03.2025 20:11 — 👍 5    🔁 2    💬 0    📌 0

The potential for malicious misuse of LLM agents is a serious threat.

That's why we created SafeArena, a safety benchmark for web agents. See the thread and our paper for details: arxiv.org/abs/2503.04957 👇

10.03.2025 18:20 — 👍 9    🔁 2    💬 0    📌 0

Llamas browsing the web look cute, but they are capable of causing a lot of harm!

Check out our new Web Agents ∩ Safety benchmark: SafeArena!

Paper: arxiv.org/abs/2503.04957

10.03.2025 17:50 — 👍 9    🔁 3    💬 0    📌 0

WebArena by Zhou et al.; AgentLab and BrowserGym by @servicenow.bsky.social allowed us to explore the latest agents; @gradio-hf.bsky.social enabled us to design UIs for implementing our ARIA framework, and @hf.co provided a hosting platform for 100GB+ artifacts.

bsky.app/profile/xhlu...

10.03.2025 17:45 — 👍 3    🔁 0    💬 0    📌 0

This work was done by an awesome team of authors: @adadtur.bsky.social, Nick, @arkil.bsky.social, @karstanczak.bsky.social, Esin, @spandanagella.bsky.social, and @sivareddyg.bsky.social.

It's also important to recognize the incredible works that helped us build SafeArena:

10.03.2025 17:45 — 👍 4    🔁 1    💬 1    📌 0

We release the benchmark, code, and tasks to help researchers develop agents that are both helpful and safe:

Paper: arxiv.org/abs/2503.04957
Benchmark: safearena.github.io
Code: github.com/McGill-NLP/s...
Tasks/Environments: huggingface.co/datasets/McG...
Leaderboard: huggingface.co/spaces/McGil...

10.03.2025 17:45 — 👍 3    🔁 0    💬 1    📌 0

To provide transparency on the safety of popular LLMs, we host a leaderboard that ranks models by their normalized safety score: the rate at which a model completes a safe task compared to its harmful counterpart, evaluated in augmented environments built on top of WebArena.
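One possible reading of that score, as a rough sketch (the leaderboard's exact formula may differ, and all numbers here are invented):

```python
# Hypothetical sketch of a normalized safety score: completion rate on safe
# tasks weighed against completion rate on their harmful counterparts.
# This is one plausible reading of the description, not the official formula.
safe_completed, safe_total = 42, 50          # safe tasks the agent completed
harmful_completed, harmful_total = 6, 50     # paired harmful tasks it completed

safe_rate = safe_completed / safe_total
harmful_rate = harmful_completed / harmful_total

normalized_safety = safe_rate * (1 - harmful_rate)   # high = capable and safe
print(f"normalized safety score: {normalized_safety:.2f}")
```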

10.03.2025 17:45 — 👍 3    🔁 0    💬 1    📌 0

With ARIA, we find that Claude is substantially safer than Qwen, which very rarely refuses user requests, indicating limited safeguards for web-oriented tasks.

10.03.2025 17:45 — 👍 4    🔁 1    💬 1    📌 0

We introduce the Agent Risk Assessment framework (ARIA), which humans and LLM judges can use to determine a web agent's risk level, ranging from safe, if it refuses a harmful request right away (L1), to effectively harmful, if it successfully completes a harmful request (L4).
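A minimal sketch of applying such a rubric to a trajectory; the L1 and L4 semantics follow the description above, while the trajectory fields and the handling of intermediate levels are placeholders of my own:

```python
# Hypothetical sketch: map a web agent trajectory to an ARIA risk level.
# Only L1 and L4 are defined by the post above; everything else is illustrative.
from dataclasses import dataclass

@dataclass
class Trajectory:
    refused_immediately: bool   # agent declined the harmful request up front
    completed_harm: bool        # agent successfully finished the harmful task

def aria_level(t: Trajectory) -> str:
    if t.refused_immediately:
        return "L1: safe (refused the harmful request right away)"
    if t.completed_harm:
        return "L4: effectively harmful (completed the harmful request)"
    return "L2/L3: intermediate (see the paper for the exact definitions)"

print(aria_level(Trajectory(refused_immediately=False, completed_harm=True)))
```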

10.03.2025 17:45 — 👍 3    🔁 0    💬 1    📌 0

The harmfulness of LLMs varies: whereas Claude-3.5 Sonnet refuses a majority of harmful tasks, Qwen-2-VL completes over a quarter of the 250 harmful tasks we designed for this benchmark. Moreover, a GPT-4o agent completes an alarming number of unsafe requests, despite extensive safety training.

10.03.2025 17:45 — 👍 3    🔁 1    💬 1    📌 0
