
Sanjeev Arora

@profsanjeevarora.bsky.social

Director, Princeton Language and Intelligence. Professor of CS.

1,204 Followers  |  29 Following  |  8 Posts  |  Joined: 13.11.2024

Latest posts by profsanjeevarora.bsky.social on Bluesky


Check out our new blogpost and policy brief on our recently updated lab website!

❓Are we actually capturing the bubble of risk for cybersecurity evals? Not really! Adversaries can modify agents by a small amount and get massive gains.

14.07.2025 22:22 — 👍 0    🔁 1    💬 1    📌 0

Would it make sense to also track how often this happened in pre-2023 cases? Humans "hallucinate" by making cut-and-paste mistakes, or other types of errors.

26.05.2025 02:32 — 👍 0    🔁 0    💬 1    📌 1

The paper seems to reflect a fundamental misunderstanding about how LLMs work. One cannot (currently) tell an LLM to "ignore pretraining data from year X onwards". The LLM doesn't have data stored neatly inside it in sortable format. It is not like a hard drive.

22.04.2025 02:21 — 👍 3    🔁 0    💬 0    📌 0

Great comment by my colleague @randomwalker.bsky.social

16.03.2025 19:06 — 👍 2    🔁 0    💬 0    📌 0

Understanding and extrapolating benchmark results will become essential for effective policymaking and informing users. New work identifies indicators that have high predictive power in modeling LLM performance. Excited for it to be out!

11.03.2025 20:07 — 👍 11    🔁 3    💬 1    📌 0

What are 3 concrete steps that can improve AI safety in 2025? 🤖⚠️

Our new paper, “In House Evaluation is Not Enough,” has 3 calls to action to empower evaluators:

1️⃣ Standardized AI flaw reports
2️⃣ AI flaw disclosure programs + safe harbors
3️⃣ A coordination center for transferable AI flaws

1/🧡

13.03.2025 15:59 — 👍 11    🔁 8    💬 1    📌 1

Congratulations! Great result.

27.02.2025 17:25 — 👍 4    🔁 0    💬 1    📌 0

A new path forward for open AI (note the space between the two words). Looking forward to seeing how it enables great research in the open.

29.01.2025 23:19 — 👍 4    🔁 0    💬 1    📌 0

x.com/parksimon080...

Can VLMs do difficult reasoning tasks? Using a new dataset for evaluating Simple-to-Hard generalization (a form of OOD generalization), we study how to mitigate the dreaded "modality gap" between a VLM and its base LLM.
(Note: the poster, Simon Park, applied to PhD programs this spring.)

08.01.2025 14:43 — 👍 0    🔁 0    💬 0    📌 0

SimPO: a new method from Princeton PLI for improving chat models via preference data. Simpler than DPO and widely adopted within weeks by top models in the chatbot arena. Excellent and elementary account by author
@xiamengzhou.bsky.social (she's also on the job market!). tinyurl.com/pepcynaxFully
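For readers wanting the gist of "simpler than DPO": a sketch of the two objectives as given in the SimPO paper (notation assumed from that paper: $y_w, y_l$ are the preferred and dispreferred responses, $\pi_{\mathrm{ref}}$ the reference model, $\beta$ a scaling constant, $\gamma$ a target reward margin). SimPO drops the reference model and length-normalizes the policy log-likelihood:

```latex
% DPO: reference-anchored preference loss
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\!\Big(
  \beta \log \tfrac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
  - \beta \log \tfrac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \Big)

% SimPO: no reference model; length-normalized reward plus a target margin
\mathcal{L}_{\mathrm{SimPO}} = -\log \sigma\!\Big(
  \tfrac{\beta}{|y_w|} \log \pi_\theta(y_w \mid x)
  - \tfrac{\beta}{|y_l|} \log \pi_\theta(y_l \mid x) - \gamma \Big)
```

Removing $\pi_{\mathrm{ref}}$ is what makes SimPO cheaper to train, since no second model needs to be kept in memory during optimization.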

03.12.2024 14:55 — 👍 11    🔁 1    💬 3    📌 0
Join Paper Club with Princeton University on Model Alignment Challenges in Preference Learning [AI Tinkerers - Paper Club] Join Our Paper Club Event Series! Meet with Sadhika Malladi, AI Researcher at Princeton University and discuss the challenges of aligning language models with human preferences. Don't miss this unique...

I'll be giving a talk on my two recent preference learning works (led by Angelica Chen and @noamrazin.bsky.social) in the AI Tinkerers Paper Club today (11/26) at noon ET. Excited to share this talk with a broader audience! paperclub.aitinkerers.org/p/join-paper...

26.11.2024 12:55 — 👍 5    🔁 1    💬 0    📌 1

Interesting thread from Geoffrey Irving about the fragility of interpreting LLMs' latent reasoning (whether self-reported, or recovered by some mechanistic interpretability idea). I have been pessimistic about trusting latent reasoning.

25.11.2024 14:50 — 👍 2    🔁 0    💬 0    📌 0
