
Tim G. J. Rudner

@timrudner.bsky.social

Assistant Professor & Faculty Fellow, NYU. AI Fellow, Georgetown University. Probabilistic methods for robust and transparent ML & AI Governance. Prev: Oxford, Yale, UC Berkeley. https://timrudner.com

5,484 Followers  |  540 Following  |  82 Posts  |  Joined: 03.01.2024

Latest posts by timrudner.bsky.social on Bluesky


CDS Faculty Fellow @timrudner.bsky.social served as general chair for the 7th Symposium on Advances in Approximate Bayesian Inference, held in April alongside ICLR 2025.

The symposium explored connections between probabilistic machine learning and AI safety, NLP, RL, and AI for science.

17.07.2025 19:10 | 👍 3    🔁 1    💬 0    📌 0

Congratulations again!

14.07.2025 17:13 | 👍 1    🔁 0    💬 0    📌 0

Congratulations Umang!

15.05.2025 16:26 | 👍 1    🔁 0    💬 0    📌 0
Preview: AI in Military Decision Support: Balancing Capabilities with Risk
CDS Faculty Fellow Tim G. J. Rudner and colleagues at CSET outline responsible practices for deploying AI in military decision-making.

CDS Faculty Fellow Tim G. J. Rudner (@timrudner.bsky.social) and CSET colleagues @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell examine responsible AI deployment in military decision-making.

Read our post on their policy brief: nyudatascience.medium.com/ai-in-milita...

14.05.2025 19:23 | 👍 2    🔁 1    💬 0    📌 0

The result in this paper I'm most excited about:

We showed that planning in world model latent space allows successful zero-shot generalization to *new* tasks!

Project website: latent-planning.github.io

Paper: arxiv.org/abs/2502.14819
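
For context, the idea at a high level: a world model encodes observations into a latent state and learns dynamics in that space, so a planner can search over action sequences entirely in latent space. Below is a minimal random-shooting sketch of this idea, assuming a trained `encoder` and latent `dynamics` model; the function names and the goal-distance cost are illustrative placeholders, not the exact method from the paper.

```python
import numpy as np

def plan_in_latent_space(encoder, dynamics, obs, goal_obs,
                         horizon=10, n_candidates=256, action_dim=4):
    """Random-shooting MPC in a world model's latent space (illustrative)."""
    z = encoder(obs)            # encode current observation to a latent state
    z_goal = encoder(goal_obs)  # a *new* task only needs a new goal encoding

    # Sample candidate action sequences uniformly at random.
    candidates = np.random.uniform(
        -1.0, 1.0, size=(n_candidates, horizon, action_dim))

    best_cost, best_plan = np.inf, None
    for actions in candidates:
        z_t, cost = z, 0.0
        for a in actions:
            z_t = dynamics(z_t, a)                # predicted next latent state
            cost += np.linalg.norm(z_t - z_goal)  # distance to goal in latent space
        if cost < best_cost:
            best_cost, best_plan = cost, actions

    # Execute only the first action, then replan at the next step.
    return best_plan[0]
```

Because tasks enter only through the encoded goal, a planner of this form can in principle be pointed at tasks never seen during training, which is the zero-shot flavor the post describes.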

07.05.2025 21:26 | 👍 7    🔁 0    💬 0    📌 0

#1: Can Transformers Learn Full Bayesian Inference In Context? with @arikreuter.bsky.social @timrudner.bsky.social @vincefort.bsky.social

01.05.2025 12:36 | 👍 6    🔁 1    💬 0    📌 0

Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!

#ML #SDE #Diffusion #GenAI 🤖🧠

30.04.2025 00:02 | 👍 19    🔁 2    💬 1    📌 0

Congratulations to the #AABI2025 Proceedings Track Best Paper Award recipients!

29.04.2025 20:55 | 👍 10    🔁 1    💬 0    📌 0

Congratulations to the #AABI2025 Workshop Track Outstanding Paper Award recipients!

29.04.2025 20:54 | 👍 21    🔁 8    💬 0    📌 1

We concluded #AABI2025 with a panel discussion on

**The Role of Probabilistic Machine Learning in the Age of Foundation Models and Agentic AI**

Thanks to Emtiyaz Khan, Luhuan Wu, and @jamesrequeima.bsky.social for participating!

29.04.2025 20:49 | 👍 10    🔁 3    💬 1    📌 0

.@jamesrequeima.bsky.social gave the third invited talk of the day at #AABI2025!

**LLM Processes**

29.04.2025 20:41 | 👍 5    🔁 2    💬 0    📌 0

Luhuan Wu is giving the second invited talk of the day at #AABI2025!

**Bayesian Inference for Invariant Feature Discovery from Multi-Environment Data**

Watch it on our livestream: timrudner.com/aabi2025!

29.04.2025 04:02 | 👍 3    🔁 2    💬 0    📌 0

Emtiyaz Khan is giving the first invited talk of the day at #AABI2025!

29.04.2025 01:55 | 👍 7    🔁 2    💬 0    📌 0

We just kicked off #AABI2025 at NTU in Singapore!

We're livestreaming the talks here: timrudner.com/aabi2025!

Schedule: approximateinference.org/schedule/

#ICLR2025 #ProbabilisticML

29.04.2025 01:47 | 👍 10    🔁 4    💬 0    📌 1
Preview: AABI 2025 · Luma
7th Symposium on Advances in Approximate Bayesian Inference (AABI): https://approximateinference.org/schedule

Make sure to get your tickets to #AABI2025 if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic ML, inference, and decision-making!

Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org

#ProbabilisticML #Bayes #UQ #ICLR2025 #AABI2025

18.04.2025 03:42 | 👍 6    🔁 2    💬 0    📌 0

Make sure to get your tickets to AABI if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic modeling, inference, and decision-making!

Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org

#Bayes #MachineLearning #ICLR2025 #AABI2025

13.04.2025 07:43 | 👍 17    🔁 8    💬 0    📌 1
Preview: Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology
Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...

CDS Faculty Fellow @timrudner.bsky.social, with @minanrn.bsky.social & Christian Schoeberl, analyzed AI explainability evals, finding a focus on system correctness over real-world effectiveness. They call for the creation of standards for AI safety evaluations.

cset.georgetown.edu/publication/...

17.04.2025 16:05 | 👍 4    🔁 2    💬 0    📌 0
Preview: How the U.S. Public and AI Experts View Artificial Intelligence
These groups are far apart in their enthusiasm and predictions for AI, but both want more personal control and worry about too little regulation.

A great Pew Research survey:

"How the U.S. Public and AI Experts View Artificial Intelligence"

Everyone working in ML should read this and ask themselves why experts and non-experts have such divergent views about the potential of AI to have a positive impact.

www.pewresearch.org/internet/202...

05.04.2025 21:28 | 👍 10    🔁 3    💬 2    📌 1

This is an excellent article!

Steering foundation models towards trustworthy behaviors is one of the most important research directions today.

Helen is a deep and rigorous thinker, and you should definitely subscribe to her Substack!

04.04.2025 01:42 | 👍 4    🔁 1    💬 0    📌 0

I'm super excited to see our #CSET report on **AI-enabled military decision support systems** being released today!

Great work by @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell!

01.04.2025 15:32 | 👍 0    🔁 1    💬 0    📌 0
Preview: Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology
Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...

Evaluations of AI explainability claims are not clear-cut.

@minanrn.bsky.social, @timrudner.bsky.social & Christian Schoeberl show that in the domain of recommender systems – where explanations are key – there are different notions of what explainability means: cset.georgetown.edu/publication/...

27.02.2025 15:14 | 👍 3    🔁 1    💬 0    📌 0
Preview: Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology
Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...

[1/6] Discourse around AI evaluations has focused a lot on testing LLMs for catastrophic risks. In a new @csetgeorgetown.bsky.social report, Christian Schoeberl, @timrudner.bsky.social, and I explore another side of AI evals: evals of claims about the trustworthiness of AI systems.

20.02.2025 19:52 | 👍 8    🔁 2    💬 1    📌 0

Check out our paper on the quality of interpretability evaluations of recommender systems:

cset.georgetown.edu/publication/...

Led by @minanrn.bsky.social and Christian Schoeberl!

@csetgeorgetown.bsky.social

19.02.2025 20:45 | 👍 11    🔁 5    💬 0    📌 0

📣 Jobs alert

We’re hiring a postdoc and a research engineer to work on UQ for LLMs! Details ⬇️

#ai #llm #uq

12.02.2025 16:26 | 👍 13    🔁 11    💬 0    📌 0

You still have a chance to submit your work to all AABI tracks. The new deadline is February 14 for both the Workshop and Proceedings tracks!

#AABI2025 #ML #ICLR2025 #Stats #Bayes

12.02.2025 05:39 | 👍 5    🔁 3    💬 0    📌 0

Great news for everyone struggling with concurrent AABI and UAI submissions: We've extended the AABI deadline by one week, so there's no excuse not to submit anymore 😉

06.02.2025 15:41 | 👍 12    🔁 1    💬 0    📌 0
Preview: Call for Papers

We have extended the #AABI workshop and proceedings deadlines!

*New deadlines:*

Workshop Track: February 14, AoE
Proceedings Track: February 14, AoE
approximateinference.org/call/

#ProbML #AABI #ICLR #Bayes

06.02.2025 17:33 | 👍 8    🔁 3    💬 0    📌 0

We have extended the workshop and proceedings deadlines:

*New deadlines:*

Workshop Track: February 14, AoE
Proceedings Track: February 14, AoE

approximateinference.org/call/

#ProbML #AABI #ICLR

06.02.2025 03:14 | 👍 4    🔁 1    💬 0    📌 1
Preview: Home

We particularly welcome submissions that explore connections between probabilistic machine learning and other fields such as
- deep learning
- NLP
- active learning
- RL
- compression
- AI safety
- AI for scientific discovery
- causal inference

approximateinference.org

05.02.2025 04:31 | 👍 1    🔁 0    💬 0    📌 0

This year’s symposium will have an *expanded scope* and will be focused on the development, analysis, and application of *probabilistic machine learning methods* broadly construed.

#AABI #ProbML #Bayes

05.02.2025 04:31 | 👍 0    🔁 0    💬 1    📌 0
