
Daniel Brown

@daniel-brown.bsky.social

CS assistant prof @Utah. Researches human-robot interaction, human-in-the-loop ML, AI safety and alignment. https://users.cs.utah.edu/~dsbrown/

533 Followers  |  13 Following  |  18 Posts  |  Joined: 24.11.2024

Latest posts by daniel-brown.bsky.social on Bluesky

We hope this work can help inspire the development of better AI alignment tests and evaluations for LLM reward models.

Check out the workshop paper here: anamarasovic.com/publications...

8/8

10.10.2025 16:03 — 👍 3    🔁 0    💬 0    📌 0

We applied this approach to RewardBench and found evidence that much of the data in its safety and reasoning subsets may be redundant (44% for safety and 24% for reasoning), and that this redundancy can inflate alignment scores.

7/8

10.10.2025 16:03 — 👍 0    🔁 0    💬 1    📌 0
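To illustrate why redundancy can inflate a benchmark score, here is a purely hypothetical back-of-the-envelope sketch. Only the 44% redundancy figure comes from the post above; the per-subset accuracies are made up for illustration.

```python
# Purely hypothetical numbers to illustrate the inflation mechanism:
# suppose 44% of a safety test set is redundant (near-duplicates the
# reward model reliably gets right) and it is correct on 70% of the rest.
redundant_frac, acc_redundant, acc_unique = 0.44, 1.00, 0.70

reported = redundant_frac * acc_redundant + (1 - redundant_frac) * acc_unique
deduplicated = acc_unique

print(f"reported alignment score: {reported:.2f}")      # 0.83
print(f"after deduplication:      {deduplicated:.2f}")   # 0.70
```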

By scaling these ideas up to LLMs, we can now estimate the set of reward model weights (the weights that map the last decoder hidden state to a scalar output) that are consistent with a preference alignment dataset, and we can identify which examples in the preference dataset are redundant and which are not.

6/8

10.10.2025 16:03 — 👍 0    🔁 0    💬 1    📌 0
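To make this concrete, here is a minimal sketch (my own illustration, not the paper's code) of the kind of redundancy check this enables when the reward is linear in the final hidden state: each preference pair contributes a half-space constraint on the reward weights, and a pair is redundant if a small linear program shows its constraint is already implied by the others. The helper name and the use of scipy's LP solver are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def redundant_preferences(diffs, tol=1e-8):
    """Flag preference constraints implied by the remaining ones.

    diffs: (n, d) array whose row i is h(chosen_i) - h(rejected_i),
           the difference of final hidden-state features, so each
           preference induces the half-space  w @ diffs[i] >= 0.
    Returns a boolean array that is True where a constraint is redundant.
    """
    n, d = diffs.shape
    redundant = np.zeros(n, dtype=bool)
    if n < 2:
        return redundant  # a single constraint cannot be redundant
    for k in range(n):
        others = np.delete(diffs, k, axis=0)
        # Minimize diffs[k] @ w subject to the other half-spaces
        # (others @ w >= 0) and a box bound to keep the LP bounded.
        res = linprog(
            c=diffs[k],
            A_ub=-others,             # -others @ w <= 0  <=>  others @ w >= 0
            b_ub=np.zeros(n - 1),
            bounds=[(-1.0, 1.0)] * d,
            method="highs",
        )
        # If the minimum is still (numerically) non-negative, no weight
        # vector satisfying the other constraints can violate this one,
        # so the preference adds no new information.
        if res.success and res.fun >= -tol:
            redundant[k] = True
    return redundant
```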

Once you find these core demonstrations or comparisons, you can use them to craft efficient alignment tests. But until recently, we were only able to test these ideas empirically in simple toy domains.

5/8

10.10.2025 16:03 — 👍 0    🔁 0    💬 1    📌 0
Post image

The main idea was that, for linear rewards, the set of reward functions that make a policy optimal can be characterized as an intersection of half-spaces, and that this set is defined by a small number of "non-redundant" demonstrations or comparisons.

4/8

10.10.2025 16:03 — 👍 0    🔁 0    💬 1    📌 0
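As a rough illustration of that idea (my own sketch, not the paper's code): under a linear reward, a preference for trajectory A over trajectory B induces the half-space constraint w @ (Phi(A) - Phi(B)) >= 0 on the weights w, so testing whether a candidate reward is consistent with all preferences is a membership check in the intersection of those half-spaces.

```python
import numpy as np

def consistent_with_preferences(w, feature_diffs, tol=1e-8):
    """Membership test for the intersection of preference half-spaces.

    w:             (d,) candidate linear reward weights.
    feature_diffs: (n, d) rows of Phi(preferred) - Phi(unpreferred),
                   e.g. differences of discounted feature counts.
    """
    return bool(np.all(feature_diffs @ w >= -tol))

# Toy 2-D example with two comparisons and two candidate rewards.
diffs = np.array([[1.0, 0.0],
                  [1.0, -0.5]])
print(consistent_with_preferences(np.array([0.8, 0.2]), diffs))   # True
print(consistent_with_preferences(np.array([-0.5, 1.0]), diffs))  # False
```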
Post image

It was a fun paper and has some interesting nuggets, like the fact that there exist sufficient conditions under which we can verify exact and approximate AI alignment across an infinite set of deployment environments via a constant-query-complexity test.

3/8

10.10.2025 16:03 — 👍 2    🔁 0    💬 1    📌 0
Preview
Value Alignment Verification: As humans interact with autonomous agents to perform increasingly complicated, potentially risky tasks, it is important to be able to efficiently evaluate an agent's performance and correctness. In th...

As some background, a couple of years ago I worked with Jordan Schneider, @scottniekum.bsky.social, and Anca Dragan on what we called "Value Alignment Verification," with the goal of efficiently testing whether an AI system is aligned with human values.
arxiv.org/abs/2012.01557

2/8

10.10.2025 16:03 — 👍 2    🔁 0    💬 1    📌 0
Post image

Can you trust your reward model alignment scores?
New work, led by Purbid Bambroo in collaboration with @anamarasovic.bsky.social and presented today at the COLM Workshop on Socially Responsible Language Modelling Research, probes LLM preference test sets for redundancy and inflated scores.

1/8

10.10.2025 16:03 — 👍 2    🔁 1    💬 1    📌 0
Agreement Volatility: A Second-Order Metric for Uncertainty... Autonomous surgical robots are a promising solution to the increasing demand for surgery amid a shortage of surgeons. Recent work has proposed learning-based approaches for the autonomous...

This was a really fun collaboration with Jordan Thompson, Britton Jordan, and Alan Kuntz.

Check out our paper here: openreview.net/forum?id=K7K...

5/5

29.09.2025 18:27 — 👍 0    🔁 0    💬 0    📌 0

Our approach also enables uncertainty attribution! We can backpropagate uncertainty estimates into an input point cloud to visualize and interpret the robot's uncertainty.

If you're at #CoRL25, check out Jordan Thompson's talk and poster (Spotlight 6 & Poster 3).

4/5

29.09.2025 18:27 — 👍 0    🔁 0    💬 1    📌 0
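Here is a minimal sketch of what gradient-based uncertainty attribution over a point cloud might look like. This is my own illustration: the ensemble-variance stand-in and the function name are assumptions, not the paper's Agreement Volatility computation.

```python
import torch

def attribute_uncertainty(models, points):
    """Backpropagate a scalar uncertainty estimate to each input point.

    models: list of torch.nn.Module ensemble members, each mapping an
            (N, 3) point cloud to a prediction tensor with a shape shared
            across members.
    points: (N, 3) tensor of point coordinates.
    Returns an (N,) tensor of per-point attribution scores.
    """
    pts = points.clone().requires_grad_(True)
    preds = torch.stack([m(pts) for m in models])  # (M, ...) ensemble outputs
    uncertainty = preds.var(dim=0).mean()          # stand-in scalar uncertainty
    uncertainty.backward()                         # d(uncertainty) / d(points)
    return pts.grad.norm(dim=-1)                   # gradient magnitude per point
```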
Post image

We apply our approach to surgically inspired deformable tissue manipulation and find that it achieves a 10% lower reliance on human interventions than prior work that leverages variance-based uncertainty estimates.

3/5

29.09.2025 18:27 — 👍 0    🔁 0    💬 1    📌 0

Inspired by prior work on active, uncertainty-aware human-robot hand-offs like Ryan Hoque and @ken-goldberg.bsky.social's ThriftyDAgger (arxiv.org/abs/2109.08273), we show that agreement volatility enables robots to know when they need help so they can request appropriate human interventions.

2/5

29.09.2025 18:27 — 👍 0    🔁 0    💬 1    📌 0
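In the spirit of ThriftyDAgger-style gating, here is a minimal sketch of how a volatility score might be thresholded to decide when to hand control to a human. All of the interfaces here (agreement_volatility, env.request_human_action, the gym-like step) are hypothetical placeholders, not the paper's API.

```python
def rollout_with_interventions(env, policy, agreement_volatility, threshold, horizon=500):
    """Request a human intervention whenever the uncertainty metric spikes."""
    obs = env.reset()
    num_interventions = 0
    for _ in range(horizon):
        if agreement_volatility(obs) > threshold:
            action = env.request_human_action(obs)  # robot asks for help
            num_interventions += 1
        else:
            action = policy(obs)                    # robot acts autonomously
        obs, reward, done, info = env.step(action)
        if done:
            break
    return num_interventions
```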
Post image

Check out our new paper on uncertainty quantification, presented today at #CoRL2025: openreview.net/forum?id=K7K....
We propose "Agreement Volatility," a new second-order uncertainty metric for robot learning.

1/5

29.09.2025 18:27 — 👍 0    🔁 0    💬 1    📌 0
Preview
Toward robust, interactive, and human-aligned AI systems: Ensuring that AI systems do what we, as humans, actually want them to do is one of the biggest open research challenges in AI alignment and safety. My research seeks to directly address this challeng...

Excited to announce that my lab's research was recently highlighted in an AI Magazine article: onlinelibrary.wiley.com/doi/pdf/10.1...

25.09.2025 20:59 — 👍 0    🔁 0    💬 0    📌 0
Post image

If you're in Melbourne, come check out Connor's talk in the Teleoperation and Shared Control session today!

Paper: arxiv.org/abs/2501.08389
Website: sites.google.com/view/zerosho...

This is joint work with two of my other amazing PhD students, Zohre Karimi and Atharv Belsare!

3/3

03.03.2025 20:34 — 👍 0    🔁 0    💬 0    📌 0
Post image

We study how robots can use end-effector vision to estimate human intent zero-shot and, in conjunction with blended control, help humans accomplish manipulation tasks like grocery shelving with unknown and dynamically changing object locations.

2/3

03.03.2025 20:34 — 👍 0    🔁 0    💬 1    📌 0
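For readers unfamiliar with blended control, the standard arbitration linearly mixes the human's command with the robot's command, weighted by how confident the robot is in the inferred goal. Below is a generic sketch of that common scheme; it is my own illustration, not necessarily VOSA's exact arbitration, and all names are placeholders.

```python
import numpy as np

def blended_command(u_human, goal_candidates, confidences, ee_pos, gain=1.0):
    """Linear blending of human and robot commands for shared autonomy.

    u_human:         (d,) teleoperation command from the human.
    goal_candidates: (k, d) candidate goal positions estimated from vision.
    confidences:     (k,) confidence in each candidate goal.
    ee_pos:          (d,) current end-effector position.
    """
    best = int(np.argmax(confidences))
    u_robot = gain * (goal_candidates[best] - ee_pos)  # servo toward the likeliest goal
    alpha = float(confidences[best])                   # trust the robot more when confident
    return (1.0 - alpha) * u_human + alpha * u_robot
```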
Post image

Shared autonomy systems have been around for a long time, but most approaches require a learned or specified set of possible human goals or intents. I'm excited for my student Connor Mattson to present our work at #HRI2025 on a zero-shot, vision-only shared autonomy (VOSA) framework.

1/3

03.03.2025 20:34 — 👍 1    🔁 0    💬 1    📌 0

Excited to announce that I've been invited to give a talk at AAAI-25 on "Leveraging Human Input to Enable Robust, Interactive, and Aligned AI Systems" as part of their New Faculty Highlights program!

16.12.2024 19:16 — 👍 5    🔁 0    💬 1    📌 0
