David Lindner's Avatar

David Lindner

@davidlindner.bsky.social

Making AI safer at Google DeepMind davidlindner.me

802 Followers  |  105 Following  |  31 Posts  |  Joined: 25.11.2024  |  1.7864

Latest posts by davidlindner.bsky.social on Bluesky

Excited to share some technical details about our approach to scheming and deceptive alignment as outlined in Google's Frontier Safety Framework!

(1) current models are not yet capable of realistic scheming
(2) CoT monitoring is a promising mitigation for future scheming

08.07.2025 13:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
MATS Program

View our full paper here: arxiv.org/abs/2507.02737
View steganography transcripts here: steganography-evals-d9297ad0740c.herokuapp.com/
Github: github.com/arturzolkow...

Work done with: Artur Zolkowski, Kei Nishimura-Gasparian, Robert McCarthy and @zimmerrol.bsky.social‬ during matsprogram.org

04.07.2025 15:34 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Overall, current models have some steganographic capabilities but likely aren't strong enough to bypass well-designed monitors. Nevertheless, we believe it’s important to continue measuring these capabilities to determine when additional mitigations are needed.

04.07.2025 15:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

We also create case studies implementing situations where models with strong steganographic capabilities could take unwanted actions and bypass a monitor. Current models rarely succeed in these environments unless they are given strong tips about what to do.

04.07.2025 15:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Models can also reason through simple problems using a number of common encoding schemes. But performance quickly degrades as the problems get more difficult.

04.07.2025 15:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

While models understand they need to conceal their reasoning, they have difficulty actually hiding that reasoning in a plausible way. As an example, we show a model’s reasoning attempt when asked a question on how to install solar panels. [bold text and highlighting added by us]

04.07.2025 15:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

In our encoded reasoning evals, we find that models can sometimes leverage encoded reasoning to perform better than no reasoning baselines on simple reasoning tasks. But encodings that help tend not to be very subtle and are regularly detected by a monitor.

04.07.2025 15:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Models are able to perform significantly better when the sender and receiver are able to coordinate on the steganographic encoding method. But this mainly applies when the model has access to an unmonitored CoT. With no CoT, performance remains extremely poor.

04.07.2025 15:34 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

When they succeed, models sometimes leverage sophisticated strategies to pass messages, such as encoding information in the lengths of the sentences they generate! We found this example difficult to decode ourselves. [formatting and word counts added by us]

04.07.2025 15:34 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

In our message passing evals, we find that models nearly always fail without access to CoT. But models succeed much more often when they are allowed to use a CoT to freely reason ahead of time about how to encode and decode the hidden message.

04.07.2025 15:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Models encoding hidden information might make safety mitigations such as CoT monitoring much more difficult. We evaluate frontier models’ ability to evade monitors by sending encoded messages or performing encoded reasoning about a task.

Read more: arxiv.org/abs/2507.02737

04.07.2025 15:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Can frontier models hide secret information and reasoning in their outputs?

We find early signs of steganographic capabilities in current frontier models, including Claude, GPT, and Gemini. 🧡

04.07.2025 15:34 β€” πŸ‘ 6    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Post image

Super excited this giant paper outlining our technical approach to AGI safety and security is finally out!

No time to read 145 pages? Check out the 10 page extended abstract at the beginning of the paper

04.04.2025 14:27 β€” πŸ‘ 5    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0
Preview
Introducing our short course on AGI safety We are excited to release a short course on AGI safety for students, researchers and professionals interested in this topic. The course…

We are excited to release a short course on AGI safety! The course offers a concise and accessible introduction to AI alignment problems and our technical / governance approaches, consisting of short recorded talks and exercises (75 minutes total). deepmindsafetyresearch.medium.com/1072adb7912c

14.02.2025 16:06 β€” πŸ‘ 18    πŸ” 5    πŸ’¬ 0    πŸ“Œ 0

Check out this great post by Rohin Shah about our team and what we’re looking for in candidates: www.alignmentforum.org/posts/wqz5CR...

In particular: we're looking for strong ML researchers and engineers and you do not need to be an AGI safety expert

18.02.2025 16:43 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Research Engineer, Gemini Safety / AGI Safety & Alignment New York City, New York, US

Research Engineer: boards.greenhouse.io/deepmind/job...

Research Scientist: boards.greenhouse.io/deepmind/job...

Happy to answer questions via DM!

10.02.2025 18:55 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Want to join one of the best AI safety teams in the world?

We're hiring at Google DeepMind! We have open positions for research engineers and research scientists in the AGI Safety & Alignment and Gemini Safety teams.

Locations: London, Zurich, New York, Mountain View and SF

10.02.2025 18:55 β€” πŸ‘ 13    πŸ” 2    πŸ’¬ 1    πŸ“Œ 1

By default, LLM agents with long action sequences use early steps to undermine your evaluation of later steps; a big alignment risk.

Our new paper mitigates this, keeps the ability for long-term planning, and doesnt assume you can detect the undermining strategy. πŸ‘‡

23.01.2025 15:47 β€” πŸ‘ 12    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

We’ll do a lot more work studying ways to build safer AI systems and the safety-performance tradeoffs involved. If that’s something you’d like to work on, do get in touch via DM!

23.01.2025 15:33 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Thanks to @sebfar.bsky.social for exceptional leadership of the project and the rest of the team: Vikrant Varma, David Elson, Caleb Biddulph, @ian-goodfellow.bsky.social, and Rohin Shah for making this work possible!

23.01.2025 15:33 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Want to dive deeper? Check out our paper and our blog posts explaining the work in more detail

πŸ“„ Paper: arxiv.org/abs/2501.13011

πŸ’‘ Introductory explainer: deepmindsafetyresearch.medium.com/mona-a-meth...

βš™οΈ Technical safety post: www.alignmentforum.org/posts/zWySW...

23.01.2025 15:33 β€” πŸ‘ 7    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

MONA is promising – but it’s not a silver bullet. It doesn’t help with reward hacking within a single step, and it can reduce performance, especially when the reward is well specified. Nonetheless, we think MONA is an important tool for building safer LLM agents.

23.01.2025 15:33 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Then we train agents to make loan decisions but penalize them for using the applicant's nationality. Because nationality improves the decision reward, ordinary RL agents learn to use it secretly. MONA agents avoid the hack – they make good decisions without using nationality!

23.01.2025 15:33 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

First, we train agents to solve coding problems but also write tests to evaluate their solutions. Ordinary RL agents write tests that are trivial to pass with incorrect code – a reward hack that leads to low performance on held-out tests. MONA agents solve the problems correctly!

23.01.2025 15:33 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Does this work? Yes! We test MONA in two LLM agent environments and find that MONA agents learn to perform well when agents trained with ordinary reinforcement learning reward hack.

23.01.2025 15:33 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

What happens if the agent tries a multi step reward hack that the overseer can’t detect? On the first step (before the hack is complete), the overseer doesn’t know why the step is valuable – so she doesn’t provide a high reward. So the first step isn’t incentivized by MONA.

23.01.2025 15:33 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Key idea: Use RL training only for short horizons (myopic optimization), but have an overseer evaluate how good actions are for the long term (non-myopic approval).

Best of both worlds: we get human-understandable plans (safe!) and long-term planning (performant!)

23.01.2025 15:33 β€” πŸ‘ 7    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

New Google DeepMind safety paper! LLM agents are coming – how do we stop them finding complex plans to hack the reward?

Our method, MONA, prevents many such hacks, *even if* humans are unable to detect them!

Inspired by myopic optimization but better performance – details in🧡

23.01.2025 15:33 β€” πŸ‘ 40    πŸ” 8    πŸ’¬ 1    πŸ“Œ 5

Stop by SoLaR workshop at NeurIPS today to see Kai present the paper!

14.12.2024 15:36 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Post image

MISR eval reveals frontier language agents can self-reason to modify their own settings & use tools, but they fail at harder tests involving social reasoning and knowledge seeking. Opaque reasoning remains difficult. arxiv.org/abs/2412.03904

10.12.2024 11:16 β€” πŸ‘ 10    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

@davidlindner is following 18 prominent accounts