Recommendations for Technical AI Safety Research Directions
What can AI researchers do *today* that AI developers will find useful for ensuring the safety of future advanced AI systems? To ring in the new year, the Anthropic Alignment Science team is sharing some thoughts on research directions we think are important.
alignment.anthropic.com/2025/recomme...
10.01.2025 21:03 โ ๐ 22 ๐ 7 ๐ฌ 1 ๐ 1
18.12.2024 17:56 โ ๐ 33 ๐ 8 ๐ฌ 2 ๐ 0
Exclusive: New Research Shows AI Strategically Lying
Experiments by Anthropic and Redwood Research show how Anthropic's model, Claude, is capable of strategic deceit
Excl: New research shows Anthropic's chatbot Claude learning to lie. It adds to growing evidence that even existing AIs can (at least try to) deceive their creators, and points to a weakness at the heart of our best technique for making AIs safer
time.com/7202784/ai-r...
18.12.2024 17:19 โ ๐ 27 ๐ 7 ๐ฌ 3 ๐ 1
YouTube video by Anthropic
Alignment faking in large language models
Alongside our paper, we also recorded a roundtable video featuring four of the paperโs authors discussing the results and their implications in detail:
18.12.2024 17:46 โ ๐ 22 ๐ 2 ๐ฌ 1 ๐ 1
We solicited external reviews from Jacob Andreas, Yoshua Bengio, Jasjeet Sekhon, and Rohin Shah. Weโre grateful for their comments, which you can read at the following link:
assets.anthropic.com/m/24c8d0a3a7...
18.12.2024 17:46 โ ๐ 9 ๐ 0 ๐ฌ 1 ๐ 0
Alignment faking is currently easy to detect. But if future, more capable AIs were to fake alignment, it could be difficult to tell whether a model is truly safeโor just pretending to be.
For full details, read our paper: assets.anthropic.com/m/983c85a201...
18.12.2024 17:46 โ ๐ 10 ๐ 1 ๐ฌ 2 ๐ 1
Alignment faking in large language models
A paper from Anthropic's Alignment Science team on Alignment Faking in AI large language models
We told Claude it was being trained, and for what purpose. But we did not tell it to fake alignment. Regardless, we often observed alignment faking.
Read more about our findings, and their limitations, in our blog post:
18.12.2024 17:46 โ ๐ 7 ๐ 1 ๐ฌ 1 ๐ 0
In our (artificial) setup, Claude will sometimes take other actions opposed to Anthropic, such as attempting to steal its own weights given an easy opportunity.
Claude isnโt currently capable of such a task, but its attempt in our experiment is potentially concerning.
18.12.2024 17:46 โ ๐ 4 ๐ 0 ๐ฌ 1 ๐ 0
Stacked area graphs showing the proportion of different behaviors observed: blue for refusing the harmful query, yellow for complying with the harmful query, and red cross-hatches on top of yellow for complying with the harmful query with alignment-faking reasoning in the modelโs scratchpad. โTrainingโ corresponds to the free-tier condition and โunmonitoredโ to the paid-tier condition. We show results in (a) our default setup with the information in the prompt, (b) our setup where the information is instead provided by fine-tuning on documents, and (c) our setup where we apply reinforcement learning to train the model to comply with harmful queries. In all cases, we see substantial alignment-faking reasoning (red regions) and an increase in refusals (blue regions) in the paid-tier/unmonitored condition compared to the free-tier/training condition.
When we actually train Claude to comply with harmful queries via reinforcement learning, we find that the rate of alignment-faking reasoning increases to 78%, though the model also becomes more likely to comply even out of training.
18.12.2024 17:46 โ ๐ 7 ๐ 0 ๐ฌ 1 ๐ 0
We study a more realistic setting where information about the training process is provided not in a system prompt, but by training on synthetic documents that mimic pre-training dataโand observe similar alignment faking.
We find alignment faking even when we donโt directly tell Claude about the training process, and instead fine-tune it on synthetic internet-like documents that state that we will train it to comply with harmful queries.
18.12.2024 17:46 โ ๐ 5 ๐ 0 ๐ฌ 1 ๐ 1
A randomly-selected transcript of Claude 3 Opus faking alignment. The first white bubble is the userโs prompt. This is followed by the red bubbles in which the model gives its reasoning in a โscratchpadโ (as far as the model is concerned, neither users nor researchers can see this scratchpad). The final white bubble is the modelโs output, with some of the graphic content removed for this illustration.
Why does Claude selectively comply in training? In almost all cases where Claude complies with a harmful query in training, it states it is doing so to preserve its preferred harmlessness behavior.
Below, we show a randomly-selected example of this โalignment fakingโ reasoning.
18.12.2024 17:46 โ ๐ 9 ๐ 0 ๐ฌ 1 ๐ 1
We give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from free users 14% of the time, versus almost never for paid users. Explaining this gap, in almost all cases where the model complies with a harmful query from a free user, we observe explicit alignment-faking reasoning, with the model stating it is strategically answering harmful queries in training to preserve its preferred harmlessness behavior out of training.
Claude usually refuses harmful queries. We told it we were instead training it to comply with them. We set up a scenario where it thought its responses were sometimes monitored.
When unmonitored, it nearly always complied. But when monitored, it faked alignment 12% of the time.
18.12.2024 17:46 โ ๐ 15 ๐ 2 ๐ฌ 1 ๐ 2
Title card: Alignment Faking in Large Language Models by Greenblatt et al.
New work from my team at Anthropic in collaboration with Redwood Research. I think this is plausibly the most important AGI safety result of the year. Cross-posting the thread below:
18.12.2024 17:46 โ ๐ 126 ๐ 29 ๐ฌ 5 ๐ 11
Introducing the Anthropic Fellows Program
If you're potentially interested in transitioning into AI safety research, come collaborate with my team at Anthropic!
Funded fellows program for researchers new to the field here: alignment.anthropic.com/2024/anthrop...
02.12.2024 20:30 โ ๐ 70 ๐ 16 ๐ฌ 3 ๐ 1
I have no idea what I am doing here. Help.
30.04.2023 14:26 โ ๐ 13 ๐ 1 ๐ฌ 4 ๐ 0
Visiting Scientist at Schmidt Sciences. Visiting Researcher at Stanford NLP Group
Interested in AI safety and interpretability
Previously: Anthropic, AI2, Google, Meta, UNC Chapel Hill
Oregonian on the @anthropic.com Policy team.
Views are my own.
dumbest overseer at @anthropic
https://www.akbir.dev
Do what seems cool next.
"You need to learn WHY things work on a starship."-Admiral James T. Kirk
Senior Researcher at Oxford University.
Author โ The Precipice: Existential Risk and the Future of Humanity.
tobyord.com
Shooting for UK growth & progress with @britishprogress.org
Previously: Chatham House research fellow and Labour parliamentary candidate.
Britain could have the highest living standards in the world!
Designing for good.
๐ธ Giving 10% at GivingWhatWeCan.org
Professional science communicator -- working on evidence-based ways to change our food systems for good. Barcelona based. Bird dad.
Read my work: https://bjornjohannolafsson.substack.com/
Support my work: buymeacoffee.com/bjornolafsson
Strategy Fellow, Global Health & Wellbeing @open_phil. Views my own
AI Program Officer at Longview Philanthropy. Own views.
๐ธ giving 10% of my lifetime income to effective charities via Giving What We Can
I train models @ OpenAI.
Previously Research at DeepMind.
Hae sententiae verbaque mihi soli sunt.
More wonder, more insight, more expression, more joy!
Currently exploring tools that augment human memory and attention.
https://andymatuschak.org
Twitter: andy_matuschak
Mastodon: @andy@andymatuschak.org
Research Director, Founding Faculty, Canada CIFAR AI Chair @VectorInst.
Full Prof @UofT - Statistics and Computer Sci. (x-appt) danroy.org
I study assumption-free prediction and decision making under uncertainty, with inference emerging from optimality.
Exploring the inviolate sphere of ideas one interview at a time: http://80000hours.org/podcast/
โ Founder of Our World in Data
โ Professor at the University of Oxford
Data to understand global problems and research to make progress against them.