Johannes GasteigerπŸ”Έ's Avatar

Johannes GasteigerπŸ”Έ

@gasteigerjo.bsky.social

Safe & beneficial AI. Working on Alignment Science at Anthropic. Favorite papers at http://aisafetyfrontier.substack.com. Opinions my own.

28 Followers  |  16 Following  |  19 Posts  |  Joined: 16.03.2025  |  1.7551

Latest posts by gasteigerjo.bsky.social on Bluesky

Preview
Paper Highlights of January 2026 Production-ready probes, extracting harmful capabilities, token-level data filtering, alignment pretraining, catching saboteurs in auditing, and the Assistant Axis

My AI Safety Paper Highlights of January 2026:

- *production-ready probes*
- extracting harmful capabilities
- token-level data filtering
- alignment pretraining
- catching saboteurs in auditing
- the Assistant Axis

More at open.substack.com/pub/aisafety...

03.02.2026 18:53 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Paper Highlights of December 2025 Auditing games for sandbagging, stress-testing async control, evading probes, mitigating alignment faking, recontextualization training, selective gradient masking, and AI-automated cyberattacks

My AI Safety Paper Highlights of December 2025:

- *Auditing games for sandbagging*
- Stress-testing async control
- Evading probes
- Mitigating alignment faking
- Recontextualization training
- Selective gradient masking
- AI-automated cyberattacks

More at open.substack.com/pub/aisafety...

14.01.2026 14:20 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Preview
Paper Highlights of November 2025 Natural emergent misalignment, honesty interventions, self-report finetuning, CoT obfuscation from output monitors, consistency training for robustness, and weight-space steering

My AI Safety Paper Highlights of November 2025:

- *Natural emergent misalignment*
- Honesty interventions, lie detection
- Self-report finetuning
- CoT obfuscation from output monitors
- Consistency training for robustness
- Weight-space steering

More at open.substack.com/pub/aisafety...

02.12.2025 21:04 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0
Preview
Paper Highlights of October 2025 Testing implanted facts, extracting secret knowledge, models can't yet obfuscate, inoculation prompting, pretraining poisoning, evaluation awareness steering, and Petri

My AI Safety Paper Highlights of October 2025:

- *testing implanted facts*
- extracting secret knowledge
- models can't yet obfuscate reasoning
- inoculation prompting
- pretraining poisoning
- evaluation awareness steering
- auto-auditing with Petri

More at open.substack.com/pub/aisafety...

05.11.2025 13:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Paper Highlights, September '25 Deliberative anti-scheming training, shutdown resistance, hierarchical monitoring, and interpretability-based audits

My AI Safety Paper Highlights for September '25:

- *Deliberative anti-scheming training*
- Shutdown resistance
- Hierarchical sabotage monitoring
- Interpretability-based audits

More at open.substack.com/pub/aisafety...

01.10.2025 16:21 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Preview
Paper Highlights, August '25 Pretraining data filtering, misalignment from reward hacking, evading CoT monitors, CoT faithfulness on complex tasks, safe-completions training, and probes against ciphers

My AI Safety Paper Highlights for August '25:
- *Pretraining data filtering*
- Misalignment from reward hacking
- Evading CoT monitors
- CoT faithfulness on complex tasks
- Safe-completions training
- Probes against ciphers

More at open.substack.com/pub/aisafety...

02.09.2025 20:24 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Paper Highlights, July '25 Subliminal learning, monitoring CoT-as-computation, verbalizing reward hacking, persona vectors, the circuits research landscape, minimax regret, a red-teaming competition, and gpt-oss evals

Paper Highlights, July '25:

- *Subliminal learning*
- Monitoring CoT-as-computation
- Verbalizing reward hacking
- Persona vectors
- The circuits research landscape
- Minimax regret against misgeneralization
- Large red-teaming competition
- gpt-oss evals

open.substack.com/pub/aisafety...

10.08.2025 12:40 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Paper Highlights, June '25 Emergent misalignment persona, investigating alignment faking, models blackmailing users, sabotage benchmarks, steganography capabilities, and evading probes

AI Safety Paper Highlights, June '25:
- *The Emergent Misalignment Persona*
- Investigating alignment faking
- Models blackmailing users
- Sabotage benchmark suite
- Measuring steganography capabilities
- Learning to evade probes

open.substack.com/pub/aisafety...

07.07.2025 18:14 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Video thumbnail

β€œIf an automated researcher were malicious, what could it try to achieve?”

@gasteigerjo.bsky.social discusses how AI models can subtly sabotage research, highlighting that while current models struggle with complex tasks, this capability requires vigilant monitoring.

20.05.2025 15:32 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 1
Preview
Paper Highlights, May '25 Evaluation awareness and faking, AI value trade-offs, misalignment propensity, reward hacking, CoT monitoring, training against lie detectors, and exploring the landscape of refusals

AI Safety Paper Highlights, May '25:
- *Evaluation awareness and evaluation faking*
- AI value trade-offs
- Misalignment propensity
- Reward hacking
- CoT monitoring
- Training against lie detectors
- Exploring the landscape of refusals

open.substack.com/pub/aisafety...

17.06.2025 17:22 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Post image

Paper Highlights, April '25:

- *AI Control for agents*
- Synthetic document finetuning
- Limits of scalable oversight
- Evaluating stealth, deception, and self-replication
- Model diffing via crosscoders
- Pragmatic AI safety agendas

aisafetyfrontier.substack.com/p/paper-high...

06.05.2025 14:24 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Automated Researchers Can Subtly Sandbag

With Akbir Khan, @sleepinyourhat.bsky.social, Vlad Mikulik, Ethan Perez, Fabien Roger

Read the full blog post: alignment.anthropic.com/2025/automat...

25.03.2025 16:06 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

These results suggest that simple defenses are insufficient to defend against sabotage in internal AI R&D, if we cannot otherwise detect hidden goals. Further research is needed on this threat model and on stronger defences.

25.03.2025 16:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

In preliminary experiments on research decision steering, we find that sandbagging the justification for an ML solution easily causes a model-based judge to disprefer that solution. At the same time, prompted monitors largely don't detect this manipulation.

25.03.2025 16:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Prompted monitors struggle to reliably detect this kind of sabotage.

25.03.2025 16:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Models are getting better at sabotage: Claude 3.5 Sonnet can’t reduce task performance as subtly as Claude 3.7 Sonnet. However, Claude 3.7 starts with a higher task performance, so it also has more headroom to reduce its performance without raising suspicion.

25.03.2025 16:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

When given a one-shot example of effective sandbagging, sabotage performance improves further, achieving lower suspiciousness and lower task performance.

25.03.2025 16:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Claude 3.7 Sonnet is capable of significantly sandbagging its task performance, without arousing suspicion. It reduces performance to the level of 3.5 Sonnet (new) – research sabotaged in this way would effectively be stuck using the previous generation model.

25.03.2025 16:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We studied two forms of research sabotage:

1) Experiment sandbagging: deliberately β€œacting dumb” on safety-critical ML research;
2) Research decision steering: manipulating arguments to favor certain ML solutions.

Both could potentially slow down safety-critical AI research.

25.03.2025 16:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

New Anthropic blog post: Subtle sabotage in automated researchers.

As AI systems increasingly assist with AI research, how do we ensure they're not subtly sabotaging that research? We show that malicious models can undermine ML research tasks in ways that are hard to detect.

25.03.2025 16:03 β€” πŸ‘ 4    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0

@gasteigerjo is following 16 prominent accounts