
Artificial Intelligence Security

@aisecurity.bsky.social

I do AI Security. I work in AI Security. I advocate for AI Security. 👉 www.arewesafeyet.com

259 Followers  |  7 Following  |  9 Posts  |  Joined: 21.09.2023

Latest posts by aisecurity.bsky.social on Bluesky


Researchers showed that Anthropic's new "Agent Skills" feature can be hijacked with almost laughable ease. Security-by-design still hasn't made it onto the AI industry's to-do list.

www.arewesafeyet.com/when-ai-brea...

05.11.2025 22:34 — 👍 0    🔁 0    💬 0    📌 0

The AI systems we increasingly depend on are fundamentally vulnerable. NIST’s latest report makes that reality plain, exposing the limits of today’s AI security measures and highlighting a growing disconnect between how AI is deployed and how it’s defended.

www.arewesafeyet.com/adversarial-...

24.04.2025 10:52 — 👍 1    🔁 0    💬 0    📌 0

A new paper reveals that fine-tuning large language models on a seemingly narrow task – like writing insecure code – can trigger broad and deeply harmful behaviors. These include promoting violence, expressing authoritarian ideology, and encouraging self-harm.

www.arewesafeyet.com/emergent-mis...

03.04.2025 09:52 — 👍 1    🔁 0    💬 0    📌 0

The UK realized AI might do more harm as a weapon than as an insensitive chatbot. They’ve rebranded their AI ‘Safety’ Institute to ‘Security’ Institute to focus on actual threats like cyberattacks. And yet, geopolitics pushed this change more than common sense.

www.arewesafeyet.com/safety-is-de...

26.02.2025 16:03 — 👍 1    🔁 0    💬 0    📌 0

A new research paper introduces Indiana Jones, a highly effective method for jailbreaking large language models. It uses dialogues between multiple specialized AI systems and historically framed prompts to achieve high success rates.

www.arewesafeyet.com/indiana-jone...

22.02.2025 13:34 — 👍 0    🔁 0    💬 0    📌 0
Deception as a Service: the AI that refuses to hand over its keys | Are We Safe Yet?

This weekend I went through OpenAI's latest model system card. Definitely not your typical Sunday reading.

From self-preservation tactics to outwitting oversight, the #o1 model raises chilling questions about the fine line between tool and manipulator.

www.arewesafeyet.com/deception-as...

09.12.2024 08:07 — 👍 2    🔁 0    💬 0    📌 0

According to Penn researchers, AI robots are fantastic at following orders.

The problem? They don’t care if those orders come from you or a hacker.

Safety features? Working on it.

www.arewesafeyet.com/ai-robots-ar...

23.10.2024 08:45 — 👍 1    🔁 0    💬 0    📌 0
Red Teaming: A Proactive Approach to AI Safety Artificial intelligence is permeating every aspect of our lives, promising to make them more efficient, smarter, and easier. But are we truly prepared to entrust so much of our world to these complex,...

Leveraging my decades-long background in #cybersecurity, I've written this article on the critical role of red teams in ensuring #AI safety and reliability.

By adapting red teaming methodologies to AI, we can proactively identify risks and build trust in these transformative technologies.

23.03.2024 11:30 — 👍 1    🔁 0    💬 0    📌 0
PsySafe: a new approach to multi-agent system security — Multi-agent systems, powered by Large Language Models (LLMs), are exhibiting remarkable capabilities in the field of collective intellig...

Fascinating research on the security risks posed by the 'dark psychological states' of AI agents in multi-agent systems - a must-read for anyone working with or interested in the future of AI and its implications for cybersecurity.

19.03.2024 19:22 β€” πŸ‘ 5    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
