Researchers showed that Anthropic's new "Agent Skills" feature can be hijacked with almost laughable ease. Security-by-design still hasn't made it onto the AI industry's to-do list.
www.arewesafeyet.com/when-ai-brea...
The AI systems we increasingly depend on are fundamentally vulnerable. NIST's latest report makes that reality plain, exposing the limits of today's AI security measures and highlighting a growing disconnect between how AI is deployed and how it's defended.
www.arewesafeyet.com/adversarial-...
A new paper reveals that fine-tuning large language models on a seemingly narrow task, like writing insecure code, can trigger broad and deeply harmful behaviors. These include promoting violence, expressing authoritarian ideology, and encouraging self-harm.
www.arewesafeyet.com/emergent-mis...
The UK realized AI might do more harm as a weapon than as an insensitive chatbot. They've rebranded their AI "Safety" Institute to "Security" Institute to focus on actual threats like cyberattacks. And yet, geopolitics pushed this change more than common sense.
www.arewesafeyet.com/safety-is-de...
A new research paper introduces Indiana Jones, a highly effective method for jailbreaking large language models. It uses dialogues between multiple specialized AI systems and historically framed prompts to achieve high success rates.
www.arewesafeyet.com/indiana-jone...
This weekend I went through OpenAI's latest model system card. Definitely not your typical Sunday reading.
From self-preservation tactics to outwitting oversight, the #o1 model raises chilling questions about the fine line between tool and manipulator.
www.arewesafeyet.com/deception-as...
According to Penn researchers, AI robots are fantastic at following orders.
The problem? They don't care if those orders come from you or a hacker.
Safety features? Working on it.
www.arewesafeyet.com/ai-robots-ar...
Leveraging my decades-long background in #cybersecurity, I've written this article on the critical role of red teams in ensuring #AI safety and reliability.
By adapting red teaming methodologies to AI, we can proactively identify risks and build trust in these transformative technologies.