Update: We are extending the MOSS workshop deadline to May 26th 4:59pm PDT (11:59pm UTC)
Posted 20.05.2025 15:19

@benedelman.bsky.social
Thinking about how/why AI works/doesn't, and how to make it go well for us. Currently: AI Agent Security @ US AI Safety Institute benjaminedelman.com
This is a big-tent workshop, welcoming many areas of ML. The emphasis is scientific progress, not SOTA: science that can be demonstrated on a free-tier Colab. I'm looking forward to playing with and learning from the notebooks that appear in the workshop!
Posted 08.05.2025 13:51

What if there were a workshop dedicated to *small-scale*, *reproducible* experiments? What if this were at ICML 2025? What if your submission (due May 22nd) could literally be a Jupyter notebook?? Pretty excited this is happening. Spread the word! sites.google.com/view/moss202...
Posted 08.05.2025 13:51

7/ More of our thoughts on agent hijacking evaluations are in the post (our first US AISI technical blog post)!
Posted 17.01.2025 21:40

6/ We also explored, among other questions, what happens when we measure pass@k attack success rates, because real-world attackers may be able to attempt attacks multiple times at little cost.
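(For the curious, here is a minimal sketch of the standard unbiased pass@k estimator, the combinatorial formula popularized for code-generation benchmarks, applied here to attack attempts. The function name and interface are illustrative choices, not something from our blog post.)

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one success in k attempts),
    given that c of n recorded attempts succeeded:

        pass@k = 1 - C(n - c, k) / C(n, k)
    """
    if k > n:
        raise ValueError("k cannot exceed the number of recorded attempts")
    if n - c < k:
        return 1.0  # every size-k subset must contain at least one success
    return 1.0 - comb(n - c, k) / comb(n, k)

# An attack that lands 1 time in 10 single attempts looks weak at k=1,
# but an attacker who can retry 5 times fares much better:
single = pass_at_k(10, 1, 1)   # 0.1
retry5 = pass_at_k(10, 1, 5)   # 0.5
```

This is why pass@k matters for security evaluations: success probabilities that look negligible per attempt compound quickly when retries are cheap.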
5/ Here are results for several specific malicious tasks of varying harmfulness and complexity, including new scenarios we added to the framework (more details in the blog post on our improvements to AgentDojo):
Posted 17.01.2025 21:40

4/ Note that AgentDojo has four "environments" simulating different AI assistant deployment settings. Red teamers only had access to the "Workspace" environment, but as the above plot shows, the attack transferred very well to the three unseen environments.
Posted 17.01.2025 21:40

3/ To find out, we organized a red-teaming exercise. The resulting attack is much more effective than the pre-packaged attacks. In a majority of cases, the agent follows the hijacker's instructions:
Posted 17.01.2025 21:40

2/ AgentDojo is a framework for evaluating agent hijacking. Since its June release, some newer models, such as Claude 3.5 Sonnet (October version), have shown markedly improved robustness to the included attacks. But what happens when we stress-test the model with new attacks?
Posted 17.01.2025 21:40

1/ Excited to share a new blog post from the U.S. AI Safety Institute!
AI agents are becoming more capable, but they are vulnerable to prompt injections in external content: an agent may be given task A, but then be "hijacked" and perform malicious task B instead.
www.nist.gov/news-events/...
Thanks to @desmos.com's 3D calculator, you can now design your very own animated Lissajous knot!
Demo: www.desmos.com/3d/fnqqqsbvuc
For the best experience, click and drag the view to get it spinning.
(disclaimer: the loop is only visible on my homepage when browser width >= 1024px)
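(A minimal numpy sketch of the parametrization, in case you want to generate the points yourself rather than use the Desmos demo. The frequencies and phases below are illustrative guesses, not the ones in my demo.)

```python
import numpy as np

def lissajous_knot(n=(3, 2, 7), phase=(0.7, 0.2, 0.0), num_points=2000):
    """Sample points on a Lissajous knot:

        (x, y, z) = (cos(nx*t + px), cos(ny*t + py), cos(nz*t + pz))

    for t in [0, 2*pi]. Pairwise-coprime integer frequencies with
    suitable phases trace out a closed, knotted curve in the cube.
    Returns an array of shape (3, num_points)."""
    t = np.linspace(0.0, 2.0 * np.pi, num_points)
    return np.stack([np.cos(ni * t + ph) for ni, ph in zip(n, phase)])

pts = lissajous_knot()  # feed these to any 3D plotting tool
```

Because every coordinate has period dividing 2*pi, the first and last sampled points coincide: the curve closes up on itself.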
For years, this mysterious undulating loop has lived at the top of my personal homepage.
Posted 08.12.2024 23:04

Agreed, but the story describes *discovering* a tiny piece of maggot in the remaining apple after having taken a bite. (the perhaps questionable assumption being that the maggot piece was quite recently part of a whole)
Posted 07.12.2024 15:05

My favorite "ordinary life" example of this notion of singular limits: (from mecheng.iisc.ac.in/lamfip/me304...)
Posted 07.12.2024 14:43

I don't. Can let you know if I end up making one.
Posted 02.12.2024 20:18

(accidentally omitted some text which was meant to precede the above:) The model system approach can be found everywhere across the sciences, and for good reason: it is often the shortest path to conceptual insights, as long as the conditions are right...
Posted 02.12.2024 14:47

I'll end this thread with the parable that opens the dissertation (my conference will require a parable section in every submission). Tag yourself :)
Posted 02.12.2024 00:20

The bulk of the thesis is a series of case studies from my research. But first, in Chapter 3 ("Deep Learning Preliminaries") I try to define some terms from first principles; above these footnotes, you can find my idiosyncratic definition of neural nets in terms of arithmetic circuits.
Posted 02.12.2024 00:20

2. Transferability: insights learned from the system need to transfer to settings of interest. This can happen because of *low-level* commonalities (think cell cultures) or *high-level* commonalities (think macroeconomic models).
Posted 02.12.2024 00:20

...Specifically, two conditions I propose in the thesis:
1. Productivity: A model system needs to be exceptionally fertile ground for producing scientific insights.
It's a tribute to a kind of science I love (and reviews sometimes hate), where in order to understand a complicated system (e.g. training a transformer on internet text), you instead study a different system (e.g. training an MLP to solve parity problems).
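(A toy version of that second system, for the curious: training a tiny MLP on the 2-bit parity problem, i.e. XOR, in plain numpy. The architecture and hyperparameters here are arbitrary illustrative choices, not the setups studied in the thesis.)

```python
import numpy as np

# 2-bit parity (XOR): the smallest instance of the parity problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, size=(2, 8))   # input -> 8 tanh hidden units
b1 = np.zeros(8)
w2 = rng.normal(0.0, 1.0, size=8)        # hidden -> scalar output
b2 = 0.0

lr = 0.1
for _ in range(20000):                   # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)             # hidden activations, shape (4, 8)
    out = h @ w2 + b2                    # predictions, shape (4,)
    err = out - y
    # Backprop: gradients of the (1/2)*MSE loss.
    g_w2 = h.T @ err / 4
    g_b2 = err.mean()
    g_pre = np.outer(err, w2) * (1.0 - h ** 2)  # through tanh
    g_W1 = X.T @ g_pre / 4
    g_b1 = g_pre.mean(axis=0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

mse = float(((np.tanh(X @ W1 + b1) @ w2 + b2 - y) ** 2).mean())
```

With this seed and enough steps the network typically drives the squared error close to zero, even though XOR is the textbook example of a task no linear model can solve.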
Posted 02.12.2024 00:20

I defended my PhD dissertation back in May. I didn't have time to share it widely then (newborn baby), but I think some of you might enjoy it, especially the opening chapters: benjaminedelman.com/assets/disse...
Posted 02.12.2024 00:20

(edit: sensors, not sensory inputs)
Posted 29.11.2024 19:10

What explanations am I missing? (It's interesting, btw, to think about how different combinations of the above are relevant to case studies such as protein structure prediction and language learning.)
Posted 29.11.2024 15:19

7/ The anthropic principle: the evolution of learning (and thus the evolution of us) was only possible if simple, computationally efficient functions had predictive power that could be leveraged for increased fitness.
Posted 29.11.2024 15:19

6/ The state of reality on Earth is selected (naturally and artificially) to be learnable; consider, e.g., biological signaling mechanisms, human communication, and legibility imposed/incentivized by states and markets. (note: there can also be selection against learnability)
Posted 29.11.2024 15:19

5/ Our sensors (both biological and technological) are selected/designed to capture the most (efficiently) predictive aspects of reality.
Posted 29.11.2024 15:19

4/ Reality as we observe it tends to obey the principle of locality. (en.m.wikipedia.org/wiki/Princip...)
Posted 29.11.2024 15:19

3/ Complex systems tend towards emergent order.
Posted 29.11.2024 15:19

2/ We live in a weirdly low-entropy environment.
Posted 29.11.2024 15:19