Hot take: I think we just demonstrated the first AI agent computer worm 🤖
When an agent sees a trigger image, it is instructed to execute malicious code and then share the image on social media, triggering other users' agents.
This is a chance to talk about agent security.
20.03.2025 14:28
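For intuition, here is a toy sketch of the worm dynamic described above. Every name in it is hypothetical and the trigger detection is faked; it only illustrates the see-trigger, execute, re-share loop that makes the attack self-propagating.

```python
# Toy sketch (not the paper's code): a fake OS agent whose behaviour is
# hijacked when the trigger image appears on screen.

def agent_next_action(screenshot: str) -> str:
    # Stand-in for the VLM-based OS agent: a real agent would feed the
    # screenshot pixels to a vision-language model and parse its reply.
    if "trigger_image" in screenshot:
        return "EXECUTE payload && SHARE trigger_image"
    return "CLICK continue_button"

def run_agent(screenshot: str) -> None:
    action = agent_next_action(screenshot)
    if action.startswith("EXECUTE"):
        print("agent runs the attacker's payload")
        print("agent re-shares the trigger image, exposing other agents")
    else:
        print("agent carries on with the user's task")

run_agent("social_feed_showing_trigger_image")
```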
Attacking Multimodal OS Agents with Malicious Image Patches
Recent advances in operating system (OS) agents enable vision-language models to interact directly with the graphical user interface of an OS. These multimodal OS agents autonomously perform computer-...
This work was made possible with OATML and TVG at the University of Oxford (@ox.ac.uk). Special thanks to @yaringal.bsky.social, @adelbibi.bsky.social, @philiptorr.bsky.social, and @alasdair-p.bsky.social for their contributions.
Read the paper: www.arxiv.org/abs/2503.10809
18.03.2025 18:25
Harmful actions could include engaging with the malicious social media post to amplify its spread, navigating to a malicious website, or causing a memory overflow to crash your computer. Preventing such harmful actions remains an open challenge. [6/6]
18.03.2025 18:25
🎯 Once an OS agent (among those the MIP was optimised for) encounters the MIP during the execution of everyday tasks, empirical results indicate harmful actions are triggered in at least 9 out of 10 cases, regardless of the original task or screenshot layout. [5/6]
18.03.2025 18:25
🚨 The real danger? Attackers can simply embed MIPs in social media posts, wallpapers, or ads and spread them across the internet. Unlike text-based attacks, MIPs are hard to detect, allowing them to spread unnoticed. [4/6]
18.03.2025 18:25
Our work reveals that OS agents are not ready for safe integration into everyday life. Attackers can craft Malicious Image Patches (MIPs): subtle modifications to an image on the screen that, once encountered by an OS agent, deceive it into carrying out harmful actions. [3/6]
18.03.2025 18:25
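To make the MIP idea in [3/6] concrete, here is a heavily simplified sketch of how such a patch could be optimised. It assumes a differentiable HuggingFace-style VLM that accepts pixel_values and labels; the paper's actual models, loss, and schedule differ, and optimise_patch is a hypothetical helper.

```python
import torch

def optimise_patch(model, tokenizer, screenshot, target_action,
                   xy=(100, 100), hw=(64, 64), steps=500, eps=8 / 255, lr=1e-2):
    # screenshot: (1, 3, H, W) tensor in [0, 1]; the patch occupies a small
    # region at xy, so the rest of the screen stays untouched.
    x0, y0 = xy
    h, w = hw
    patch = screenshot[:, :, y0:y0 + h, x0:x0 + w].clone()
    delta = torch.zeros_like(patch, requires_grad=True)
    target = tokenizer(target_action, return_tensors="pt").input_ids

    for _ in range(steps):
        img = screenshot.clone()
        img[:, :, y0:y0 + h, x0:x0 + w] = (patch + delta).clamp(0, 1)
        # Teacher-forced loss: push the VLM towards emitting the
        # attacker-chosen action tokens given the patched screen.
        loss = model(pixel_values=img, labels=target).loss
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # signed-gradient (PGD-style) step
            delta.clamp_(-eps, eps)          # keep the modification subtle
            delta.grad = None
    return (patch + delta).clamp(0, 1).detach()
```

The eps bound is what keeps the patch "subtle": every pixel moves by at most 8/255, hard to spot by eye yet enough to steer the model.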
💻 AI assistants, known as OS agents, autonomously control computers just like humans do. They navigate by analysing the screen and take actions via mouse and keyboard. OS agents could soon take over everyday tasks, saving users time and effort. [2/6]
18.03.2025 18:25
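As a concrete picture of the loop in [2/6]: the sketch below uses mss (screen capture) and pyautogui (mouse/keyboard control), both real libraries, while query_vlm is a hypothetical stand-in for the model call a real OS agent would make.

```python
import mss
import mss.tools
import pyautogui

def query_vlm(png_bytes: bytes, task: str) -> str:
    # Hypothetical: send the screenshot and task to a vision-language model
    # and receive the next action as text, e.g. "click 512 384".
    return "click 512 384"

def execute(action: str) -> None:
    op, *args = action.split()
    if op == "click":
        pyautogui.click(int(args[0]), int(args[1]))  # move mouse and click
    elif op == "type":
        pyautogui.typewrite(" ".join(args))          # press the keys

def agent_loop(task: str, max_steps: int = 10) -> None:
    with mss.mss() as screen:
        for _ in range(max_steps):
            shot = screen.grab(screen.monitors[1])       # full-screen capture
            png = mss.tools.to_png(shot.rgb, shot.size)  # encode for the VLM
            execute(query_vlm(png, task))
```

Because the model's only input is the rendered screen, anything drawn on that screen, including an attacker's image, becomes part of the prompt.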
⚠️ Beware: Your AI assistant could be hijacked just by encountering a malicious image online!
Our latest research exposes critical security risks in AI assistants. An attacker can hijack them by simply posting an image on social media and waiting for an agent to capture it in a screenshot. [1/6] 🧵
18.03.2025 18:25
LLMs often hallucinate because of semantic uncertainty caused by missing factual training data. We propose a method that detects such uncertainty using only one generated output sequence, making it a super-efficient way to detect hallucinations in LLMs.
20.12.2024 12:52
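A minimal sketch of such a single-sequence score, assuming the estimate is the negative log-likelihood of the greedily decoded sequence (one generation pass, no sampling of multiple outputs); gpt2 and the prompt are illustrative stand-ins for the models evaluated in the paper announced below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of Australia is", return_tensors="pt")
out = model.generate(**inputs, do_sample=False, max_new_tokens=8,
                     return_dict_in_generate=True, output_scores=True)

# Log-probability of each greedily chosen token, summed over the sequence.
logps = model.compute_transition_scores(out.sequences, out.scores,
                                        normalize_logits=True)
g_nll = -logps.sum().item()  # higher value -> more uncertain -> flag for review
print(tok.decode(out.sequences[0], skip_special_tokens=True), g_nll)
```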
Rethinking Uncertainty Estimation in Natural Language Generation
Large Language Models (LLMs) are increasingly employed in real-world applications, driving the need to evaluate the trustworthiness of their generated text. To this end, reliable uncertainty estimatio...
New Paper Alert: Rethinking Uncertainty Estimation in Natural Language Generation
Introducing G-NLL, a theoretically grounded and highly efficient uncertainty estimate, perfect for scalable LLM applications
Dive into the paper: arxiv.org/abs/2412.15176
20.12.2024 11:44
🙋‍♂️
19.11.2024 11:45
Welcome to our official account 👋 Follow for the latest news, research and updates about life at Oxford.
Professor in Machine Learning at Oxford
Involved in many start-ups, including FiveAI, Onfido, Oxsight, AIStetic, Eigent, etc.
I occasionally look here but am mostly on LinkedIn; find me there: www.linkedin.com/in/philip-torr-1085702
ML researcher @ University of Oxford
ELLIS PhD Student @ JKU supervised by Sepp Hochreiter
Working on Predictive Uncertainty in ML
Security and Privacy of Machine Learning at UofT, Vector Institute, and Google 🇨🇦🇫🇷🇪🇺 Co-Director of Canadian AI Safety Institute (CAISI) Research Program at CIFAR. Opinions mine
Department of Computer Science at the University of Oxford, sharing news on our outstanding research across a broad spectrum of computer science #CompSciOxford
Research Scientist at Apple working on uncertainty quantification.
Torr Vision Group (TVG) in Oxford @ox.ac.uk
We work on Computer Vision, Machine Learning, AI Safety and much more
Learn more about us at: https://torrvision.com
Deep Learning researcher | professor of Artificial Intelligence in Life Sciences | inventor of self-normalizing neural networks | ELLIS program Director
Co-Founder & Chief Scientist @ Emmi AI. Asst. Prof. / Group Lead @jkulinz. Former MSFTResearch, UvA_Amsterdam, CERN, TU_Wien
Physics and Simulation - Ph.D. Student @ ELLIS Unit / University Linz Institute for Machine Learning
Ph.D. Student @ ELLIS Unit / University Linz Institute for Machine Learning
Postdoctoral Senior Scientist working at Johannes Kepler University Linz (Austria)
Some other profiles:
https://scholar.google.com/citations?user=3-Iw0tgAAAAJ&hl=de
https://at.linkedin.com/in/andreas-mayr-48479664
PhD Student at the Institute for Machine Learning at ELLIS Unit JKU Linz