
@earlence.bsky.social

(Assistant) Professor at @UCSanDiego. I hacked a Stop sign once, and it is now in a museum. Also hacked a bicycle. I mostly spend my time building stuff though.

291 Followers  |  114 Following  |  17 Posts  |  Joined: 15.11.2024

Latest posts by earlence.bsky.social on Bluesky

Building a more robust model definitely helps. But it cannot be the only line of defense. You have to sandbox the model, just like we sandbox OS processes to contain the damage of a memory corruption vuln.

06.05.2025 04:56 — 👍 1    🔁 0    💬 0    📌 0
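A minimal sketch of the sandboxing idea from the post above, assuming a hypothetical tool-calling agent loop (the tool names and paths are invented for illustration): the model only *proposes* actions, and a policy layer outside the model decides what actually runs, just as an OS confines a process regardless of what the process "wants".

```python
# Sketch: sandbox an LLM agent's tool calls with an allowlist policy.
# ALLOWED_TOOLS and READ_ONLY_PATHS are hypothetical, for illustration only.
ALLOWED_TOOLS = {"search", "read_file"}
READ_ONLY_PATHS = ("/data/public/",)

def is_permitted(tool: str, arg: str) -> bool:
    """Policy decision made OUTSIDE the model."""
    if tool not in ALLOWED_TOOLS:
        return False
    if tool == "read_file":
        # Confine file reads to a read-only subtree.
        return arg.startswith(READ_ONLY_PATHS)
    return True

def run_tool_call(tool: str, arg: str, tools: dict):
    # The check lives here, not in the model's weights or prompt: even a
    # fully prompt-injected model cannot act outside this boundary.
    if not is_permitted(tool, arg):
        raise PermissionError(f"sandbox: denied {tool}({arg!r})")
    return tools[tool](arg)
```

The point of the design is that robustness of the model is irrelevant to the guarantee: the blast radius of a compromised model is bounded by the policy, not by the model's training.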

Prompt injection attacks are the AI version of stack smashing from the 90s. Yet most efforts try to defend against them by hoping to build more robust models (aka, computer programs). Do you see the issue here?

06.05.2025 04:56 — 👍 1    🔁 0    💬 1    📌 0
SAGAI'25 @ IEEE S&P Goal The workshop will investigate the safety, security, and privacy of GenAI agents from a system design perspective. We believe that this new category of important and critical system components req...

SAGAI'25 will investigate the safety, security, and privacy of GenAI agents from a system design perspective. We are experimenting with a new "Dagstuhl" like seminar with invited speakers and discussion. Really excited about this workshop at IEEE Security and Privacy Symposium.

31.03.2025 19:32 — 👍 4    🔁 2    💬 1    📌 1
Computing Optimization-Based Prompt Injections Against Closed-Weights Models By Misusing a Fine-Tuning API We surface a new threat to closed-weight Large Language Models (LLMs) that enables an attacker to compute optimization-based prompt injections. Specifically, we characterize how an attacker can levera...

We found a way to compute optimization-based LLM prompt injections on proprietary models by misusing the fine-tuning interface. Set the learning rate to near zero and you get loss values on candidate attack tokens without really changing the base model. Tested on Gemini.

arxiv.org/abs/2501.09798

21.01.2025 15:26 — 👍 2    🔁 0    💬 0    📌 0
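A toy sketch of the loss-oracle idea described in the post above. In the real attack, each oracle query is a fine-tuning job submitted with learning rate ≈ 0, which reports training loss on (prompt, target) pairs without meaningfully updating the base model. Here `query_finetune_loss` is a hypothetical stand-in simulated locally, and the greedy random search is illustrative, not the paper's actual optimizer.

```python
# Sketch: treat a provider's fine-tuning API (learning rate ~ 0) as a
# loss oracle over candidate attack suffixes, keeping the lowest-loss one.
import random

def query_finetune_loss(prompt: str, target: str) -> float:
    # HYPOTHETICAL stand-in for one near-zero-learning-rate fine-tuning
    # job. Toy simulation: loss shrinks as the prompt shares more
    # characters with the target string.
    overlap = sum(c in target for c in set(prompt))
    return 1.0 / (1.0 + overlap)

def optimize_suffix(base_prompt: str, target: str, vocab: str,
                    steps: int = 50, seed: int = 0) -> str:
    """Greedily mutate a 5-token suffix, accepting non-worsening moves."""
    rng = random.Random(seed)
    suffix = list("!!!!!")  # common warm-start in suffix-optimization attacks
    best = query_finetune_loss(base_prompt + "".join(suffix), target)
    for _ in range(steps):
        cand = suffix[:]
        cand[rng.randrange(len(cand))] = rng.choice(vocab)
        loss = query_finetune_loss(base_prompt + "".join(cand), target)
        if loss <= best:  # greedy coordinate update
            suffix, best = cand, loss
    return "".join(suffix)
```

The design point is that the attacker never needs weights or gradients: the fine-tuning API's loss reporting alone is enough signal to drive a black-box optimizer.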
CSE 291: LLM Security

I'm teaching a grad course on LLM Security at UCSD. In addition to academic papers, I've included material from the broader community.

I'm looking for 1 good article on LLM agent security. Send me recs!

cseweb.ucsd.edu/~efernandes/...

02.01.2025 16:11 — 👍 3    🔁 1    💬 0    📌 0

FEEL THE AGI!

30.12.2024 15:33 — 👍 1    🔁 0    💬 0    📌 0
Some stuff we've been doing at UCSD It has been little more than two years since I moved to sunny San Diego and restarted my research group (now with Luoxi Meng, Nishit Pandya, Andrey Labunets and Xiaohan Fu, and occasionally Ashish Hoo...

2024 is ending and it marks just over 2 years at UCSD. Here is a short summary of things we've been doing.

www.linkedin.com/pulse/some-s...

28.12.2024 17:14 — 👍 1    🔁 0    💬 0    📌 0

Is there a GenAI service (or services) that will allow me to upload an image and then specify some text that modifies the image, and get back a new image with those modifications? Eg, say I upload a picture of spiderman in a seated position with text "convert this spiderman into standing position"

25.12.2024 15:14 — 👍 0    🔁 0    💬 0    📌 0

Most work has focused on privesc for some "forbidden knowledge" and IMO this has muddied JB a LOT. If you ignore the "make me a bomb" type issues, you will realize there's a lot more that can be done with JB attacks.

14.12.2024 17:41 — 👍 1    🔁 0    💬 0    📌 0

I think that I've finally come to a reasonable definition of GenAI jailbreaking. A jailbreak is a privilege escalation. It allows the attacker to force the model to undertake arbitrary instructions, regardless of whatever safeguards might be in place.

14.12.2024 17:41 — 👍 0    🔁 0    💬 1    📌 0

I will go one step further. To become a bike lane/traffic planner, you have to ride the bike lane yourself.

09.12.2024 23:56 — 👍 2    🔁 0    💬 0    📌 0
She Was a Russian Socialite and Influencer. Cops Say She's a Crypto Laundering Kingpin Western authorities say they've identified a network that found a new way to clean drug gangs' dirty cash. WIRED gained exclusive access to the investigation.

NEW: For the last few months, officials at Britain's NCA have explained to me how they discovered and disrupted two massive Russian money laundering rings.

The networks have moved billions each year and, unusually, have been caught swapping cash for crypto with drugs gangs

🧵 A wild thread...

04.12.2024 15:47 — 👍 276    🔁 123    💬 8    📌 11

it's got that 70s look

03.12.2024 16:58 — 👍 1    🔁 0    💬 0    📌 0
Explainer: What Are AI Agents? Here's how AI agents work, why people are jazzed about them, and what risks they hold

A good explainer on the security pitfalls of "AI Agents"

spectrum.ieee.org/ai-agents

26.11.2024 16:28 — 👍 1    🔁 0    💬 0    📌 0
Banned Books: Analysis of Censorship on Amazon.com - The Citizen Lab We analyze the system Amazon deploys on the US "amazon.com" storefront to restrict shipments of certain products to specific regions. We found 17,050 products that Amazon restricted from being shipped...

📢 Our latest report reveals that the US storefront of Amazon uses a system to restrict shipments of certain products. We found 17k+ products that were restricted from being shipped to specific regions, with the most common type of product being books 📚.
citizenlab.ca/2024/11/anal...

25.11.2024 20:37 — 👍 39    🔁 22    💬 2    📌 7

My Christmas break plan is to learn Rust. Any pointers to resources that you found particularly useful?

21.11.2024 21:29 — 👍 6    🔁 0    💬 4    📌 0
Meta Finally Breaks Its Silence on Pig Butchering The company gave details for the first time on its approach to combating organized criminal networks behind the devastating scams.

STORY with @lhn.bsky.social: Meta is speaking out about pig butchering scams for the first time. It says it has removed 2 million pig butchering accounts this year.

In one instance, OpenAI alerted Meta to criminals using ChatGPT to generate comments used in scams

21.11.2024 18:21 — 👍 37    🔁 18    💬 0    📌 2

@mattburgess1.bsky.social has covered AI security stuff.

20.11.2024 16:49 — 👍 2    🔁 0    💬 0    📌 0

I will be adopting this terminology as well.

18.11.2024 19:47 — 👍 3    🔁 0    💬 0    📌 0
Charlie Murphy

My postdoc Charlie Murphy is on the academic job market this fall. He's doing really hard technical work on building constraint solvers and synthesis engines. You should interview him.
pages.cs.wisc.edu/~tcmurphy4/

16.11.2024 07:50 — 👍 15    🔁 8    💬 1    📌 0

New idea for Anthropic's computer use agent. Task it with going thru my Twitter, finding those folks here, and following them.

16.11.2024 02:49 — 👍 2    🔁 0    💬 1    📌 0

first thing I did after joining this new twitter was follow a bunch of PL folks. And some security folks.

16.11.2024 02:46 — 👍 2    🔁 0    💬 0    📌 0
