Last year SAGAI workshop had the most attendees out of all the workshops at IEEE S&P.
Don't miss out.
@someshjha.bsky.social
I am a professor in the computer sciences at UW-Madison. My technical interests in trustworthy ML, formal methods, and security. My other interests are Indian classical music, mindfulness, tennis, and pickleball.
Last year SAGAI workshop had the most attendees out of all the workshops at IEEE S&P.
Don't miss out.
Happy Diwali to all. May this coming year be full of joy and prosperity.
20.10.2025 16:30 — 👍 2 🔁 0 💬 1 📌 0Congrats. The work looks cool!
16.10.2025 16:28 — 👍 0 🔁 0 💬 0 📌 0Thanks for inviting me @simonsinstitute.bsky.social
The audience interaction was incredible.
Gorgeous. Where is it?
02.08.2025 17:10 — 👍 2 🔁 0 💬 1 📌 0Looks great! What are you making? I can start driving from Madison now.:-)
02.08.2025 17:09 — 👍 1 🔁 0 💬 1 📌 0In this work, we formally characterize the KAD scheme and uncover a structural vulnerability in its design that invalidates some core security principles.
We design a methodical adaptive attack, DataFlip, to exploit this fundamental weakness. Read about the details arxiv.org/abs/2507.05630
Recent defenses based on known-answer detection (KAD) have achieved near-perfect performance by using an LLM to classify inputs as clean or contaminated.
21.07.2025 03:25 — 👍 0 🔁 0 💬 1 📌 0LLM-integrated applications and agents are vulnerable to prompt injection attacks, in which adversaries embed malicious instructions within seemingly benign user inputs to manipulate the LLM’s intended behavior.
21.07.2025 03:25 — 👍 2 🔁 0 💬 1 📌 0The team is extremely open to working with other industrial and academic teams. Please reach out if you want to collaborate with our team.
16.07.2025 12:57 — 👍 0 🔁 0 💬 0 📌 0Recently, we received a DARPA grant on the problem of LLM-assisted translation of C to Rust. The team consists of amazing set of PIs from UW, Berkeley, UIUC, and Edinburgh. Really excited about what we can do.
Full article can be found here: www.cs.wisc.edu/2025/07/15/t...
I have interacted with @gautamkamath.com and highly recommend him for this position. Please vote for him.
30.06.2025 18:26 — 👍 2 🔁 0 💬 1 📌 0This research took a while to complete, but very proud of the result. Will do a detailed post soon.
05.06.2025 15:26 — 👍 2 🔁 0 💬 0 📌 0SAGAI 2025 program is now complete. What an amazing program! Don't miss it.
sites.google.com/corp/ucsd.ed...
Welcome Lucy.
05.05.2025 23:00 — 👍 1 🔁 0 💬 0 📌 0Air filters are not that expensive. I think even with the price increase you can afford it:-)
24.04.2025 17:30 — 👍 0 🔁 0 💬 1 📌 0Co-organized with @earlence.bsky.social @mihaichr.bsky.social Khawaja Shams (Google) and John Mitchell (Stanford).
Details can be found at: sites.google.com/corp/ucsd.ed...
SAGAI'25 will investigate the safety, security, and privacy of GenAI agents from a system design perspective. We are experimenting with a new "Dagstuhl" like seminar with invited speakers and discussion. Really excited about this workshop at IEEE Security and Privacy Symposium.
31.03.2025 19:32 — 👍 3 🔁 2 💬 1 📌 1Interesting! Didn't know that sifr and sunya are connected.
31.03.2025 00:18 — 👍 0 🔁 0 💬 1 📌 0Eid Mubarak to anyone of my friends that celebrate it.
www.youtube.com/watch?v=5hwX...
Looks great! What is in it? Tofu?
25.03.2025 21:14 — 👍 1 🔁 0 💬 1 📌 0These kind of comparisons are not very useful. Everyone should be charting their own course!
24.03.2025 15:24 — 👍 4 🔁 0 💬 0 📌 0Excellent place to work!
20.03.2025 16:29 — 👍 0 🔁 0 💬 0 📌 0Lorenzo graduated from my group and did some cool work on system and network security during his Ph.D. Congrats, Lorenzo!
Proud of you.
* removes reliance on public datasets, which was assumed in many existing integrity checks.
18.03.2025 22:07 — 👍 0 🔁 0 💬 0 📌 0* enables advanced integrity checks, such as cross-client validation accuracy, which were impossible in prior secure FL approaches. We show these checks are effective under model poisoning attacks and client data distribution shifts.
18.03.2025 22:06 — 👍 0 🔁 0 💬 1 📌 0Why SLVR? Building on secure Multi-party Computation (MPC), SLVR offers a fresh perspective on combining privacy and robustness in federated learning:
* leverages private client data while preserving the privacy guarantee of secure aggregation.
Have you ever wondered: In federated learning, what if we could leverage clients' private data without compromising privacy—what more could we achieve?
🚀 We're excited to introduce SLVR (Securely Leveraging Client Validation for Robust Federated Learning).
Paper: arxiv.org/pdf/2502.08055
Your sense of humor is getting better:-)
18.03.2025 17:01 — 👍 0 🔁 0 💬 0 📌 0Happy Holi to everyone who celebrates it.
www.youtube.com/watch?v=-l8K...