
@bhusalb.bsky.social

3 Followers  |  42 Following  |  2 Posts  |  Joined: 20.11.2024

Latest posts by bhusalb.bsky.social on Bluesky

Privacy-Aware In-Context Learning for Large Language Models

Bishnu Bhusal, Manoj Acharya, Ramneet Kaur, Colin Samplawski, Anirban Roy, Adam D. Cobb, Rohit Chadha, Susmit Jha

http://arxiv.org/abs/2509.13625

Large language models (LLMs) have significantly transformed natural language
understanding and generation, but they raise privacy concerns due to potential
exposure of sensitive information. Studies have highlighted the risk of
information leakage, where adversaries can extract sensitive information
embedded in the prompts. In this work, we introduce a novel private prediction
framework for generating high-quality synthetic text with strong privacy
guarantees. Our approach leverages the Differential Privacy (DP) framework to
ensure worst-case theoretical bounds on information leakage without requiring
any fine-tuning of the underlying models. The proposed method performs inference
on private records and aggregates the resulting per-token output distributions.
This enables the generation of longer, coherent synthetic text while
maintaining privacy guarantees. Additionally, we propose a simple blending
operation that combines private and public inference to further enhance
utility. Empirical evaluations demonstrate that our approach outperforms
previous state-of-the-art methods on in-context learning (ICL) tasks, making it
a promising direction for privacy-preserving text generation while maintaining
high utility.
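
The pipeline the abstract describes (per-record private inference, noisy aggregation of per-token distributions, then blending with a public distribution) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function and variable names are hypothetical, and Gaussian-noise averaging with a fixed scale stands in for whatever calibrated mechanism the paper uses to meet its (ε, δ) bounds.

import numpy as np

def dp_next_token_distribution(private_dists, public_dist,
                               noise_scale=0.05, blend_weight=0.5, rng=None):
    """Sketch of DP aggregation of per-token output distributions.

    private_dists : (n_records, vocab) array, one next-token distribution
                    per inference run on a private record (hypothetical).
    public_dist   : (vocab,) distribution from a prompt without private data.
    noise_scale   : stand-in for a calibrated Gaussian-mechanism sigma; a
                    real deployment derives it from an (epsilon, delta) budget.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Averaging bounds each record's influence on the mean by 1/n in L1,
    # which is what makes the noisy aggregation meaningful.
    mean_dist = private_dists.mean(axis=0)
    # Gaussian noise (assumed mechanism; the paper's may differ).
    noisy = mean_dist + rng.normal(0.0, noise_scale, size=mean_dist.shape)
    # Project back onto the probability simplex: clip, then renormalize.
    noisy = np.clip(noisy, 0.0, None)
    noisy = noisy / max(noisy.sum(), 1e-12)
    # Blend private and public inference to recover utility.
    blended = blend_weight * noisy + (1.0 - blend_weight) * public_dist
    return blended / blended.sum()

# Toy usage: 8 private records, vocabulary of size 5; sample one token.
rng = np.random.default_rng(0)
private = rng.dirichlet(np.ones(5), size=8)
public = rng.dirichlet(np.ones(5))
next_token = rng.choice(5, p=dp_next_token_distribution(private, public, rng=rng))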

18.09.2025 03:49 — 👍 0    🔁 1    💬 0    📌 0
Approximate Algorithms for Verifying Differential Privacy with Gaussian Distributions

Bishnu Bhusal, Rohit Chadha, A. Prasad Sistla, Mahesh Viswanathan

http://arxiv.org/abs/2509.08804

The verification of differential privacy algorithms that employ Gaussian
distributions remains poorly understood. This paper tackles the challenge of
verifying such programs by introducing a novel approach to approximating
probability distributions of loop-free programs that sample from both discrete
and continuous distributions with computable probability density functions,
including Gaussian and Laplace. We establish that verifying
(ε, δ)-differential privacy for these programs is almost decidable,
meaning the problem is decidable for all values of δ except those in a
finite set. Our verification algorithm computes probabilities to any
desired precision by combining integral approximations and tail
probability bounds. The proposed methods are implemented in the tool
DipApprox, which uses the FLINT library for high-precision integral
computations and incorporates optimizations to enhance scalability. We
validate DipApprox on fundamental privacy-preserving algorithms, such as
Gaussian variants of the Sparse Vector Technique and Noisy Max,
demonstrating its effectiveness in both
confirming privacy guarantees and detecting violations.
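
To make the decision procedure concrete, here is a toy Python/scipy version of the interval check the abstract alludes to, applied to the Gaussian mechanism on a single threshold event. It is only a sketch of the idea: DipApprox uses FLINT's high-precision verified arithmetic, whereas scipy's quadrature error estimate and the Chernoff tail bound below are stand-ins, and the event, parameters, and function names are all illustrative. A full verification would also quantify over all output events, not one.

import math
from scipy import integrate

def gaussian_pdf(mu, sigma):
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return lambda x: c * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def prob_above(mu, sigma, t, k=12.0):
    # Enclose P[X > t] for X ~ N(mu, sigma^2): integrate the density over a
    # truncated window, then pad by the quadrature error estimate plus a
    # Chernoff bound exp(-k^2/2) on the mass beyond mu + k*sigma.
    hi_cut = max(mu + k * sigma, t)
    val, err = integrate.quad(gaussian_pdf(mu, sigma), t, hi_cut)
    tail = math.exp(-k * k / 2.0)
    return max(val - err, 0.0), min(val + err + tail, 1.0)

def check_event(mu_d, mu_dp, sigma, t, eps, delta):
    # Check P[M(D) > t] <= e^eps * P[M(D') > t] + delta by comparing the
    # two enclosures; None means the precision must be refined (or delta
    # is one of the finitely many inconclusive values).
    lo1, hi1 = prob_above(mu_d, sigma, t)
    lo2, hi2 = prob_above(mu_dp, sigma, t)
    if hi1 <= math.exp(eps) * lo2 + delta:
        return True    # inequality certified for this event
    if lo1 > math.exp(eps) * hi2 + delta:
        return False   # violation certified
    return None

# Gaussian mechanism, sensitivity 1: adjacent databases shift the mean by 1.
eps, delta = 1.0, 1e-5
sigma = math.sqrt(2.0 * math.log(1.25 / delta)) / eps  # textbook calibration
print(check_event(1.0, 0.0, sigma, t=2.0, eps=eps, delta=delta))  # True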

11.09.2025 03:48 — 👍 0    🔁 1    💬 0    📌 0

Our paper “Privacy Nutrition Labels: Promise, Practice, and Paradoxes in Communicating Privacy” is out!

We explore the current research landscape of privacy nutrition labels—their promise, challenges, and what’s next.

Read here: link.springer.com/chapter/10.1...

#Privacy #HCI #HCII2025

06.06.2025 13:56 — 👍 0    🔁 0    💬 0    📌 0

Excited to share that our paper “Checking δ-Satisfiability of Reals with Integrals” is now published in the Proceedings of the ACM on Programming Languages!

We extend δ-decision procedures to handle constraints involving integrals of real functions.

Paper: dl.acm.org/doi/10.1145/...
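
For a flavor of what a δ-decision procedure does with an integral constraint, here is a toy ground-formula check (no free variables). Everything in it is illustrative: a real δ-sat solver, including the one in the paper, computes verified interval enclosures of the integral rather than relying on scipy's heuristic error estimate, and handles quantified formulas rather than a single fixed instance.

import math
from scipy import integrate

def delta_decide(f, a, b, c, delta):
    """Toy delta-decision for the ground constraint  integral_a^b f >= c.
    Returns "unsat" if the constraint provably fails, or "delta-sat" if
    its delta-weakening  integral_a^b f >= c - delta  provably holds.
    Once the enclosure is tighter than delta, one of the two branches
    must fire, which is the delta-decidability argument in miniature."""
    val, err = integrate.quad(f, a, b)  # heuristic error bound; a real
                                        # solver uses a verified enclosure
    if val + err < c:
        return "unsat"
    if val - err >= c - delta:
        return "delta-sat"
    return "unknown"  # refine the enclosure and retry

# integral_0^1 sin(x^2) dx ~= 0.3103; constraint asks for >= 0.31.
print(delta_decide(lambda x: math.sin(x * x), 0.0, 1.0, 0.31, 1e-3))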

10.04.2025 00:34 — 👍 0    🔁 0    💬 0    📌 0
