
Eli Chien

@elichien.bsky.social

Incoming assistant professor at National Taiwan University. Postdoc at Georgia Tech. Ph.D. from the University of Illinois. Focus on privacy + graph learning. #MachineUnlearning #DifferentialPrivacy #DP #GNN #LLM Homepage: https://sites.google.com/view/eli-chien/home

54 Followers  |  97 Following  |  33 Posts  |  Joined: 20.11.2024

Posts by Eli Chien (@elichien.bsky.social)

I don't recall many papers getting retracted from NeurICMLR or having errata after being published. This is really sad and unfortunate.

That being said, feel free to let me know if there's any error in my work. I will really appreciate the comments. 4/n, n=4.

16.08.2025 04:20 — 👍 1    🔁 0    💬 0    📌 0

New students and researchers keep rediscovering, after wasting tons of time, that some "famous" papers are wrong (in the best case...), but then still have to cite or compare to these works since they're well-cited or published in NeurICMLR. How does that even make sense? 3/n

16.08.2025 04:20 — 👍 1    🔁 0    💬 1    📌 0

Examples: critical errors in papers, awful reproducibility, and, worst of all, intentional lying/cheating. These researchers still earn a number of citations and nice jobs, and have not been "punished" in terms of their reputation. 2/n

16.08.2025 04:20 — 👍 1    🔁 0    💬 1    📌 0

Some random thoughts after chatting with multiple friends: I do feel that one reason the general ML research community is getting worse (imo; maybe others disagree) is that we don't share the bad things we find with others often enough. 1/n

16.08.2025 04:20 — 👍 3    🔁 0    💬 1    📌 0

I will be at #icml2025 next week to present our work on LLM unlearning evaluation [https://arxiv.org/abs/2412.08559]. We also have work on AI copyright to be presented at the MemFM and R2FM workshops. Please let me know if you're also around! I will be around 7/15-7/17.

08.07.2025 05:12 — 👍 0    🔁 0    💬 0    📌 0

This is a great paper! It resonates with one of our recent works (a short version to appear at the ICML MemFM workshop!). We really need to be careful about defining meaningful copyright measures.

26.06.2025 05:41 — 👍 1    🔁 0    💬 0    📌 0

What needs to be taken care of when applying privacy amplification by iteration to zeroth-order optimization? Can it even be done? What is a "good design" for a DP zeroth-order method? Check out our latest work! It's so nice to collaborate with Wei-Ning (as usual) and Pan!

04.06.2025 17:49 — 👍 2    🔁 1    💬 0    📌 0
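As background for the post above, one common design pattern in this space is to estimate a directional derivative with a two-point finite difference and privatize the resulting scalar rather than a full gradient. A minimal illustrative sketch, not the paper's actual algorithm; the function name and the clipping/noise choices here are my assumptions:

```python
import numpy as np

def noisy_zo_step(f, x, beta=1e-3, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One step of a hypothetical DP zeroth-order method: estimate a
    directional derivative with a two-point finite difference along a
    random unit direction, then clip and add Gaussian noise to that
    single scalar (so the privatized quantity is 1-dimensional)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    d = (f(x + beta * u) - f(x - beta * u)) / (2.0 * beta)  # scalar estimate
    d = float(np.clip(d, -clip, clip))   # bound the sensitivity of the scalar
    d += sigma * rng.standard_normal()   # Gaussian mechanism on the scalar
    return x - lr * d * u
```

Privatizing the scalar instead of the d-dimensional gradient is the usual motivation for DP zeroth-order methods; whether amplification by iteration composes with this estimator is exactly the kind of question the post raises.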
Open Problem: Selection via Low-Sensitivity Queries Two of the basic tools for building differentially private algorithms are noise addition for answering low-sensitivity queries and the exponential mechanism for selection. Could we do away with the e...

Open Problem: Selection via Low-Sensitivity Queries

02.05.2025 13:21 — 👍 5    🔁 2    💬 0    📌 0
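For context on the open problem's framing, the exponential mechanism for selection (the standard textbook construction, not code from the linked post) can be sketched in a few lines:

```python
import numpy as np

def exponential_mechanism(scores, eps, sensitivity=1.0, rng=None):
    """Sample index i with probability proportional to
    exp(eps * scores[i] / (2 * sensitivity)); this satisfies eps-DP
    when each score has the given sensitivity."""
    rng = np.random.default_rng() if rng is None else rng
    logits = eps * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()        # stabilize before exponentiating
    p = np.exp(logits)
    return int(rng.choice(len(p), p=p / p.sum()))
```

The open problem asks whether selection like this can instead be reduced to answering low-sensitivity queries with noise addition.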
Underestimated Privacy Risks for Minority Populations in Large Language Model Unlearning Large Language Models are trained on extensive datasets that often contain sensitive, human-generated information, raising significant concerns about privacy breaches. While certified unlearning appro...

Preprint: arxiv.org/abs/2412.08559

Stay tuned for the GitHub code and our updated version (we have some new results!).

I also want to thank my friends @jyhong.bsky.social, Chulin Xie, Ayush Sekhari, and Martin Pawelczyk for their helpful discussion and clarification of their work! 2/n, n=2.

01.05.2025 14:08 — 👍 1    🔁 0    💬 0    📌 0

Our paper on LLM unlearning evaluation has been accepted at #icml2025!

Thanks to the lead author Rongzhe, and my collaborators
@mufei-li.bsky.social @xiangyue96.bsky.social
(and others who may not be on Bluesky).

It's my first "last" author paper. Feels quite special :p 1/n

01.05.2025 14:08 — 👍 4    🔁 0    💬 1    📌 1

I wonder how well this result can be applied to convert the KL-based results in the sampling literature (i.e., LMC convergence) to Rényi divergence, compared to results that directly bound the Rényi divergence (e.g., those in Sinho Chewi's book or the paper by Vempala and Wibisono 😂)

27.04.2025 01:06 — 👍 4    🔁 0    💬 0    📌 0
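For reference on the quantity being converted to: for Gaussians with shared variance the Rényi divergence has a simple closed form that grows linearly in the order α and recovers KL at α = 1, which is why a KL bound alone does not pin down the higher orders used in DP accounting. A minimal sketch (my own illustration, not from the thread):

```python
def renyi_gaussians(alpha, mu1, mu2, sigma):
    """Closed form for R_alpha(N(mu1, sigma^2) || N(mu2, sigma^2))
    with shared variance: alpha * (mu1 - mu2)^2 / (2 * sigma^2).
    At alpha = 1 this is exactly the KL divergence."""
    return alpha * (mu1 - mu2) ** 2 / (2.0 * sigma ** 2)

# Renyi grows linearly in alpha for this pair of Gaussians:
kl = renyi_gaussians(1.0, 0.0, 1.0, 1.0)     # 0.5
r10 = renyi_gaussians(10.0, 0.0, 1.0, 1.0)   # 5.0
```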

I wrote a post explaining why, in practice, privacy amplification by subsampling doesn't quite work as well as promised. This is a significant problem for differentially private machine learning applications, but I don't know if this is as widely known as it should be.

21.04.2025 19:22 — 👍 12    🔁 2    💬 2    📌 0
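The linked post covers the practical caveats; as background, the standard amplification bound itself already shows where the folklore intuition breaks. For subsampling rate q, a pure ε-DP mechanism amplifies to log(1 + q(e^ε − 1)), which tracks the q·ε rule of thumb only when ε is small. A small sketch (my own illustration, not from the post):

```python
import math

def amplified_epsilon(eps, q):
    """Privacy amplification by (Poisson) subsampling for pure eps-DP:
    running an eps-DP mechanism on a q-fraction subsample satisfies
    log(1 + q * (exp(eps) - 1))-DP."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

# Close to the folklore q*eps approximation only for small eps:
small = amplified_epsilon(0.1, 0.01)   # ~0.00105, near q*eps = 0.001
large = amplified_epsilon(8.0, 0.01)   # ~3.43, nowhere near q*eps = 0.08
```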
Statistical Optimal Transport This monograph aims to offer a concise introduction to optimal transport, quickly transitioning to its applications in statistics and machine learning.

PSA — if you’re interested in learning about statistical aspects of optimal transport, check out this new monograph by Sinho Chewi, Jonathan Niles-Weed, and Philippe Rigollet: link.springer.com/book/10.1007...

14.04.2025 23:03 — 👍 43    🔁 7    💬 1    📌 2
Privacy Amplification by Subsampling Privacy Amplification by Subsampling is an important property of differential privacy. It is key to making many algorithms efficient – particularly in machine learning applications. Thus a lot of wor...

Privacy Amplification by Subsampling

13.04.2025 16:15 — 👍 9    🔁 3    💬 0    📌 1

The last one is crazy 🤣🤣🤣

14.04.2025 03:21 — 👍 3    🔁 0    💬 0    📌 0

I would like to thank Pan Li, Olgica Milenkovic, Kamalika Chaudhuri, and Cho-Jui Hsieh for their help during my job search. I also appreciate the help from all my friends who gave me suggestions or discussed the situation with me! (I can't list everyone due to the space limit.) 3/3

09.04.2025 19:54 — 👍 0    🔁 0    💬 0    📌 0

I will keep working on trustworthy/regulatable AI, especially on privacy, machine unlearning, and AI copyright issues. Feel free to let me know if you want to collaborate in the future! Also, I wish the best of luck to my friends who are still on the job market now. It is a really tough year :( 2/3

09.04.2025 19:54 — 👍 0    🔁 0    💬 1    📌 0

Life Update: I am happy to share the news that I will be an Assistant Professor at the National Taiwan University EE department! I am very grateful for this opportunity to be back in my home country, especially at the university where I was an undergrad! 1/3

09.04.2025 19:54 — 👍 1    🔁 0    💬 1    📌 0

I am so shocked to learn that "poisson" in French means fish... As a person who constantly deals with the Poisson distribution, Poissonization, etc., I now have a completely different feeling about Poisson 🤣. I guess we always learn something unexpected on the internet 🤣

09.04.2025 19:52 — 👍 0    🔁 0    💬 0    📌 0

I believe so, but I will have to wait until Monday to know. I will DM you the Zoom link if there is one!

23.03.2025 21:19 — 👍 1    🔁 0    💬 0    📌 0
School of CSE Seminar Series: Eli Chien | School of Computational Science and Engineering School of CSE hosts a seminar from Georgia Tech Postdoctoral Fellow Eli Chien

I will give a talk at GaTech CSE seminar this Friday on the topic: "Machine Unlearning: The General Theory and LLM Practice of Privacy".

Please join if you are around :)

cse.gatech.edu/events/2025/...

23.03.2025 19:23 — 👍 0    🔁 0    💬 1    📌 0

Thanks for sharing! We are actually writing something related to this. Will probably cite this post :p

12.03.2025 22:54 — 👍 0    🔁 0    💬 0    📌 0
Convergent Privacy Loss of Noisy-SGD without Convexity and Smoothness We study the Differential Privacy (DP) guarantee of hidden-state Noisy-SGD algorithms over a bounded domain. Standard privacy analysis for Noisy-SGD assumes all internal states are revealed, which lea...

Preprint: arxiv.org/abs/2410.01068

We will update it with more related works and make the changes promised during the rebuttal soon.

I am now cooking something more exciting along this line of work with my collaborators. Hope to share it with everyone soon :p

22.01.2025 19:39 — 👍 1    🔁 0    💬 0    📌 0

I am glad to share that our paper on hidden-state Noisy SGD DP analysis for non-convex non-smooth problems has been accepted at #ICLR2025! I really appreciate the effort from the reviewers, the AC, and all my friends who provided valuable comments and feedback!

22.01.2025 19:39 — 👍 2    🔁 0    💬 1    📌 0

It's not a normal distribution... :)

28.12.2024 14:56 — 👍 0    🔁 0    💬 0    📌 0

With @adamsmith.xyz and @thejonullman.bsky.social, we have compiled a set of profiles of 29 people in the "foundations of responsible computing" community ("mathematical research in computation and society writ large") who are on the faculty job market.

Link: drive.google.com/file/d/1Hyvg... 1/3

24.12.2024 19:50 — 👍 39    🔁 16    💬 2    📌 1

Why do we need "theoretical guarantees" for trustworthy AI? To prevent the worst-case scenario, which is where theory in AI truly shines and is necessary, in my opinion. That's also why my work on theoretical guarantees for machine unlearning and DP matters! 😉

20.12.2024 04:53 — 👍 1    🔁 0    💬 0    📌 0

It's a great pleasure to contribute to the A3D3 community. Congrats to all #A3D3 members!

19.12.2024 20:55 — 👍 2    🔁 0    💬 0    📌 0

The last time I attended NeurIPS, in Vancouver in 2019, I missed my flight back to Urbana due to a border check. Today, after NeurIPS 2024, I got stuck in Dallas due to a flight cancellation... 🥲🥲🥲

17.12.2024 06:46 — 👍 1    🔁 0    💬 0    📌 0