
Ignacy Stepka

@ignacyy.bsky.social

PhD student @ CMU MLD | Robustness, interpretability, time-series | https://ignacystepka.com

45 Followers  |  146 Following  |  12 Posts  |  Joined: 13.11.2024

Latest posts by ignacyy.bsky.social on Bluesky

There are a few with good vibes and (somewhat) specialty coffee. Personally I like KLVN (near Bakery Square), Arriviste (Shadyside), Redhawk (Oakland). They're not super fancy, but way better than the well-known chains!

23.10.2025 18:55 — 👍 1    🔁 0    💬 0    📌 0
Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps The widespread adoption of machine learning systems has raised critical concerns about fairness and bias, making mitigating harmful biases essential for AI development. In this paper, we investigate t...

📅 Tuesday 5:45 pm - 8:00 pm in Exhibit Hall, poster no. 437

My colleague Łukasz Sztukiewicz will present our joint work (with @inverse-hessian.bsky.social) on the relationship between saliency maps and fairness as part of the Undergraduate and Master's Consortium.

📄 Paper: arxiv.org/abs/2503.00234

03.08.2025 21:52 — 👍 0    🔁 0    💬 0    📌 0

📅 Monday 8:00 am - 12:00 pm in Room 700

Presenting our work on mitigating persistent client dropout in decentralized federated learning as part of the FedKDD workshop.

🌐 Project website: ignacystepka.com/projects/fed...
📄 Paper: openreview.net/pdf/576de662...

03.08.2025 21:52 — 👍 0    🔁 0    💬 1    📌 0
Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change Counterfactual explanations (CFEs) guide users on how to adjust inputs to machine learning models to achieve desired outputs. While existing research primarily addresses static scenarios, real-world a...

📅 Tuesday 5:30 - 8:00 pm (poster no. 141) and Friday 8:55 - 9:15 (Room 801 A, talk)

I'll be giving a talk and presenting a poster on robust counterfactual explanations.

🌐 Project website: ignacystepka.com/projects/bet...
📄 Paper: arxiv.org/abs/2408.04842

03.08.2025 21:52 — 👍 0    🔁 0    💬 1    📌 0

This week I'm presenting several papers at #KDD2025 in Toronto 🇨🇦

Let's connect if you're interested in privacy/gradient inversion attacks in federated learning, counterfactual explanations, or fairness and XAI!

Here's where you can find me:

03.08.2025 21:52 — 👍 0    🔁 0    💬 1    📌 0

Explore more:

📄 paper: arxiv.org/abs/2408.04842

👨‍💻 code: github.com/istepka/beta...

🌐 project page: ignacystepka.com/projects/bet...

👏 Big thanks to my co-authors Jerzy Stefanowski and Mateusz Lango!

#KDD2025 #TrustworthyAI #XAI 7/7🧵

12.05.2025 12:51 — 👍 1    🔁 0    💬 0    📌 0

📊 Results: Across 6 datasets, BetaRCE consistently achieved target robustness levels while preserving explanation quality and maintaining a competitive robustness-cost trade-off. 6/7🧵

12.05.2025 12:49 — 👍 0    🔁 0    💬 1    📌 0

You control both confidence level (α) and robustness threshold (δ), giving statistical guarantees that your explanation will survive changes! For formal proofs on optimal SAM sampling methods and the full theoretical foundation, check out our paper! 5/7🧵

12.05.2025 12:49 — 👍 1    🔁 0    💬 1    📌 0
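The (α, δ) guarantee described above can be written as a single probabilistic statement. A minimal formalization, assuming x' is the counterfactual, y_t its target class, and M a model drawn from the SAM (notation mine, not quoted from the paper):

```latex
% Restating the post's guarantee (assumed notation): with confidence at
% least \alpha, the probability that a model M drawn from the Space of
% Admissible Models (SAM) still assigns the counterfactual x' its target
% class y_t is at least \delta.
\[
  P\left( \Pr_{M \sim \mathrm{SAM}}\left[ M(x') = y_t \right] \ge \delta \right) \ge \alpha
\]
```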

βš™οΈ Under the hood: BetaRCE explores a "Space of Admissible Models" (SAM) - representing expected/foreseeable changes to your model. Using Bayesian statistics, we efficiently estimate the probability that explanations remain valid across these changes. 4/7🧡

12.05.2025 12:48 — 👍 0    🔁 0    💬 1    📌 0
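A minimal sketch of the kind of Bayesian estimate this post describes, assuming a Beta-Binomial model over validity checks; `sample_model`, the uniform prior, and the function name are illustrative stand-ins, not the paper's actual API:

```python
from scipy.stats import beta

def is_robust(x_cf, y_target, sample_model, n_samples=100,
              delta=0.9, alpha=0.9, prior=(1.0, 1.0)):
    """Estimate whether counterfactual x_cf survives model change.

    sample_model() is a hypothetical stand-in for one draw from the
    Space of Admissible Models (SAM), e.g. retraining with perturbed
    data or hyperparameters. Each draw gives one Bernoulli observation:
    does the sampled model still assign x_cf the target class?
    """
    successes = sum(
        int(sample_model().predict(x_cf) == y_target)
        for _ in range(n_samples)
    )

    # Beta posterior over the unknown validity probability p,
    # starting from a uniform Beta(1, 1) prior.
    a = prior[0] + successes
    b = prior[1] + (n_samples - successes)

    # Posterior probability that p >= delta (Beta survival function),
    # compared against the required confidence level alpha.
    p_geq_delta = beta.sf(delta, a, b)
    return p_geq_delta >= alpha, p_geq_delta
```

Choosing δ sets how robust the explanation must be, and α sets how sure you want to be of that, matching the two knobs described in the adjacent post.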

✅ Our solution: BetaRCE offers probabilistic guarantees for robustness to model change. It works with ANY model class, is post-hoc, and can enhance your current counterfactual methods. Plus, it allows you to control the robustness-cost trade-off. 3/7🧵

12.05.2025 12:48 — 👍 0    🔁 0    💬 1    📌 0

❌ This happens constantly in real-world AI systems. Current explanation methods don't address this well - they're limited to specific models, require extensive tuning, or lack guarantees about explanation robustness. 2/7🧵

12.05.2025 12:48 — 👍 0    🔁 0    💬 1    📌 0

📣 New paper at #KDD2025 on robust counterfactual explanations!

Imagine an AI tells you "Increase income by $200 to get a loan". You do it, but when you reapply, the model has been updated and rejects you anyway. We solve this issue by making CFEs robust to model changes! 1/7🧵

12.05.2025 12:47 — 👍 3    🔁 1    💬 1    📌 0
