Time for XAI for Code?
01.08.2025 16:01

@berkustun.bsky.social
Assistant Prof at UCSD. I work on safety, interpretability, and fairness in machine learning. www.berkustun.com
Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Think of credit applicants who can never get a loan approved, or young patients who can never get an organ transplant, no matter how sick they are!
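The fixed-prediction idea can be illustrated with a toy sketch (the model, weights, and feature names below are invented for illustration, not the paper's setup): holding an immutable feature fixed, enumerate every feasible action and check whether any of them flips the decision.

```python
import itertools

# Hypothetical linear credit model: approve when w . x + b >= 0.
# "age" is immutable; "income" and "savings" are actionable but bounded.
w = {"age": -3.0, "income": 1.0, "savings": 0.5}
b = -1.0

def approved(x):
    return sum(w[k] * x[k] for k in w) + b >= 0

applicant = {"age": 3.0, "income": 2.0, "savings": 1.0}  # scaled features

# Enumerate every reachable point: income and savings may each rise by up to 3.
reachable = [
    {"age": applicant["age"],
     "income": applicant["income"] + di,
     "savings": applicant["savings"] + ds}
    for di, ds in itertools.product(range(4), range(4))
]

# If no reachable point is approved, the prediction is fixed for this person:
# no feasible action changes the outcome.
fixed = not any(approved(x) for x in reachable)
print("prediction is fixed:", fixed)  # → prediction is fixed: True
```

Here the immutable feature's weight dominates every feasible action, so the applicant is denied no matter what they do.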
14.07.2025 16:11

Excited to be chatting about our new paper "Understanding Fixed Predictions via Confined Regions" (joint work with @berkustun.bsky.social, Lily Weng, and Madeleine Udell) at #ICML2025!
When: Wed 16 Jul, 4:30–7:00 p.m. PDT
Where: East Exhibition Hall A-B #E-1104
Paper: arxiv.org/abs/2502.16380
Paper: www.arxiv.org/abs/2506.22740
Blog post: statmodeling.stat.columbia.edu/2025/07/02/w...
Explainable AI has long frustrated me by lacking a clear theory of what an explanation should do. Improve use of a model for what? How? Given a task, what is the maximum effect an explanation could have? It's complicated because most methods are functions of the features and the prediction, but not of the true state being predicted. 1/
02.07.2025 16:53

[Image: screenshot of title and authors (Jakob Schoeffer, Maria De-Arteaga, Jonathan Elmer)]
Having a lot of FOMO not being able to be in person at #FAccT2025, but enjoying the virtual transmission 💻. Tomorrow Jakob will be presenting our paper "Perils of Label Indeterminacy: A Case Study on Prediction of Neurological Recovery After Cardiac Arrest".
25.06.2025 21:30

Explanations don't help us detect algorithmic discrimination. Even when users are trained. Even when we control their beliefs. Even under ideal conditions...
24.06.2025 19:16

*wrapfig entered the document*
24.05.2025 03:18

"Science is a smart, low-cost investment. The costs of not investing in it are higher than the risk of doing so… talk to people about science." - @kevinochsner.bsky.social makes his case to the field #sans2025
26.04.2025 21:03

I tried to be nice but then they said that saying please and thanks costs millions.
24.04.2025 18:41

Hey AI folks - stop using SHAP! It won't help you debug [1], won't catch discrimination [2], and makes no sense for feature importance [3].
Plus - as we show - it also won't give recourse.
In a paper at #ICLR we introduce feature responsiveness scores... 1/
arxiv.org/pdf/2410.22598
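A rough illustration of the responsiveness idea (a simplification, not the paper's exact definition; the model and action sets here are made up): score each feature by the fraction of feasible actions on it that flip the prediction.

```python
def responsiveness(model, x, feasible_changes):
    """Toy responsiveness score: the fraction of feasible actions on a
    feature that flip the model's prediction (a simplification)."""
    base = model(x)
    scores = {}
    for feat, actions in feasible_changes.items():
        flips = sum(1 for a in actions if model({**x, feat: a}) != base)
        scores[feat] = flips / len(actions) if actions else 0.0
    return scores

# Hypothetical denial rule: approve only when income >= 4; age is immutable.
model = lambda x: x["income"] >= 4
applicant = {"age": 30, "income": 2}
actions = {"income": [3, 4, 5], "age": []}  # no feasible action on age

scores = responsiveness(model, applicant, actions)
print(scores)  # income responds to some actions; age cannot respond at all
```

An attribution method could assign age a large importance here, but no action on age can ever change the outcome, which is what a recourse-oriented score should reflect.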
When RAG systems hallucinate, is the LLM misusing available information or is the retrieved context insufficient? In our #ICLR2025 paper, we introduce "sufficient context" to disentangle these failure modes. Work w Jianyi Zhang, Chun-Sung Ferng, Da-Cheng Juan, Ankur Taly, @cyroid.bsky.social
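A sketch of how the sufficient-context distinction can triage RAG failures (the labels and function below are illustrative, not the paper's method):

```python
def diagnose(answer_correct: bool, context_sufficient: bool) -> str:
    """Triage a RAG response: a wrong answer is a retrieval failure when
    the context could not support the right answer, and a model failure
    (misuse of available information) when it could."""
    if answer_correct:
        return "ok"
    return "retrieval failure" if not context_sufficient else "model misuse of context"

# Example: the context contained the answer, but the model still got it wrong.
print(diagnose(answer_correct=False, context_sufficient=True))  # → model misuse of context
```

Separating these two failure modes matters because the fixes differ: better retrieval in one case, better grounding behavior from the LLM in the other.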
24.04.2025 18:18

Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.
In our latest w @anniewernerfelt.bsky.social @berkustun.bsky.social @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse
Absolute banger.
19.04.2025 19:43

Many ML models predict labels that don't reflect what we care about, e.g.:
- Diagnoses from unreliable tests
- Outcomes from noisy electronic health records

In a new paper w/ @berkustun, we study how this subjects individuals to a lottery of mistakes.
Paper: bit.ly/3Y673uZ
🧵
🚨 Excited to announce a new paper accepted at #ICLR2025 in Singapore!
"Learning Under Temporal Label Noise"
We tackle a new challenge in time series ML: label noise that changes over time 🧵
arxiv.org/abs/2402.04398
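A minimal simulation of the setting (the helper and the linear noise schedule are invented for illustration): labels are flipped with a probability that drifts over the course of the sequence, unlike classic label noise with a single fixed rate.

```python
import random

def flip_labels_over_time(labels, noise_fn, seed=0):
    """Flip binary label y_t with probability noise_fn(t): label noise
    whose rate changes over the course of the time series."""
    rng = random.Random(seed)
    return [y ^ 1 if rng.random() < noise_fn(t) else y
            for t, y in enumerate(labels)]

T = 1000
clean = [t % 2 for t in range(T)]
# Noise rate grows from 5% to 35% over the sequence (an illustrative choice).
noisy = flip_labels_over_time(clean, lambda t: 0.05 + 0.3 * t / T)

# Early labels are corrupted far less often than late ones.
early_flips = sum(a != b for a, b in zip(clean[:200], noisy[:200]))
late_flips = sum(a != b for a, b in zip(clean[-200:], noisy[-200:]))
print(early_flips, late_flips)
```

A learner that assumes a constant noise rate would over-trust late labels and under-trust early ones in this regime, which is the kind of mismatch the paper targets.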
is this a rhetorical question?
28.02.2025 19:25

The CHI Human-Centered Explainable AI Workshop is back!
Paper submissions: Feb 20
hcxai.jimdosite.com
🧵 on the CFPB and less discriminatory algorithms.
Last week, in its supervisory highlights, the Bureau offered a range of impressive new details on how financial institutions should be searching for less discriminatory algorithms.
Also
14.01.2025 22:31
Engaging discussions on the future of #AI in #healthcare at this week's ICHPS, hosted by @amstatnews.bsky.social.
JCHI's @kdpsingh.bsky.social shared insights on the safety & equity of #MachineLearning algorithms and examined bias in large language models.
I imagine it would look like the modern version of this.
07.01.2025 07:56

Safety that matters
www.ftc.gov/policy/advoc...
📣 CANAIRI: the Collaboration for Translational AI Trials! Co-lead @xiaoliu.bsky.social @naturemedicine.bsky.social
Perhaps most important to AI translation is the local silent trial. Ethically, and from an evidentiary perspective, this is essential!
url.au.m.mimecastprotect.com/s/pQSsClx14m...
22.12.2024 21:34

🪩 New paper 🪩 (WIP) appearing at @neuripsconf.bsky.social Regulatable ML and Algorithmic Fairness (AFME) workshop (oral spotlight).
In collaboration with @s010n.bsky.social and Manish Raghavan, we explore strategies and fundamental limits in searching for less discriminatory algorithms.
I'm seeking a postdoc to work with me and @kenholstein.bsky.social on evaluating AI/ML decision support for human experts:
statmodeling.stat.columbia.edu/2024/12/10/p...
P.S. I'll be at NeurIPS Thurs-Mon. Happy to talk about this position or related mutual interests!
Please repost
There is just about a month left before the abstract deadline for @facct.bsky.social 2025 🥳. Really looking forward to seeing all the submissions. If you are submitting, don't forget to check out the Author Guide (and the reviewer and AC guides as well). facctconference.org/2025/aguide
11.12.2024 09:28

Starter pack: #ML for Healthcare go.bsky.app/PJKJ8vK by @berkustun.bsky.social
11.12.2024 15:09