
Berk Ustun

@berkustun.bsky.social

Assistant Prof at UCSD. I work on safety, interpretability, and fairness in machine learning. www.berkustun.com

2,569 Followers  |  445 Following  |  41 Posts  |  Joined: 28.09.2023

Latest posts by berkustun.bsky.social on Bluesky

Time for XAI for Code? 🙃

01.08.2025 16:01 | 👍 0  🔁 0  💬 0  📌 0
[Post image]

Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Think of credit applicants who can never get a loan approved, or young patients who can never get an organ transplant, no matter how sick they are!

14.07.2025 16:11 | 👍 1  🔁 1  💬 1  📌 0
[Link preview] Understanding Fixed Predictions via Confined Regions: Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Existing approaches to audit fixed predictions do so on a pointwise basis, which requires ac...

Excited to be chatting about our new paper "Understanding Fixed Predictions via Confined Regions" (joint work with @berkustun.bsky.social, Lily Weng, and Madeleine Udell) at #ICML2025!

๐Ÿ• Wed 16 Jul 4:30 p.m. PDT โ€” 7 p.m. PDT
๐Ÿ“East Exhibition Hall A-B #E-1104
๐Ÿ”— arxiv.org/abs/2502.16380

14.07.2025 16:08 | 👍 5  🔁 3  💬 1  📌 0
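A minimal sketch of the pointwise audit the abstract contrasts with: for one individual, enumerate a hypothetical grid of feasible feature changes and check whether any of them flips a linear model's denial. The model, the action grid, and the is_fixed helper are illustrative assumptions, not the paper's confined-regions method.

import itertools
import numpy as np

def is_fixed(x, w, b, actions_per_feature):
    """True if no feasible action flips sign(w @ x + b) to approval (> 0)."""
    for deltas in itertools.product(*actions_per_feature):
        if w @ (x + np.array(deltas)) + b > 0:
            return False  # found an action that earns approval
    return True  # fixed prediction: no feasible change helps

# Toy setting: each feature may increase by 0, 1, or 2 units.
w, b = np.array([1.0, -2.0]), -5.0
x = np.array([1.0, 1.0])
print(is_fixed(x, w, b, [(0, 1, 2), (0, 1, 2)]))  # True: this applicant is stuck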
[Link preview] Explanations are a means to an end: Modern methods for explainable machine learning are designed to describe how models map inputs to outputs, without deep consideration of how these explanations will be used in practice. This paper arg...

Paper: www.arxiv.org/abs/2506.22740

Blog post: statmodeling.stat.columbia.edu/2025/07/02/w...

02.07.2025 16:53 | 👍 22  🔁 3  💬 2  📌 1

Explainable AI has long frustrated me by lacking a clear theory of what an explanation should do. Improve use of a model for what? How? Given a task, what is the maximum effect an explanation could have? It's complicated because most methods are functions of the features and the prediction, but not of the true state being predicted. 1/

02.07.2025 16:53 | 👍 44  🔁 8  💬 2  📌 0
[Screenshot of title and authors: Jakob Schoeffer, Maria De-Arteaga, Jonathan Elmer]

Having a lot of FOMO about not being able to be in person at #FAccT2025, but enjoying the virtual stream 💻. Tomorrow Jakob will be presenting our paper "Perils of Label Indeterminacy: A Case Study on Prediction of Neurological Recovery After Cardiac Arrest".

25.06.2025 21:30 | 👍 12  🔁 1  💬 1  📌 0

Explanations don't help us detect algorithmic discrimination. Even when users are trained. Even when we control their beliefs. Even under ideal conditions... 👇

24.06.2025 19:16 | 👍 11  🔁 0  💬 0  📌 1
[Post image]

*wrapfig entered the document*

24.05.2025 03:18 | 👍 5  🔁 1  💬 1  📌 0
[Post image]

“Science is a smart, low cost investment. The costs of not investing in it are higher than the risk of doing so… talk to people about science.” - @kevinochsner.bsky.social makes his case to the field #sans2025

26.04.2025 21:03 | 👍 174  🔁 55  💬 7  📌 7

I tried to be nice but then they said that saying please and thanks costs millions.

24.04.2025 18:41 | 👍 4  🔁 0  💬 0  📌 0

Hey AI folks - stop using SHAP! It won't help you debug [1], won't catch discrimination [2], and makes no sense for feature importance [3].

Plus - as we show - it also won't give recourse.

In a paper at #ICLR we introduce feature responsiveness scores... 1/

arxiv.org/pdf/2410.22598

24.04.2025 16:37 | 👍 29  🔁 8  💬 3  📌 0
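A rough sketch of the idea as I read it from the abstract: score each feature by the fraction of feasible interventions on it that change the model's prediction. The toy model, the feasible-value grids, and the responsiveness helper are illustrative assumptions here; see the paper for the actual definition.

import numpy as np

def responsiveness(model, x, j, feasible_values):
    """Fraction of feasible values for feature j that flip model(x)."""
    base, flips = model(x), 0
    for v in feasible_values:
        x_new = x.copy()
        x_new[j] = v
        flips += model(x_new) != base
    return flips / len(feasible_values)

# Toy model: approve iff income - 2 * debt > 5.
model = lambda z: z[0] - 2 * z[1] > 5
x = np.array([4.0, 1.0])                         # denied applicant
print(responsiveness(model, x, 0, [6, 8, 10]))   # income: some actions flip it
print(responsiveness(model, x, 1, [2, 3, 4]))    # debt: raising it never helps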

When RAG systems hallucinate, is the LLM misusing available information or is the retrieved context insufficient? In our #ICLR2025 paper, we introduce "sufficient context" to disentangle these failure modes. Work w Jianyi Zhang, Chun-Sung Ferng, Da-Cheng Juan, Ankur Taly, @cyroid.bsky.social

24.04.2025 18:18 | 👍 10  🔁 5  💬 1  📌 0
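The disentangling step can be pictured as a simple stratification: split a RAG system's wrong answers by whether the retrieved context was judged sufficient to answer the query. How sufficiency is judged is the paper's contribution; in this sketch the sufficient flags are assumed given, and the bucket names are mine.

from collections import Counter

def stratify_errors(examples):
    """Split wrong answers by whether retrieved context sufficed."""
    buckets = Counter()
    for ex in examples:
        if ex["correct"]:
            continue
        key = ("model misused sufficient context" if ex["sufficient"]
               else "retrieved context was insufficient")
        buckets[key] += 1
    return buckets

# Toy evaluation records: answer correctness plus a sufficiency label.
examples = [
    {"correct": False, "sufficient": True},   # hallucination despite good context
    {"correct": False, "sufficient": False},  # retrieval failure
    {"correct": True,  "sufficient": True},
]
print(stratify_errors(examples))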
[Post image]

Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.

In our latest work with @anniewernerfelt.bsky.social, @berkustun.bsky.social, and @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse.

24.04.2025 06:19 | 👍 17  🔁 7  💬 1  📌 1

Absolute banger.

19.04.2025 19:43 | 👍 32  🔁 5  💬 4  📌 1
[Post image]

Many ML models predict labels that don't reflect what we care about, e.g.:
– Diagnoses from unreliable tests
– Outcomes from noisy electronic health records

In a new paper w/ @berkustun, we study how this subjects individuals to a lottery of mistakes.
Paper: bit.ly/3Y673uZ
🧵👇

19.04.2025 23:04 | 👍 12  🔁 2  💬 1  📌 0
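A quick simulation of the "lottery" framing, under a toy setup of my own rather than the paper's: train the same model on two different draws of label noise and compare which individuals each copy gets wrong. Error rates stay similar, but different people bear the errors.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)   # the outcome we actually care about

def mistakes_under_noise(seed, flip_prob=0.3):
    """Train on one draw of label noise; return per-individual errors."""
    r = np.random.default_rng(seed)
    y_noisy = np.where(r.random(len(y_true)) < flip_prob, 1 - y_true, y_true)
    clf = LogisticRegression().fit(X, y_noisy)
    return clf.predict(X) != y_true

a, b = mistakes_under_noise(1), mistakes_under_noise(2)
print("error rates:", a.mean(), b.mean())
print("individuals whose fate differs across noise draws:", (a != b).sum())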
[Link preview] Learning under Temporal Label Noise: Many time series classification tasks, where labels vary over time, are affected by label noise that also varies over time. Such noise can cause label quality to improve, worsen, or periodically chang...

🚨 Excited to announce a new paper accepted at #ICLR2025 in Singapore!

“Learning Under Temporal Label Noise”

We tackle a new challenge in time series ML: label noise that changes over time 🧵👇

arxiv.org/abs/2402.04398

13.04.2025 17:40 | 👍 5  🔁 1  💬 1  📌 1
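A minimal sketch of the setting (my toy generator, not the paper's method): binary labels over time, corrupted by a flip probability q_t that itself varies with time, so label quality improves or worsens across the series.

import numpy as np

rng = np.random.default_rng(0)
T = 200
y_clean = rng.integers(0, 2, size=T)   # true label at each time step
# Hypothetical time-varying noise rate: quiet at the ends, noisy mid-series.
q_t = 0.1 + 0.3 * np.sin(np.linspace(0.0, np.pi, T)) ** 2
flips = rng.random(T) < q_t
y_noisy = np.where(flips, 1 - y_clean, y_clean)
print(f"flip rate: min {q_t.min():.2f}, max {q_t.max():.2f}, realized {flips.mean():.2f}")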

is this a rhetorical question?

28.02.2025 19:25 | 👍 5  🔁 0  💬 0  📌 0
[Post image]
28.02.2025 19:24 | 👍 1  🔁 0  💬 1  📌 0
[Link preview] Home | HCXAI: ACM CHI 2025 Workshop on Human-Centered Explainable AI (HCXAI), May 2025 (Yokohama, Japan & hybrid). Submit your paper (EasyChair).

The CHI Human-Centered Explainable AI Workshop is back!

Paper submissions: Feb 20

hcxai.jimdosite.com

05.02.2025 23:34 | 👍 27  🔁 12  💬 0  📌 0
[Post image]

🧵 on the CFPB and less discriminatory algorithms.

last week, in its supervisory highlights, the Bureau offered a range of impressive new details on how financial institutions should be searching for less discriminatory algorithms.

21.01.2025 18:44 | 👍 16  🔁 5  💬 1  📌 0
[Post image]

Also

14.01.2025 22:31 | 👍 0  🔁 0  💬 0  📌 0
[Post image]


Engaging discussions on the future of #AI in #healthcare at this week's ICHPS, hosted by @amstatnews.bsky.social.

JCHI's @kdpsingh.bsky.social shared insights on the safety & equity of #MachineLearning algorithms and examined bias in large language models.

08.01.2025 21:59 | 👍 9  🔁 4  💬 1  📌 0
[Link preview] For a Year, They Lived Tied Together with 8 Feet of Social Distance: Artists Linda Montano and Tehching Hsieh. The promise was made on American Independence Day, 1983. “We, Linda Montano and Tehching Hsieh, plan to do a one year performance. We will stay together for...

I imagine it would look like the modern version of this.

07.01.2025 07:56 | 👍 0  🔁 0  💬 1  📌 0
[Link preview] AI and the Risk of Consumer Harm: People often talk about “safety” when discussing the risks of AI causing harm. AI safety means different things to different people, and those looking for a definition here will be disappointed.

Safety that matters

www.ftc.gov/policy/advoc...

07.01.2025 07:47 | 👍 6  🔁 0  💬 0  📌 0

📣 CANAIRI: the Collaboration for Translational AI Trials! Co-lead: @xiaoliu.bsky.social, in @naturemedicine.bsky.social

Perhaps most important to AI translation is the local silent trial. Ethically, and from an evidentiary perspective, this is essential!

url.au.m.mimecastprotect.com/s/pQSsClx14m...

06.01.2025 22:36 | 👍 13  🔁 5  💬 1  📌 1

✅

22.12.2024 21:34 | 👍 0  🔁 0  💬 0  📌 0

🪩 New paper 🪩 (WIP) appearing at the @neuripsconf.bsky.social Regulatable ML and Algorithmic Fairness (AFME) workshops (oral spotlight).

In collaboration with @s010n.bsky.social and Manish Raghavan, we explore strategies and fundamental limits in searching for less discriminatory algorithms.

13.12.2024 13:34 | 👍 8  🔁 2  💬 2  📌 0
[Link preview] Postdoc position at Northwestern on evaluating AI/ML decision support | Statistical Modeling, Causal Inference, and Social Science

I'm seeking a postdoc to work with me and @kenholstein.bsky.social on evaluating AI/ML decision support for human experts:
statmodeling.stat.columbia.edu/2024/12/10/p...

P.S. I'll be at NeurIPS Thurs-Mon. Happy to talk about this position or related mutual interests!

Please repost 🙏

10.12.2024 18:18 | 👍 32  🔁 18  💬 0  📌 1
[Link preview] ACM FAccT - 2025 Home

There is just about a month left before the abstract deadline for @facct.bsky.social 2025 🥳😳🙈. Really looking forward to seeing all the submissions. If you are submitting, don't forget to check out the Author Guide (and the reviewer and AC guides as well). facctconference.org/2025/aguide

11.12.2024 09:28 | 👍 15  🔁 4  💬 0  📌 2

Starter pack: #ML for Healthcare go.bsky.app/PJKJ8vK by @berkustun.bsky.social

11.12.2024 15:09 | 👍 2  🔁 1  💬 0  📌 0
