Excited to share that our work on interpretable risk prediction will be at the #NAACL2024 main conference!
17.03.2024 12:41

@dmcinerney.bsky.social
PhD candidate in ML/NLP at Northeastern University, currently working on interpretability in healthcare; broadly interested in distant supervision and in bridging the gap between pretraining and applications.
To train our model, we extract future targets using the LLM and validate that these are reliable, signaling that future work on creating labels via LLM-enabled data augmentation is warranted. (6/6)
28.02.2024 18:56

…and achieve reasonable accuracy. In fact, we find that our use of the interpretable Neural Additive Model, which lets us obtain individual evidence scores, does not decrease performance at all compared to a black-box approach. (5/6)
28.02.2024 18:55

We also find that the predictions (both per-evidence and aggregated) are intuitive to the clinicians… (4/6)
28.02.2024 18:55

Our approach does retrieve useful evidence, and both extracting evidence with an LLM and sorting it via our ranking function are crucial to the model's success. (3/6)
28.02.2024 18:54

Our interface allows a clinician to supplement their review of a patient's record with our model's risk predictions and surfaced evidence, and then annotate how useful that evidence is for actually understanding the patient. (2/6)
28.02.2024 18:53

Our work on reducing diagnostic errors with interpretable risk prediction is now on arXiv!
We retrieve evidence from a patient's record, visualize how it informs a prediction, and test it in a realistic setting. (1/6)
arxiv.org/abs/2402.10109
w/ @byron.bsky.social and @jwvdm.bsky.social
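The thread above describes using a Neural Additive Model so that each piece of evidence receives its own score that sums into the final risk prediction. A minimal sketch of that additive structure (purely illustrative: the class name, sizes, and toy weights are assumptions, not the paper's actual code):

```python
import math
import random

random.seed(0)


class TinyNAM:
    """Minimal Neural Additive Model sketch: one tiny subnetwork per input
    feature; the final logit is the SUM of per-feature scores, so each
    feature's contribution to the prediction is directly inspectable."""

    def __init__(self, n_features, hidden=4):
        # one small 1 -> hidden -> 1 network per feature
        self.nets = [
            {
                "w1": [random.uniform(-1, 1) for _ in range(hidden)],
                "b1": [0.0] * hidden,
                "w2": [random.uniform(-1, 1) for _ in range(hidden)],
            }
            for _ in range(n_features)
        ]
        self.bias = 0.0

    def feature_score(self, j, x_j):
        net = self.nets[j]
        # tiny MLP with ReLU activation, producing one scalar score
        h = [max(0.0, w * x_j + b) for w, b in zip(net["w1"], net["b1"])]
        return sum(w * a for w, a in zip(net["w2"], h))

    def forward(self, x):
        scores = [self.feature_score(j, xj) for j, xj in enumerate(x)]
        logit = self.bias + sum(scores)
        prob = 1.0 / (1.0 + math.exp(-logit))
        return prob, scores  # scores = per-feature "evidence" contributions


nam = TinyNAM(n_features=3)
prob, scores = nam.forward([0.5, -1.2, 2.0])
```

Because the logit is a plain sum, each entry of `scores` can be shown to a clinician as that piece of evidence's contribution, which is the interpretability property the thread relies on.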
…we find it makes efficient use of features. (6/6)
25.10.2023 14:21

This method also shows promise in being data-efficient, and… (5/6)
25.10.2023 14:21

Inspection of individual instances with this approach yields insights as to what went right and what went wrong. (4/6)
25.10.2023 14:20

We find that most of the coefficients of the linear model align with clinical expectations for the corresponding feature! (3/6)
25.10.2023 14:19

Not only do we see decent accuracy at feature extraction itself, but we also see reasonable performance on the downstream tasks compared with using ground-truth features. (2/6)
25.10.2023 14:18

Very excited our "CHiLL" paper was accepted to #EMNLP2023 Findings!
Can we craft arbitrary high-level features without training? (1/6)
We have a doctor pose questions to an LLM and train an interpretable model on the answers.
arxiv.org/abs/2302.12343
w/ @jwvdm.bsky.social and @byron.bsky.social
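The pipeline the thread describes — clinician-written questions answered by an LLM, with the answers used as features for an interpretable linear model — can be sketched as follows. This is a hypothetical illustration: `ask_llm` is a stub (a real system would prompt an LLM here), and the questions and weights are invented for the example:

```python
import math

# Clinician-written yes/no questions (invented for this sketch).
QUESTIONS = [
    "Does the note mention shortness of breath?",
    "Is the patient on anticoagulants?",
    "Is there a history of heart failure?",
]


def ask_llm(question, note):
    """Stub standing in for an LLM call: answers each question with a
    keyword match so the sketch runs end to end."""
    keyword = {
        QUESTIONS[0]: "shortness of breath",
        QUESTIONS[1]: "anticoagulant",
        QUESTIONS[2]: "heart failure",
    }[question]
    return 1.0 if keyword in note.lower() else 0.0


def featurize(note):
    # Each question's answer becomes one binary feature.
    return [ask_llm(q, note) for q in QUESTIONS]


def predict_risk(weights, bias, note):
    # Linear model over the LLM-derived features; each coefficient is
    # directly readable as "how much a 'yes' raises the predicted risk."
    z = bias + sum(w * f for w, f in zip(weights, featurize(note)))
    return 1.0 / (1.0 + math.exp(-z))


# Illustrative hand-set weights and a toy note (not real data).
weights, bias = [1.2, 0.4, 2.0], -1.5
note = "Pt reports shortness of breath; hx of heart failure."
risk = predict_risk(weights, bias, note)
```

In practice the weights would be learned (e.g. by logistic regression) from LLM-featurized notes, but the key design choice is the same as above: the feature space is defined by human-readable questions, so the fitted coefficients can be checked against clinical expectations, as the (3/6) post describes.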