
Jered McInerney

@dmcinerney.bsky.social

PhD Candidate in ML/NLP at Northeastern University, currently working on interpretability in healthcare, broadly interested in distant supervision and bridging the gap between pretraining and applications

23 Followers  |  23 Following  |  13 Posts  |  Joined: 19.10.2023

Latest posts by dmcinerney.bsky.social on Bluesky


Excited to share that our work on interpretable risk prediction will be at the #NAACL2024 main conference!

17.03.2024 12:41 · 👍 0   🔁 0   💬 0   📌 0
Post image

To train our model, we extract future targets using the LLM and validate that these are reliable, signaling that future work on creating labels with LLM-enabled data augmentation is warranted. (6/6)

28.02.2024 18:56 · 👍 0   🔁 0   💬 0   📌 0
Post image

…and achieve reasonable accuracy. In fact, we find that our use of the interpretable Neural Additive Model, which gives us individual per-evidence scores, does not decrease performance at all compared to a black-box approach. (5/6)

28.02.2024 18:55 · 👍 0   🔁 0   💬 1   📌 0
Post image
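The Neural Additive Model mentioned in this thread can be illustrated with a minimal sketch: one tiny subnetwork per evidence feature, with the final logit being just the sum of per-feature contributions, so each piece of evidence gets its own readable score. The weights below are random stand-ins for trained shape functions; this is not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subnet(hidden=8):
    """Random 1-in, 1-out MLP standing in for a trained shape function."""
    w1 = rng.normal(size=(1, hidden))
    b1 = rng.normal(size=hidden)
    w2 = rng.normal(size=(hidden, 1))
    return w1, b1, w2

def subnet_forward(params, x):
    w1, b1, w2 = params
    h = np.maximum(0.0, x @ w1 + b1)   # ReLU hidden layer
    return (h @ w2).squeeze(-1)        # scalar contribution per example

n_features = 4
subnets = [make_subnet() for _ in range(n_features)]

x = rng.normal(size=(3, n_features))   # 3 mock patients, 4 evidence features

# Per-evidence contribution scores: shape (patients, features).
contributions = np.stack(
    [subnet_forward(subnets[j], x[:, j:j + 1]) for j in range(n_features)],
    axis=1,
)
logits = contributions.sum(axis=1)      # additive aggregation, no interactions
risk = 1.0 / (1.0 + np.exp(-logits))    # sigmoid -> risk probability
```

Because the logit is a plain sum, the `contributions` matrix is exactly the per-evidence explanation; nothing extra is computed post hoc.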

We also find that the predictions (both per-evidence and aggregated) are intuitive to the clinicians… (4/6)

28.02.2024 18:55 · 👍 0   🔁 0   💬 1   📌 0
Post image

Our approach does retrieve useful evidence, and both extracting evidence with an LLM and sorting it via our ranking function are crucial to the model’s success. (3/6)

28.02.2024 18:54 · 👍 0   🔁 0   💬 1   📌 0
Post image
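The retrieve-then-rank pipeline described above can be sketched in miniature. The snippets, query, and bag-of-words cosine scoring below are all hypothetical toy stand-ins, not the paper's actual ranking function:

```python
import numpy as np

# Toy evidence snippets, as if extracted by an LLM from a patient record.
snippets = [
    "patient reports chest pain on exertion",
    "flu shot administered last fall",
    "elevated troponin noted on admission",
]
query = "risk of cardiac event chest pain troponin"

# Bag-of-words vocabulary over everything we need to embed.
vocab = {tok: i for i, tok in enumerate(
    sorted({t for s in snippets + [query] for t in s.lower().split()}))}

def embed(text):
    """Count-based bag-of-words embedding."""
    v = np.zeros(len(vocab))
    for tok in text.lower().split():
        v[vocab[tok]] += 1.0
    return v

q = embed(query)

def score(snippet):
    """Cosine similarity between snippet and query embeddings."""
    s = embed(snippet)
    denom = np.linalg.norm(s) * np.linalg.norm(q)
    return float(s @ q / denom) if denom else 0.0

# Sort evidence so the most query-relevant snippets surface first.
ranked = sorted(snippets, key=score, reverse=True)
```

Overlapping cardiac terms push the first and third snippets to the top, while the unrelated flu-shot note sinks to the bottom.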

Our interface lets a clinician supplement their review of a patient’s record with our model’s risk predictions and surfaced evidence, and then annotate how useful that evidence actually is for understanding the patient. (2/6)

28.02.2024 18:53 · 👍 0   🔁 0   💬 1   📌 0
Post image

Our work on reducing diagnostic errors with interpretable risk prediction is now on arXiv!

We retrieve evidence from a patient’s record, visualize how it informs a prediction, and test it in a realistic setting. 👇 (1/6)

arxiv.org/abs/2402.10109
w/ @byron.bsky.social and @jwvdm.bsky.social

28.02.2024 18:52 · 👍 2   🔁 1   💬 1   📌 1
Post image

we find it makes efficient use of features. (6/6)

25.10.2023 14:21 · 👍 0   🔁 0   💬 0   📌 0
Post image

This method also shows promise in being data-efficient, and... (5/6)

25.10.2023 14:21 · 👍 0   🔁 0   💬 0   📌 0
Post image

Inspection of individual instances with this approach yields insights as to what went right and what went wrong. (4/6)

25.10.2023 14:20 · 👍 0   🔁 0   💬 0   📌 0
Post image

We find that most of the coefficients of the linear model align with clinical expectations for the corresponding feature! (3/6)

25.10.2023 14:19 · 👍 0   🔁 0   💬 0   📌 0
Post image

Not only do we see decent accuracy at feature extraction itself, but we also see reasonable performance on the downstream tasks in comparison with using ground truth features. (2/6)

25.10.2023 14:18 · 👍 0   🔁 0   💬 0   📌 0
Post image

Very excited our “CHiLL” paper was accepted to #EMNLP2023 Findings!

Can we craft arbitrary high-level features without training? 👇 (1/6)

We ask a doctor to write high-level questions, pose them to an LLM, and train an interpretable model on the answers.

arxiv.org/abs/2302.12343
w/ @jwvdm.bsky.social and @byron.bsky.social

25.10.2023 14:18 · 👍 2   🔁 0   💬 5   📌 1
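The CHiLL recipe above can be sketched end to end with mock data: a clinician writes yes/no questions, an LLM answers them per note (here, random binary stand-ins), and a simple linear model is fit on those answers. The questions, data, labels, and coefficients below are all hypothetical, not results from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical clinician-authored yes/no questions.
questions = [
    "Does the note mention shortness of breath?",
    "Is the patient on anticoagulants?",
    "Is there a history of heart failure?",
]

# Mock binary LLM answers for 200 notes, and labels drawn from a
# mock ground-truth logistic process.
X = rng.integers(0, 2, size=(200, len(questions))).astype(float)
true_w = np.array([1.5, -0.5, 2.0])
y = (1 / (1 + np.exp(-(X @ true_w - 1.0))) > rng.random(200)).astype(float)

# Fit logistic regression by plain gradient descent.
w, b = np.zeros(len(questions)), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Each coefficient reads directly as that question's effect on risk.
for q_text, coef in zip(questions, w):
    print(f"{coef:+.2f}  {q_text}")
```

The payoff of the linear model is that every learned coefficient maps one-to-one onto a clinician-readable question, which is what makes checking them against clinical expectations possible.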
