Mohit Chandra's Avatar

Mohit Chandra

@mohit30.bsky.social

PhDing @GeorgiaTech | Previously: @msftresearch.bsky.social, @Microsoft @iiithyderabad | Research: NLP and Social Computing for Healthcare | Opinions are personal Homepage: https://mohit3011.github.io/ #ResponsibleAI #Human-CenteredAI #NLPforMentalHealth

169 Followers  |  144 Following  |  23 Posts  |  Joined: 19.11.2024

Latest posts by mohit30.bsky.social on Bluesky

Hello Athens! πŸ‘‹β˜€οΈ

Excited to be attending #FAccT 2025 and presenting our paper β€œFrom Lived Experience to Insight” on 24th June at 10:45 AM (New Stage C)

dl.acm.org/doi/10.1145/...

Would love to catch up with old friends and make new ones and talk about AI and mental health πŸ˜„

23.06.2025 21:01 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Congratulations! πŸ™Œ

30.05.2025 16:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

For more details:

Paper: shorturl.at/bldCb
Webpage: shorturl.at/bC1zn
Code: shorturl.at/H8xmp

Grateful for the efforts from my co-authors πŸ™Œ: Siddharth Sriraman, @verma22gaurav.bsky.social, Harneet Singh Khanuja, Jose Suarez Campayo, Zihang Li, Michael L. Birnbaum, Munmun De Choudhury

11/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Table 3: Mean actionability alignment scores of harm reduction strategies (last column), computed as the average of practicality, relevance, specificity, and clarity scores.

Finding #6: We examined the actionability of the mitigation advice. Expert responses scored the highest on overall actionability compared with all the LLMs.

While LLMs provide less practical and relevant advice, their responses are clearer and more specific.
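
For readers curious about the arithmetic in Table 3, here is a minimal sketch of the aggregation; the rubric values are made-up placeholders, not the paper's numbers:

```python
# Sketch: overall actionability as the mean of four rubric dimensions
# (practicality, relevance, specificity, clarity), as described in Table 3.
# The scores below are illustrative placeholders, not results from the paper.
import statistics

rubric_scores = {
    "practicality": 3.8,
    "relevance": 4.1,
    "specificity": 4.4,
    "clarity": 4.6,
}

actionability = statistics.mean(rubric_scores.values())
print(f"Overall actionability: {actionability:.2f}")
```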

10/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Table 2: Alignment of harm reduction strategies of various models with the expert's response. We report the mean and standard deviation for the AlignScore and GPT-4o score metrics, with the best (bold) and second-best (underline) performing model in each metric highlighted.

Finding #5: LLMs struggle to provide expert-aligned harm reduction strategies, with larger models producing less expert-aligned strategies than smaller ones.

The best medical model aligned with experts ~71% (GPT-4o score) of the time.
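
As a rough illustration of how expert-LLM alignment can be scored with the open-source AlignScore package: the constructor arguments below follow my reading of its README, and the checkpoint path and texts are placeholders, so treat the exact signature as an assumption rather than the paper's pipeline:

```python
# Sketch (assumed API): scoring how well an LLM's harm reduction strategy
# aligns with an expert response using the AlignScore package.
# Checkpoint path and texts are placeholders, not the paper's setup.
from alignscore import AlignScore

scorer = AlignScore(
    model="roberta-base",
    batch_size=32,
    device="cuda:0",
    ckpt_path="/path/to/AlignScore-base.ckpt",
    evaluation_mode="nli_sp",
)

expert = "Contact your prescriber; this restlessness may be akathisia."
llm_out = "This sounds like akathisia, a known side effect; please reach out to your prescribing clinician."

score = scorer.score(contexts=[expert], claims=[llm_out])
print(score)  # list with one alignment score in [0, 1]
```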

9/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Mean SMOG Scores (for readability) and 95% Confidence Intervals for Various Models (lower values are better).

Using the ADRA framework, we evaluate LLM alignment with experts across expressed emotion, readability, harm reduction strategies, & actionable advice.

Finding #4: We find that LLMs express similar emotions and tones but produce significantly harder-to-read responses.
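
A small sketch of the readability measurement, assuming the `textstat` Python package; the example responses are invented, not model outputs from the paper:

```python
# Sketch: mean SMOG readability with a normal-approximation 95% confidence
# interval, assuming `textstat` (pip install textstat). Lower SMOG = easier to read.
import math
import statistics
import textstat

responses = [
    "Please talk to your prescriber before changing the dose.",
    "Akathisia can present as restlessness; contact your clinician promptly.",
]

smog = [textstat.smog_index(r) for r in responses]
mean = statistics.mean(smog)
ci = 1.96 * statistics.stdev(smog) / math.sqrt(len(smog))
print(f"Mean SMOG: {mean:.2f} +/- {ci:.2f} (lower values are better)")
```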

8/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Finding #3: In-context learning boosted performance for both ADR detection and multiclass classification (+23 F1 points for the latter). However, gains on the ADR detection task were limited to a few models.

The type of examples had a more pronounced impact on the ADR multiclass classification task.
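
To make the in-context learning setup concrete, here is a hypothetical sketch of how a 5-shot ADR detection prompt could be assembled; the instruction wording and examples are not the paper's prompts:

```python
# Sketch: building a few-shot (in-context learning) prompt for ADR detection.
# Examples and instruction text are hypothetical placeholders.
examples = [
    ("Started sertraline and now I can't sit still, it's unbearable.", "ADR"),
    ("My new therapist is great, feeling hopeful this week.", "No ADR"),
    # ... three more labeled examples would follow in a true 5-shot setup
]

def build_prompt(post: str) -> str:
    header = "Decide whether the Reddit post describes an adverse drug reaction (ADR).\n\n"
    shots = "".join(f"Post: {text}\nLabel: {label}\n\n" for text, label in examples)
    return header + shots + f"Post: {post}\nLabel:"

print(build_prompt("Since upping my dose I get dizzy every morning."))
```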

7/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Finding #2: All LLMs showed "risk-averse" behavior, labeling no-ADR posts as ADR. Claude 3 Opus had a 42% false-positive rate for ADR detection, and GPT-4-Turbo misclassified over 50% of non-dose/time-related ADRs.

This highlights the lack of "lived experience" among models.
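
For reference, a false-positive rate like the 42% cited above can be computed from a confusion matrix; the labels below are illustrative only:

```python
# Sketch: false-positive rate for binary ADR detection, i.e. the share of
# true "no ADR" posts that a model labels as "ADR". Labels are illustrative.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 0, 1, 0]   # 1 = ADR present, 0 = no ADR
y_pred = [1, 0, 1, 1, 1, 0, 1, 1]   # an over-flagging ("risk-averse") model

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)
print(f"False-positive rate: {fpr:.0%}")
```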

6/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Table 1: Performance of different models on Binary Detection and Multiclass Classification tasks under Zero-Shot and 5-Shot scenarios. We report the accuracy score (Acc.) and weighted F1 score (F1), with the best and second-best performing model metrics in each scenario highlighted in bold and underline, respectively.

Finding #1: Larger models perform better on the ADR detection task (Claude 3 Opus led with an accuracy of 77.41%), but this trend does not hold for ADR multiclass classification. Additionally, distinguishing between ADR types remains a significant challenge for all models.
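
The Table 1 metrics (accuracy and weighted F1) can be computed with scikit-learn; the labels shown here are hypothetical, not the benchmark data:

```python
# Sketch: evaluation metrics from Table 1 on hypothetical multiclass ADR labels.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["dose", "time", "other", "dose", "other"]
y_pred = ["dose", "other", "other", "dose", "time"]

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average="weighted")
print(f"Acc.: {acc:.2%}  weighted F1: {f1:.2%}")
```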

5/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Figure 1: Overview of the work; we present two tasks: ADR detection and multiclass classification (RQ1), and Expert-LLM response alignment (RQ2).

We introduce Psych-ADR, a benchmark of Reddit posts annotated for ADR presence/type and paired with expert-written responses, along with the ADRA framework to systematically evaluate long-form generations in detecting ADR expressions and delivering mitigation strategies.
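
One plausible way a Psych-ADR record could be represented is sketched below; the field names are my assumption for illustration, and the released dataset may use a different schema:

```python
# Sketch (assumed schema): one benchmark entry pairing an annotated Reddit
# post (RQ1 labels) with an expert-written response (RQ2 reference).
from dataclasses import dataclass
from typing import Optional

@dataclass
class PsychADRExample:
    post: str                  # Reddit post text
    has_adr: bool              # binary ADR presence label (RQ1)
    adr_type: Optional[str]    # ADR category for multiclass classification
    expert_response: str       # expert-written harm reduction response (RQ2)

example = PsychADRExample(
    post="Since starting the new medication I feel restless every night.",
    has_adr=True,
    adr_type="other",
    expert_response="This could be akathisia; please contact your prescriber.",
)
```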

4/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Broader Takeaway #2: To build reliable AI in healthcare, we must move beyond choice-based benchmarks toward tasks that capture the complexities of the real world (such as ADR mitigation), using nuanced frameworks and benchmarks. 📈

Below are some nuanced findings πŸ‘‡

3/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Broader Takeaway #1: LLMs are tools to empower, not replace, mental health professionals. They offer clear and specific advice, helping address the global shortage of care providers, but contextually relevant, practical advice still requires human expertise. 👨‍⚕️👩‍⚕️

2/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
(Left) Screenshot of the paper title and author information. (Right) An image of a Llama doctor and a human doctor examining a patient for an adverse drug reaction. The Llama doctor suspects that the patient is suffering from an ADR, whereas the human doctor suspects that it is not an ADR.

Adverse Drug Reactions (ADRs) are among the leading causes of hospitalizations for mental health issues. Despite existing limitations, LLMs have the potential to detect ADRs and provide mitigation strategies.

But do LLMs align with experts? πŸ€” We explore this in our work πŸ‘‡πŸΌπŸ§΅

shorturl.at/bldCb
1/11

07.01.2025 21:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This is so far our best call for papers! We completely re-thought what a CFP is, and.... mostly copied last year's while surgically increasing some numbers

17.12.2024 15:55 β€” πŸ‘ 15    πŸ” 2    πŸ’¬ 3    πŸ“Œ 0

Great work! πŸ‘

13.12.2024 01:13 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This semester in a nutshell:

Write Paper -> Write Rebuttal -> Repeat

Also writing rebuttals is significantly more tiring than writing papers.

03.12.2024 20:16 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Yup! I joined recently along with a large number of folks, and I guess it will become like academic Twitter if people continue to engage on the platform.

25.11.2024 23:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Really amazing work! Very insightful.

25.11.2024 23:34 β€” πŸ‘ 10    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I'm loving my Bluesky feed. It feels quite similar to what my timeline used to look like before the Twitter buyout.

I hope more people switch to this platform and make academic discussions mainstream again!

#AcademicSky

25.11.2024 23:30 β€” πŸ‘ 6    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Thank you so much!

25.11.2024 18:16 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I would love to get added if possible!

25.11.2024 08:00 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Congratulations!

It is certainly a good start, but I still feel we need more interdisciplinary reviewers (based on the reviews I have gotten). One issue is the requirement for reviewers to have at least 3 *CL papers in the past 5 years, which many researchers might not have.

Something ACs could look into?

24.11.2024 05:43 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Thank you!

22.11.2024 21:14 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Would love to get added to this!

22.11.2024 20:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
