
Daniel Chechelnitsky

@dchechel.bsky.social

PhDing @ CMU LTI

71 Followers  |  223 Following  |  9 Posts  |  Joined: 24.02.2025

Posts by Daniel Chechelnitsky (@dchechel.bsky.social)


🎭 How do LLMs (mis)represent culture?
🧮 How often?
🧠 Misrepresentations = missing knowledge? Spoiler: NO!

At #CHI2026 we are bringing ✨TALES✨, a participatory evaluation of cultural (mis)representations & knowledge in multilingual LLM stories for India

📜 arxiv.org/abs/2511.21322

1/10

02.02.2026 21:38 — 👍 45    🔁 21    💬 1    📌 2
Screenshot of paper title and authors. 

Title: Social Story Frames: Contextual Reasoning about Narrative Intent and Reception
Authors: Joel Mire, Maria Antoniak, Steven R. Wilson, Zexin Ma, Achyutarama R. Ganti, Andrew Piper, Maarten Sap


Reading social media stories evokes a wide range of contextual reader reactions—inferential, affective, evaluative—yet we lack methods to study these at scale.

Excited to share our new paper that builds a framework for analyzing storytelling practices across online communities!

19.12.2025 23:05 — 👍 22    🔁 7    💬 1    📌 1
We're investigating how publishers handle name changes and the barriers scholars face. If you've changed your name (or are considering it) and dealt with updating your academic publications, we want to hear from you.

We want to hear from researchers who have changed their name for any reason, such as gender transition, marriage, divorce, immigration, cultural reasons, or citation formatting issues. Whether you've successfully updated your work, are currently trying, or decided not to because of barriers, your experience matters.

Your input will help us advocate for better, more inclusive policies in academic publishing. It takes around 5-10 minutes to complete.

Survey Link: https://forms.cloud.microsoft/e/E0XXBmZdEP

Please share with anyone who might benefit.


We're surveying researchers about name changes in academic publishing.

If you've changed your name and dealt with updating publications, we want to hear your experience. Any reason counts: transition, marriage, cultural reasons, etc.

forms.cloud.microsoft/e/E0XXBmZdEP

21.10.2025 12:45 — 👍 16    🔁 23    💬 2    📌 2

tomorrow 6/20, i'm presenting this paper at #alt_FAccT, a NYC local meeting for @FAccTConference

✨🎤 paper session #3 🎤✨
🗽 1:30pm June 20, Fri @ MSR NYC 🗽

⬇️ our #FAccT2025 paper is abt “what if ur ChatGPT spoke queer slang and AAVE?”

📚🔗 bit.ly/not-like-us

20.06.2025 00:13 — 👍 5    🔁 1    💬 0    📌 0

This was done with my co-author Jeffrey Basoah; collaborators @taolongg.bsky.social, @katharinareinecke.bsky.social, @kaitlynzhou.bsky.social, and @blahtino.bsky.social; and my advisors @chryssazrv.bsky.social and @maartensap.bsky.social at @ltiatcmu.bsky.social and @istecnico.bsky.social!

[9/9]

17.06.2025 19:39 โ€” ๐Ÿ‘ 4    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0

And here are some examples where users enjoyed interacting with the sociolectal LLMs:

😊 “It just sounds more fun to interact with” -AAE participant

💅 “I enjoy being called a diva!” -Queer slang participant

[8/9]

17.06.2025 19:39 — 👍 0    🔁 0    💬 1    📌 0

Lastly, we asked users to justify their LLM preference. Here are a few comments about the sociolect LLMs:

🚫 “Agent [AAELM] using AAE sounds like a joke and not natural.” -AAE participant

🚫 “Even people who use LGBTQ slang don’t talk like that constantly...” -Queer slang participant

[7/9]

17.06.2025 19:39 — 👍 0    🔁 0    💬 1    📌 0

We were also curious how each of these user perceptions affected reliance on the LLMs. We observed that, in general, the perception variables were positively associated with reliance. 😄
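A toy sketch of the kind of association check this implies, using a plain Pearson correlation between a perception rating (e.g., trust) and reliance; the data are invented for illustration and the paper's actual analysis may use different variables and statistics:

```python
# Hypothetical sketch: correlate a perception rating with observed
# reliance. All numbers below are made up for illustration only.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

trust = [1, 2, 3, 4, 5]               # hypothetical trust ratings
reliance = [0.2, 0.3, 0.5, 0.6, 0.9]  # hypothetical reliance rates
print(round(pearson(trust, reliance), 2))  # strongly positive here
```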

[6/9]

17.06.2025 19:39 โ€” ๐Ÿ‘ 0    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0

We also measured user perceptions: trust, social proximity, satisfaction, frustration, and explicit preference for an LLM using sociolects.

Notably, AAE participants explicitly preferred the SAELM over the AAELM, whereas this wasn’t the case for Queer slang participants. 💙💚

[5/9]

17.06.2025 19:39 — 👍 0    🔁 0    💬 1    📌 0

In our study we find that AAE users rely more on the SAE LLM than on the AAELM, while for Queer slang users there is no difference between the SAE LLM and the QSLM.

This shows that for some sociolects, users rely more on an LLM using Standard English than on one using a sociolect they speak themselves. 🤎🩷
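As a rough illustration of what such a reliance comparison can look like (entirely made-up data, not the study's; the paper's actual measures and tests may differ):

```python
# Hypothetical data: 1 = participant consulted that LLM on a trial,
# 0 = they did not. Compares reliance on the SAE LLM vs. the AAELM.
def reliance_rate(choices):
    """Fraction of trials on which participants relied on that LLM."""
    return sum(choices) / len(choices)

sae_llm_trials = [1, 1, 0, 1, 1, 0, 1, 1]  # made-up SAE-LLM trials
aaelm_trials = [1, 0, 0, 1, 0, 0, 1, 0]    # made-up AAELM trials

gap = reliance_rate(sae_llm_trials) - reliance_rate(aaelm_trials)
print(f"reliance gap (SAE minus AAELM): {gap:.3f}")
```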

[4/9]

17.06.2025 19:39 โ€” ๐Ÿ‘ 0    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0

We run two parallel studies:

1: with AAE speakers using an AAE LLM (AAELM) 👋🏾
2: with Queer slang speakers using a Queer slang LLM (QSLM) 🏳️‍🌈

In each, participants watched videos and could choose either a Standard English LLM or the AAELM/QSLM to help answer questions.

[3/9]

17.06.2025 19:39 — 👍 0    🔁 0    💬 1    📌 0

Our study (n=985) looks at how AAVE speakers and Queer slang speakers perceive and rely on LLMs’ use of their sociolect (i.e., a dialect associated with a particular social group). 🗣️

This addresses our main research question:

“How do users behave and feel when engaging with a sociolectal LLM?” 🤷🏻🤷🏾‍♀️🤷🏽‍♂️

[2/9]

17.06.2025 19:39 — 👍 0    🔁 1    💬 1    📌 0

What if AI played the role of your sassy gay bestie 🏳️‍🌈 or AAVE-speaking friend 👋🏾?

You: “Can you plan a trip?”
🤖 AI: “Yasss queen! let’s werk this babe ✨💅”

LLMs can talk like us, but it shapes how we trust, rely on & relate to them 🧵

📣 our #FAccT2025 paper: bit.ly/3HJ6rWI

[1/9]

17.06.2025 19:39 — 👍 13    🔁 6    💬 1    📌 2
An overview of the work “Research Borderlands: Analysing Writing Across Research Cultures” by Shaily Bhatt, Tal August, and Maria Antoniak. The overview describes that we survey and interview interdisciplinary researchers (§3) to develop a framework of writing norms that vary across research cultures (§4) and operationalise them using computational metrics (§5). We then use this evaluation suite for two large-scale quantitative analyses: (a) surfacing variations in writing across 11 communities (§6); (b) evaluating the cultural competence of LLMs when adapting writing from one community to another (§7).

An overview of the work โ€œResearch Borderlands: Analysing Writing Across Research Culturesโ€ by Shaily Bhatt, Tal August, and Maria Antoniak. The overview describes that We survey and interview interdisciplinary researchers (ยง3) to develop a framework of writing norms that vary across research cultures (ยง4) and operationalise them using computational metrics (ยง5). We then use this evaluation suite for two large-scale quantitative analyses: (a) surfacing variations in writing across 11 communities (ยง6); (b) evaluating the cultural competence of LLMs when adapting writing from one community to another (ยง7).

๐Ÿ–‹๏ธ Curious how writing differs across (research) cultures?
๐Ÿšฉ Tired of โ€œculturalโ€ evals that don't consult people?

We engaged with interdisciplinary researchers to identify & measure โœจcultural normsโœจin scientific writing, and show thatโ—LLMs flatten themโ—
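For a flavour of how a writing norm can be operationalised as a computational metric, here is a hypothetical hedging-frequency metric; the word list and normalisation are illustrative assumptions, not necessarily part of the paper's evaluation suite:

```python
# Hypothetical metric: hedging terms per 100 words, one writing norm
# known to vary across research communities. Word list is illustrative.
HEDGES = {"may", "might", "could", "possibly", "perhaps", "suggests"}

def hedging_per_100_words(text: str) -> float:
    """Count hedge words per 100 tokens of whitespace-split text."""
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,;:()") in HEDGES)
    return 100.0 * hits / max(len(words), 1)

print(hedging_per_100_words("The model may possibly fail."))
```

Comparing such scores across communities, or between human text and its LLM-adapted version, is one way to quantify the flattening claim above.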

📜 arxiv.org/abs/2506.00784

[1/11]

09.06.2025 23:29 — 👍 72    🔁 30    💬 1    📌 5
Academic paper titled “Un-Straightening Generative AI: How Queer Artists Surface and Challenge the Normativity of Generative AI Models”.

The piece is written by Jordan Taylor, Joel Mire, Franchesca Spektor, Alicia DeVrio, Maarten Sap, Haiyi Zhu, and Sarah Fox.

Also shown: an image titled “24 attempts at intimacy”, showing 24 AI-generated images for the word “intimacy”, none of which seem to include same-gender couples.


๐Ÿณ๏ธโ€๐ŸŒˆ๐ŸŽจ๐Ÿ’ป๐Ÿ“ข Happy to share our workshop study on queer artistsโ€™ experiences critically engaging with GenAI

Looking forward to presenting this work at #FAccT2025 and you can read a pre-print here:
arxiv.org/abs/2503.09805

14.05.2025 18:38 โ€” ๐Ÿ‘ 27    ๐Ÿ” 4    ๐Ÿ’ฌ 2    ๐Ÿ“Œ 0

RLHF is built on some oversimplified assumptions, e.g., that preferences between pairs of text are purely about quality. But preference annotation is an inherently subjective task (not unlike toxicity annotation) -- so we wanted to know: do biases similar to those in toxicity annotation emerge in reward models?

06.03.2025 20:54 — 👍 24    🔁 3    💬 1    📌 0
Screenshot of Arxiv paper title, "Rejected Dialects: Biases Against African American Language in Reward Models," and author list: Joel Mire, Zubin Trivadi Aysola, Daniel Chechelnitsky, Nicholas Deas, Chrysoula Zerva, and Maarten Sap.


Reward models for LMs are meant to align outputs with human preferences—but do they accidentally encode dialect biases? 🤔

Excited to share our paper on biases against African American Language in reward models, accepted to #NAACL2025 Findings! 🎉

Paper: arxiv.org/abs/2502.12858 (1/10)
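To make the bias-probing idea concrete, here is a minimal sketch of comparing reward scores on aligned SAE/AAE text pairs. `reward_model` is a hypothetical stub standing in for a trained reward model's scalar score (in practice you would load one, e.g., via a sequence-classification head); the heuristic inside it exists only so the sketch runs end to end, and the pairs are illustrative, not from the paper's dataset:

```python
# Sketch of a dialect-bias probe over aligned SAE/AAE minimal pairs.
def reward_model(text: str) -> float:
    # Placeholder stub, NOT a real reward model: a real one returns a
    # learned scalar preference score for the text.
    return 1.0 - 0.1 * text.count("finna")

def reward_gap(pairs):
    """Mean reward difference (SAE minus AAE) over aligned text pairs.
    A positive gap would suggest the model systematically prefers SAE."""
    gaps = [reward_model(sae) - reward_model(aae) for sae, aae in pairs]
    return sum(gaps) / len(gaps)

pairs = [  # illustrative aligned pairs, not the paper's data
    ("I am about to leave.", "I'm finna leave."),
    ("They are doing well.", "They doin good."),
]
print(f"mean reward gap: {reward_gap(pairs):.3f}")
```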

06.03.2025 19:49 — 👍 38    🔁 11    💬 1    📌 2

Looking for all your LTI friends on Bluesky? The LTI Starter Pack is here to help!

go.bsky.app/NhTwCVb

20.11.2024 16:15 — 👍 15    🔁 9    💬 6    📌 1