Daniel Chechelnitsky

@dchechel.bsky.social

PhDing @ CMU LTI

40 Followers  |  90 Following  |  9 Posts  |  Joined: 24.02.2025

Latest posts by dchechel.bsky.social on Bluesky

We're investigating how publishers handle name changes and the barriers scholars face. If you've changed your name (or are considering it) and dealt with updating your academic publications, we want to hear from you.

We're looking for researchers who have changed their name for any reason, such as gender transition, marriage, divorce, immigration, cultural reasons, or citation formatting issues. Whether you've successfully updated your work, are currently trying, or decided not to because of barriers, your experience matters.

Your input will help us advocate for better, more inclusive policies in academic publishing. The survey takes around 5-10 minutes to complete.

Survey Link: https://forms.cloud.microsoft/e/E0XXBmZdEP

Please share with anyone who might benefit.

We're surveying researchers about name changes in academic publishing.

If you've changed your name and dealt with updating publications, we want to hear your experience. Any reason counts: transition, marriage, cultural reasons, etc.

forms.cloud.microsoft/e/E0XXBmZdEP

21.10.2025 12:45 β€” πŸ‘ 12    πŸ” 17    πŸ’¬ 2    πŸ“Œ 1

tomorrow 6/20, i'm presenting this paper at #alt_FAccT, a NYC local meeting for @FAccTConference

✨🎀 paper session #3 🎀✨
πŸ—½1:30pm June 20, Fri @ MSR NYCπŸ—½

⬇️ our #FAccT2025 paper is abt β€œwhat if ur ChatGPT spoke queer slang and AAVE?”

πŸ“šπŸ”— bit.ly/not-like-us

20.06.2025 00:13 β€” πŸ‘ 5    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

This was done with my co-author Jeffrey Basoah; collaborators @taolongg.bsky.social, @katharinareinecke.bsky.social, @kaitlynzhou.bsky.social, and @blahtino.bsky.social; and my advisors @chryssazrv.bsky.social and @maartensap.bsky.social at @ltiatcmu.bsky.social and @istecnico.bsky.social!

[9/9]

17.06.2025 19:39 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

And here are some examples where users enjoyed interacting with the sociolectal LLMs:

😊 β€œIt just sounds more fun to interact with” -AAE participant

πŸ’… β€œI enjoy being called a diva!” -Queer slang participant

[8/9]

17.06.2025 19:39 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Lastly, we asked users to justify their LLM preference. Here are a few comments about the sociolect LLMs:

πŸš«β€œAgent [AAELM] using AAE sounds like a joke and not natural.” -AAE participant

πŸš«β€œEven people who use LGBTQ slang don’t talk like that constantly...” -Queer slang participant

[7/9]

17.06.2025 19:39 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We were also curious to see how each of these user perceptions impacted user reliance on LLMs. Here we observed that, in general, perception variables were positively associated with reliance. πŸ˜„

[6/9]
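For readers curious what such an association test could look like in practice, here is a minimal, purely illustrative sketch (not the paper's actual analysis, and using made-up data): a logistic regression relating a perception rating such as trust to whether a participant relied on the sociolectal LLM on a given question.

```python
# Illustrative sketch only -- hypothetical data, not the paper's analysis.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant-question pair: a 1-5 trust rating and whether
# the participant relied on the sociolectal LLM for that question.
df = pd.DataFrame({
    "trust":  [5, 4, 5, 2, 1, 3, 4, 2, 5, 3],
    "relied": [1, 1, 0, 0, 0, 1, 1, 0, 1, 0],
})

# Logistic regression: a positive coefficient on `trust` would indicate that
# higher trust is associated with higher odds of relying on the sociolectal LLM.
model = smf.logit("relied ~ trust", data=df).fit(disp=False)
print(model.params["trust"])
```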

17.06.2025 19:39 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We also measured user perceptions: trust, social proximity, satisfaction, frustration, and explicit preference for an LLM using sociolects.

Notably, AAE participants explicitly preferred the SAELM over the AAELM, whereas this wasn't the case for Queer slang participants. πŸ’™πŸ’š

[5/9]

17.06.2025 19:39 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

In our study we find that AAE users rely more on the SAE LLM than on the AAELM, while for Queer slang users there is no difference between the SAE LLM and the QSLM.

This shows that for some sociolects, users will rely more on an LLM using Standard English than on one using a sociolect they speak themselves. 🀎🩷

[4/9]

17.06.2025 19:39 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We run two parallel studies:

1: with AAE speakers using AAE LLM (AAELM) πŸ‘‹πŸΎ
2: with Queer slang speakers using Queer slang LLM (QSLM) πŸ³οΈβ€πŸŒˆ

In each, participants watched videos and could choose to use either a Standard English LLM (SAELM) or the AAELM/QSLM to help answer questions.

[3/9]
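As a purely hypothetical illustration of how such a choice could be turned into a reliance measure (this is not taken from the paper), one simple option is the fraction of questions on which a participant consulted the sociolectal LLM rather than the Standard English one:

```python
# Hypothetical sketch: one simple way to quantify per-participant reliance.
from collections import Counter

def reliance_rate(choices: list[str], target: str = "QSLM") -> float:
    """choices: which LLM the participant consulted on each question,
    e.g. 'QSLM' (or 'AAELM') vs. 'SAELM'."""
    counts = Counter(choices)
    total = sum(counts.values())
    return counts.get(target, 0) / total if total else 0.0

# A participant who consulted the Queer slang LLM on 2 of 5 questions:
print(reliance_rate(["QSLM", "SAELM", "QSLM", "SAELM", "SAELM"]))  # 0.4
```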

17.06.2025 19:39 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Our study (n=985) looks at how AAVE speakers and Queer slang speakers perceive and rely on LLMs’ use of their sociolect (i.e., a dialect associated with a particular social group). πŸ—£οΈ

This answers our main research question:

β€œHow do users behave and feel when engaging with a sociolectal LLM?” πŸ€·πŸ»πŸ€·πŸΎβ€β™€οΈπŸ€·πŸ½β€β™‚οΈ

[2/9]

17.06.2025 19:39 β€” πŸ‘ 0    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

What if AI played the role of your sassy gay bestie πŸ³οΈβ€πŸŒˆ or AAVE-speaking friend πŸ‘‹πŸΎ?

You: β€œCan you plan a trip?”
πŸ€– AI: β€œYasss queen! let’s werk this babeβœ¨πŸ’…β€

LLMs can talk like us, but it shapes how we trust, rely on & relate to them 🧡

πŸ“£ our #FAccT2025 paper: bit.ly/3HJ6rWI

[1/9]

17.06.2025 19:39 β€” πŸ‘ 13    πŸ” 6    πŸ’¬ 1    πŸ“Œ 2
An overview of the work β€œResearch Borderlands: Analysing Writing Across Research Cultures” by Shaily Bhatt, Tal August, and Maria Antoniak. The overview describes how the authors survey and interview interdisciplinary researchers (Β§3) to develop a framework of writing norms that vary across research cultures (Β§4) and operationalise them using computational metrics (Β§5). They then use this evaluation suite for two large-scale quantitative analyses: (a) surfacing variations in writing across 11 communities (Β§6); (b) evaluating the cultural competence of LLMs when adapting writing from one community to another (Β§7).

πŸ–‹οΈ Curious how writing differs across (research) cultures?
🚩 Tired of β€œcultural” evals that don't consult people?

We engaged with interdisciplinary researchers to identify & measure ✨cultural norms✨ in scientific writing, and show that❗LLMs flatten them❗

πŸ“œ arxiv.org/abs/2506.00784

[1/11]

09.06.2025 23:29 β€” πŸ‘ 74    πŸ” 30    πŸ’¬ 1    πŸ“Œ 5
Academic paper titled β€œUn-Straightening Generative AI: How Queer Artists Surface and Challenge the Normativity of Generative AI Models”.

The piece is written by Jordan Taylor, Joel Mire, Franchesca Spektor, Alicia DeVrio, Maarten Sap, Haiyi Zhu, and Sarah Fox.

An accompanying image titled β€œ24 Attempts at Intimacy” shows 24 AI-generated images for the word β€œintimacy”, none of which appears to include a same-gender couple.

πŸ³οΈβ€πŸŒˆπŸŽ¨πŸ’»πŸ“’ Happy to share our workshop study on queer artists’ experiences critically engaging with GenAI

Looking forward to presenting this work at #FAccT2025 and you can read a pre-print here:
arxiv.org/abs/2503.09805

14.05.2025 18:38 β€” πŸ‘ 25    πŸ” 4    πŸ’¬ 0    πŸ“Œ 0

RLHF is built on some rather oversimplified assumptions, e.g., that preferences between pairs of text are purely about quality. But preference annotation is an inherently subjective task (not unlike toxicity annotation) -- so we wanted to know: do biases similar to those in toxicity annotation emerge in reward models?
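For context, here is a hedged sketch of the pairwise setup this refers to: a reward model assigns a scalar score to each (prompt, response) pair, and whichever response scores higher is treated as β€œpreferred”. The model id and example texts below are assumptions for illustration, not the reward models studied in the paper.

```python
# Illustrative sketch of pairwise reward-model scoring; the model id is just
# an example, not one of the reward models analysed in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "OpenAssistant/reward-model-deberta-v3-large-v2"  # example reward model
tok = AutoTokenizer.from_pretrained(model_id)
rm = AutoModelForSequenceClassification.from_pretrained(model_id)

def score(prompt: str, response: str) -> float:
    """Scalar reward for a (prompt, response) pair."""
    inputs = tok(prompt, response, return_tensors="pt")
    with torch.no_grad():
        return rm(**inputs).logits[0].item()

prompt = "Explain what a sociolect is."
resp_a = "A sociolect is a language variety associated with a social group."
resp_b = "A sociolect is, like, how a crew talks, you feel me?"

# RLHF treats whichever response scores higher as "preferred" -- the score says
# nothing about *why*, which is where dialect biases can creep in.
print(score(prompt, resp_a), score(prompt, resp_b))
```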

06.03.2025 20:54 β€” πŸ‘ 24    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0
Screenshot of Arxiv paper title, "Rejected Dialects: Biases Against African American Language in Reward Models," and author list: Joel Mire, Zubin Trivadi Aysola, Daniel Chechelnitsky, Nicholas Deas, Chrysoula Zerva, and Maarten Sap.

Reward models for LMs are meant to align outputs with human preferencesβ€”but do they accidentally encode dialect biases? πŸ€”

Excited to share our paper on biases against African American Language in reward models, accepted to #NAACL2025 Findings! πŸŽ‰

Paper: arxiv.org/abs/2502.12858 (1/10)

06.03.2025 19:49 β€” πŸ‘ 38    πŸ” 11    πŸ’¬ 1    πŸ“Œ 2

Looking for all your LTI friends on Bluesky? The LTI Starter Pack is here to help!

go.bsky.app/NhTwCVb

20.11.2024 16:15 β€” πŸ‘ 15    πŸ” 9    πŸ’¬ 6    πŸ“Œ 1
