We're investigating how publishers handle name changes and the barriers scholars face. If you've changed your name (or are considering it) and dealt with updating your academic publications, we want to hear from you.
We want to hear from researchers who have changed their name for any reason, such as gender transition, marriage, divorce, immigration, cultural reasons, or citation formatting issues. Whether you've successfully updated your work, are currently trying, or decided not to because of barriers, your experience matters.
Your input will help us advocate for better, more inclusive policies in academic publishing. It takes around 5-10 minutes to complete.
Survey Link: https://forms.cloud.microsoft/e/E0XXBmZdEP
Please share with anyone who might benefit.
We're surveying researchers about name changes in academic publishing.
If you've changed your name and dealt with updating publications, we want to hear your experience. Any reason counts: transition, marriage, cultural reasons, etc.
forms.cloud.microsoft/e/E0XXBmZdEP
21.10.2025 12:45
tomorrow 6/20, i'm presenting this paper at #alt_FAccT, a NYC local meeting for @FAccTConference
✨ paper session #3 ✨
1:30pm, Fri June 20 @ MSR NYC
⬇️ our #FAccT2025 paper is abt "what if ur ChatGPT spoke queer slang and AAVE?"
bit.ly/not-like-us
20.06.2025 00:13
This was done with my co-author Jeffrey Basoah; collaborators @taolongg.bsky.social, @katharinareinecke.bsky.social, @kaitlynzhou.bsky.social, and @blahtino.bsky.social; and my advisors @chryssazrv.bsky.social and @maartensap.bsky.social at @ltiatcmu.bsky.social and @istecnico.bsky.social!
[9/9]
17.06.2025 19:39
And here are some examples where users enjoyed interacting with the sociolectal LLMs:
"It just sounds more fun to interact with" -AAE participant
"I enjoy being called a diva!" -Queer slang participant
[8/9]
17.06.2025 19:39
Lastly, we asked users to justify their LLM preference. Here are a few comments about the sociolect LLMs:
🚫 "Agent [AAELM] using AAE sounds like a joke and not natural." -AAE participant
🚫 "Even people who use LGBTQ slang don't talk like that constantly..." -Queer slang participant
[7/9]
17.06.2025 19:39
We were also curious how each of these user perceptions affected users' reliance on the LLMs. Here we observed that, generally, the perception variables were positively associated with reliance.
[6/9]
17.06.2025 19:39
We also measured user perceptions: trust, social proximity, satisfaction, frustration, and explicit preference for an LLM using sociolects.
Notably, AAE participants explicitly preferred the SAELM over the AAELM, whereas this wasn't the case for Queer slang participants.
[5/9]
17.06.2025 19:39
In our study we find that AAE users rely more on the SAE LLM than on the AAELM, while for Queer slang users there is no difference between the SAE LLM and the QSLM.
This shows that for some sociolects, users rely more on an LLM using Standard English than on one using a sociolect they speak themselves. 🩷
[4/9]
17.06.2025 19:39
We run two parallel studies:
1: with AAE speakers using an AAE LLM (AAELM)
2: with Queer slang speakers using a Queer slang LLM (QSLM) 🏳️‍🌈
In each, participants watched videos and could choose either a Standard English LLM or the AAELM/QSLM to help answer questions.
[3/9]
17.06.2025 19:39
Our study (n=985) looks at how AAVE speakers and Queer slang speakers perceive and rely on LLMs' use of their sociolect (i.e., a dialect associated with a particular social group). 🗣️
This answers our main research question:
"How do users behave and feel when engaging with a sociolectal LLM?"
[2/9]
17.06.2025 19:39
What if AI played the role of your sassy gay bestie 🏳️‍🌈 or AAVE-speaking friend?
You: "Can you plan a trip?"
AI: "Yasss queen! let's werk this babe ✨"
LLMs can talk like us, but how they talk shapes how we trust, rely on & relate to them 🧵
📣 our #FAccT2025 paper: bit.ly/3HJ6rWI
[1/9]
17.06.2025 19:39
An overview of the work "Research Borderlands: Analysing Writing Across Research Cultures" by Shaily Bhatt, Tal August, and Maria Antoniak. The overview describes the pipeline: the authors survey and interview interdisciplinary researchers (§3) to develop a framework of writing norms that vary across research cultures (§4) and operationalise them using computational metrics (§5). They then use this evaluation suite for two large-scale quantitative analyses: (a) surfacing variations in writing across 11 communities (§6); (b) evaluating the cultural competence of LLMs when adapting writing from one community to another (§7).
Curious how writing differs across (research) cultures?
Tired of "cultural" evals that don't consult people?
We engaged with interdisciplinary researchers to identify & measure ✨cultural norms✨ in scientific writing, and show that "LLMs flatten them"
arxiv.org/abs/2506.00784
[1/11]
09.06.2025 23:29
An academic paper titled "un-straightening generative ai: how queer artists surface and challenge the normativity of generative ai models".
The paper is written by Jordan Taylor, Joel Mire, Franchesca Spektor, Alicia DeVrio, Maarten Sap, Haiyi Zhu, and Sarah Fox.
An image titled "24 attempts at intimacy", showing 24 AI-generated images prompted with the word "intimacy", none of which appears to include same-gender couples.
🏳️‍🌈 Happy to share our workshop study on queer artists' experiences critically engaging with GenAI
Looking forward to presenting this work at #FAccT2025 and you can read a pre-print here:
arxiv.org/abs/2503.09805
14.05.2025 18:38
RLHF is built on some quite simplistic assumptions, i.e., that preferences between pairs of texts are purely about quality. But this is an inherently subjective task (not unlike toxicity annotation) -- so we wanted to know: do biases similar to those in toxicity annotation emerge in reward models?
06.03.2025 20:54
Screenshot of the arXiv paper title, "Rejected Dialects: Biases Against African American Language in Reward Models," and the author list: Joel Mire, Zubin Trivadi Aysola, Daniel Chechelnitsky, Nicholas Deas, Chrysoula Zerva, and Maarten Sap.
Reward models for LMs are meant to align outputs with human preferences, but do they accidentally encode dialect biases?
Excited to share our paper on biases against African American Language in reward models, accepted to #NAACL2025 Findings!
Paper: arxiv.org/abs/2502.12858 (1/10)
06.03.2025 19:49
Looking for all your LTI friends on Bluesky? The LTI Starter Pack is here to help!
go.bsky.app/NhTwCVb
20.11.2024 16:15
Persona for personalization
Emerging Languages in Social Media
Computational Social Science | NLP
Check out my X @lrzneedresearch
https://web2.qatar.cmu.edu/~yunzex
Assistant Professor at CMU HCII | https://techsolidaritylab.com/
NLP researcher in multilinguality and AI ethics. Barista, improv comedian, guitar player, Queer in AI organizer and meditation teacher.
PhD student at @ds-hamburg.bsky.social.
Website: pranav-a.github.io
researching AI [evaluation, governance, accountability]
Seeking research scientist or post-doc roles in ethics/fairness/safety | academic transfag interested in the harms of language technologies | he/they | see also mxeddie.github.io | Eddie Ungless on LinkedIn
- computer science professor @ vassar college
- avid kirby fan, 1st ever trans minnesota butter princess 🏳️‍⚧️, cat + gf lover
- work: AI, HCI, mental health, child welfare, carcerality, trans stuff
- hudson valley (fall-spring) / minneapolis (summer)
PhD @CMU LTI
https://eeelisa.github.io/
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). June 2026 in Montreal, Canada 🇨🇦 #FAccT2026
https://facctconference.org/
Professor of HCII and LTI at Carnegie Mellon School of Computer Science.
jeffreybigham.com
At CMU, the (Blue)sky's the limit. United by curiosity and driven by passion, we reach across disciplines, forge new ground and deploy our expertise to make real change that benefits humankind.
Master's @ltiatcmu.bsky.social
https://reecursion.github.io
I research interactive and explainable Sociotechnical Artificial Intelligence, building models that push the frontier of learnability and generalizability through data-deep exploration, as well as technologies that support human collaboration and learning.
Professor at UW Allen School for Comp Science & Engineering, researching HCI and computing ethics, co-founder of LabintheWild.org
https://homes.cs.washington.edu/~reinecke/
https://www.labinthewild.org/
Assistant Professor at Instituto Superior Técnico (IST) and IT.
Interested in understanding uncertainty in data, models, life. NLP, ML and climbing fan.
NLP lab at NAIST in Nara, Japan 🦌 nlp.naist.jp/en/
CompLing group (CLAUSE) at Bielefeld U (PI: Sina Zarrieß). We work on: NLG, Language & Vision, Pragmatics & Dialogue, HateSpeech, BabyLMs, DH, and more!
clause-bielefeld.github.io
CS PhD @ColumbiaHCI • human-computer interaction researcher • creativity ✨ • prev @cornellcis @cornellucomm
at least this link is working: cs.columbia.edu/~long
Incoming Assistant Professor @cornellbowers.bsky.social
Researcher @togetherai.bsky.social
Previously @stanfordnlp.bsky.social @ai2.bsky.social @msftresearch.bsky.social
https://katezhou.github.io/
NLP PhD student at @naist-nlp.bsky.social
Our mission is to raise awareness of queer issues in AI, foster a community of queer researchers and celebrate the work of queer scientists. More about us: queerinai.com