We find that 1) acceptance of AI varies widely depending on use-case context, 2) judgments differ between demographic groups, and 3) people use both cost-benefit AND rule-based reasoning to make their decisions, with diverging strategies showing higher disagreement.
20.10.2025 08:51
To build consensus around AI use cases, it's imperative to understand how people, especially lay users, reason about AI use cases. We asked 197 participants to make decisions on individual AI use cases and share their reasoning process.
20.10.2025 08:51
Our tool Riveter 💪 used for a creative and interesting study of fan fiction! Riveter helps you work with "connotation frames" (verb lexica) to measure biases in your dataset. @julianeugarten.bsky.social's overview and explanations are really clear, highly recommend!
github.com/maartensap/r...
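The idea behind connotation-frame lexica can be sketched in a few lines: each verb carries scores for the power it implies for its agent (subject) and theme (object), and those scores are accumulated per entity across a corpus. This is a toy illustration only, not Riveter's actual API; the lexicon values, verbs, and the `score_entities` helper are made up for this sketch.

```python
# Toy sketch of a connotation-frame verb lexicon, in the spirit of Riveter.
# NOT Riveter's real API: the lexicon entries and function below are
# hypothetical, chosen only to illustrate the scoring idea.

# Each verb maps to a power score for its agent (subject) and theme (object):
# +1 means the role is framed as powerful, -1 as powerless.
POWER_LEXICON = {
    "commands": {"agent": +1, "theme": -1},
    "obeys":    {"agent": -1, "theme": +1},
    "rescues":  {"agent": +1, "theme": -1},
}

def score_entities(triples):
    """Accumulate power scores per entity from (subject, verb, object) triples."""
    scores = {}
    for subj, verb, obj in triples:
        frame = POWER_LEXICON.get(verb)
        if frame is None:
            continue  # verb not covered by the lexicon
        scores[subj] = scores.get(subj, 0) + frame["agent"]
        scores[obj] = scores.get(obj, 0) + frame["theme"]
    return scores

triples = [
    ("hades", "commands", "persephone"),
    ("persephone", "obeys", "hades"),
]
print(score_entities(triples))  # {'hades': 2, 'persephone': -2}
```

In the real tool, the (subject, verb, object) triples come from a dependency parse and the lexica are research-derived, but the aggregation step is essentially this accumulation.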
16.09.2025 13:40
Using Riveter to map gendered power dynamics in Hades/Persephone fan fiction | Transformative Works and Cultures
Proud to see my article 'Using Riveter to map gendered power dynamics in Hades/Persephone fan fiction' in @journal.transformativeworks.org, my favorite academic journal.
Want to know how fanfiction portrays power dynamics between these two? Read on!
journal.transformativeworks.org/index.php/tw...
16.09.2025 09:04
For the SoLaR workshop
@COLM_conf
we are soliciting opinion abstracts to encourage new perspectives and opinions on responsible language modeling, 1-2 of which will be selected to be presented at the workshop.
Please use the Google form below to submit your opinion abstract ⬇️
08.08.2025 12:40
We are accepting papers for the following two tracks!
ML track: algorithms, math, computation
Socio-technical track: policy, ethics, human participant research
17.06.2025 18:00
Third Workshop on Socially Responsible Language Modelling Research (SoLaR) 2025
COLM 2025 in-person Workshop, October 10th at the Palais des Congrès in Montreal, Canada
Interested in shaping the progress of safe AI and meeting leading researchers in the field? SoLaR@COLM 2025 is looking for paper submissions / reviewers!
Submit your paper / sign up to review by June 23
CFP and workshop info: solar-colm.github.io
Reviewer sign up: docs.google.com/forms/d/e/1F...
17.06.2025 17:59
How does the public conceptualize AI? Rather than self-reported measures, we use metaphors to understand the nuance and complexity of people's mental models. In our #FAccT2025 paper, we analyzed 12,000 metaphors collected over 12 months to track shifts in public perceptions.
02.05.2025 01:19
Life update! Excited to announce that I'll be starting as an assistant professor at Cornell Info Sci in August 2026! I'll be recruiting students this upcoming cycle!
An abundance of thanks to all my mentors and friends who helped make this possible!!
24.04.2025 02:03
Website: solar-colm.github.io
With:
@usmananwar.bsky.social @liweijiang.bsky.social @valentinapy.bsky.social @sharonlevy.bsky.social Daniel Tan @akhilayerukola.bsky.social @jiminmun.bsky.social Ruth Appel @sumeetrm.bsky.social @davidskrueger.bsky.social Sheila McIlraith @maartensap.bsky.social
12.05.2025 15:25
📢 The SoLaR workshop will be co-located with COLM!
@colmweb.org
SoLaR is a collaborative forum for researchers working on responsible development, deployment and use of language models.
We welcome both technical and sociotechnical submissions, deadline July 5th!
12.05.2025 15:25
Check out our work on improving LLMs' ability to seek information through asking better questions!
21.02.2025 16:16
MIT // researching fairness, equity, & pluralistic alignment in LLMs
previously @ MIT media lab, mila / mcgill
i like language and dogs and plants and ultimate frisbee and baking and sunsets
https://elinorp-d.github.io
researching AI [evaluation, governance, accountability]
Assistant Professor @ RutgersCS
Responsible AI
sharonlevy.github.io
Assistant Professor at Carnegie Mellon. Machine Learning and social impact. https://bryanwilder.github.io/
MLE@IBM watsonx | previously @virginiatech, @ibmresearch | NLP, HCI, Computational Social Science Researcher | Opinions are my own!
Incoming Assistant Professor @cornellbowers.bsky.social
Researcher @togetherai.bsky.social
Previously @stanfordnlp.bsky.social @ai2.bsky.social @msftresearch.bsky.social
https://katezhou.github.io/
Postdoc in AI at the Allen Institute for AI & the University of Washington.
https://valentinapy.github.io
Master's student @ltiatcmu.bsky.social. he/him
Asst Prof at Cornell Info Sci and Cornell Tech. Responsible AI
https://angelina-wang.github.io/
Assistant professor of CS at UC Berkeley, core faculty in Computational Precision Health. Developing ML methods to study health and inequality. "On the whole, though, I take the side of amazement."
https://people.eecs.berkeley.edu/~emmapierson/
Prof (CS @Stanford), Co-Director @StanfordHAI, Cofounder/CEO @theworldlabs, CoFounder @ai4allorg #AI #computervision #robotics #AI-healthcare
PhD student in Statistics & Data Science at CMU
Causal inference, machine learning, nonparametric statistics with applications to social science
jungholeestat.github.io
PhD student @uwnlp.bsky.social @uwcse.bsky.social | visiting researcher @MetaAI | previously @jhuclsp.bsky.social
https://stellalisy.com
PhD Student in Machine Learning at CMU. yewonbyun.github.io
Working on #NLProc for social good.
Currently at LTI at CMU. 🏳️‍🌈
Professor at UW; Researcher at Meta. LMs, NLP, ML. PNW life.
PhD student @ CMU LTI. efficiency/data in NLP/ML
PhD-ing @ LTI, CMU; Intern @ NVIDIA. Doing Reasoning with Gen AI!