If you're at #CSCW2025, check out this wonderful panel on LLMs in conversation research. Having FOMO already-- but go listen to my wonderful colleagues!
21.10.2025 16:09

@hopeschroeder.bsky.social
Studying NLP, CSS, and Human-AI interaction. PhD student @MIT. Previously at Microsoft FATE + CSS, Oxford Internet Institute, Stanford Symbolic Systems. hopeschroeder.com
Hello #COLM2025! Excited to be kicking off the NLP4Democracy workshop this morning. We are in 520E (behind A/B/C) - check out our amazing program! sites.google.com/andrew.cmu.e...
10.10.2025 13:20

We may have the chance to hire an outstanding researcher 3+ years post-PhD to join Tarleton Gillespie, Mary Gray, and me in Cambridge, MA, bringing critical sociotechnical perspectives to bear on new technologies.
jobs.careers.microsoft.com/global/en/jo...
Thanks for sharing - not just our paper, but I also learned a lot from this list! :)
24.07.2025 09:06

Awesome work and great presentation! Congrats!!
23.07.2025 13:42

Talking about this work tomorrow (Wed, July 23rd) at #IC2S2 in Norrköping during the 11 am session on LLMs, Annotation, and Synthetic Data! Come hear about this and more!
22.07.2025 21:10

Implications vary by task and domain. Researchers should clearly define their annotation constructs before reviewing LLM annotations. We are subject to anchoring bias that can affect our evaluations, or even our research findings!
Read more: arxiv.org/abs/2507.15821
Using LLM-influenced labels, even when a crowd of humans reviews them and their judgments are aggregated into a set of crowd labels, can lead to 1) different findings when the labels are used in data analysis and 2) different results when they are used as ground truth for evaluating LLM performance on the task.
22.07.2025 08:34

What happens if we use LLM-influenced labels as ground truth when evaluating LLM performance on these tasks? We can seriously overestimate that performance: F1 scores for some tasks were +0.5 higher when evaluated against LLM-influenced labels as ground truth!
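For readers who want intuition for how this inflation arises, here is a minimal sketch with entirely synthetic labels (all numbers, agreement rates, and labels below are invented for illustration and are not from the paper): when "ground truth" labels are anchored to the LLM's own suggestions, scoring the LLM against them rewards the anchoring rather than the LLM's accuracy.

```python
import random

random.seed(0)

def f1(gold, pred):
    """Binary F1 of pred against gold."""
    tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

n = 1000
# Hypothetical "true" subjective judgments (illustrative only).
true = [random.randint(0, 1) for _ in range(n)]
# An imperfect LLM annotator: agrees with truth 60% of the time.
llm = [t if random.random() < 0.6 else 1 - t for t in true]
# Unanchored human crowd labels: noisy but independent (85% accurate).
independent = [t if random.random() < 0.85 else 1 - t for t in true]
# Anchored crowd labels: adopt the LLM suggestion 80% of the time.
influenced = [l if random.random() < 0.8 else t for l, t in zip(llm, true)]

# Evaluating the same LLM against the two reference sets:
print(f1(independent, llm))  # F1 against independent labels
print(f1(influenced, llm))   # inflated F1 against LLM-influenced labels
```

The second score comes out far higher even though the LLM itself is unchanged, which is the evaluation hazard the thread describes.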
22.07.2025 08:34

However… annotators STRONGLY took LLM suggestions: just 40% of human crowd labels overlap with LLM baselines, but overlap jumps to over 80% when LLM suggestions are given (varied crowd thresholds and conditions shown in the graph). Beware: humans are subject to anchoring bias!
22.07.2025 08:33

Some findings: reviewing LLM suggestions did not make annotators go faster, and often slowed them down! On the other hand, having LLM assistance made annotators "more self-confident" in their task and content understanding, at no identified cost to their tested task understanding.
22.07.2025 08:33

We conducted experiments in which over 410 unique annotators produced over 7,000 annotations across three LLM-assistance conditions of varying strength plus a control, using two different models and two different complex, subjective annotation tasks.
22.07.2025 08:33

LLMs can be fast and promising annotators, so letting human annotators "review" first-pass LLM annotations on interpretive tasks is tempting. How does this affect productivity, the annotators themselves, evaluation of LLM performance on subjective tasks, and downstream data analysis?
22.07.2025 08:32

Excited to share our new #ACL2025 Findings paper: "Just Put a Human in the Loop? Investigating LLM-Assisted Annotation for Subjective Tasks" with Jad Kabbara and Deb Roy. Arxiv: arxiv.org/abs/2507.15821
Read about our findings below!
Thanks for attending and for your comments!!
25.06.2025 08:28

Come hear much more at our paper talk tomorrow (Wednesday, 6/25) in Meta Research and Critiques, 9:12 am at New Stage A! Read the paper here: dl.acm.org/doi/10.1145/...
24.06.2025 14:50

*What should FAccT do?* We discuss the need for the conference to clarify its policies next year, to engage scholars from different disciplines when setting policy on this delicate subject, and to engage authors in reflexive practice upstream of paper-writing, potentially through CRAFT.
24.06.2025 14:50

*Are disclosed features connected to described impacts?* Disclosed features are much less commonly described in terms of the impact they had on the research, which may leave a gap for readers to jump to conclusions about how a disclosed feature affected the work.
24.06.2025 14:50

*What do authors disclose in positionality statements?* We conducted fine-grained annotation of the statements. We find academic background and training are disclosed most often, but identity features like race and gender are also common.
24.06.2025 14:49

We reviewed papers from the entire history of FAccT for the presence of positionality statements. We find 2024 marked a significant proportional increase in papers that included positionality statements, likely as a result of PC recommendations:
24.06.2025 14:48

In 2024, the FAccT PCs recommended that authors write positionality statements (blog post: medium.com/@alexandra.o...).
24.06.2025 14:47

With ongoing reflection on computing's impact on society, and on researchers' role in shaping those impacts, positionality statements have become more common in computing venues - but little is known about their contents, or about the effect of conference policy on their presence.
24.06.2025 14:46

1) Thrilled to be at #FAccT for the first time this week, presenting a meta-research paper on positionality statements at FAccT from 2018-2024, in collaboration with @s010n.bsky.social and Akshansh Pareek: "Disclosure without Engagement: An Empirical Review of Positionality Statements at FAccT"
24.06.2025 14:46

Democracy needs you! Super excited to be co-organizing this NLP for Democracy workshop at #COLM2025 with a wonderful group. Abstracts due June 19th!
06.06.2025 15:59

Exciting news: The Fairness, Accountability, Transparency and Ethics (FATE) group at Microsoft Research NYC is hiring a predoctoral fellow!
www.microsoft.com/en-us/resear...
A whole cluster of postdoc and PhD positions in Tartu in Digital Humanities / Computational Social Science / AI, under the umbrella of big European projects.
Please consider sharing!
$25k grants for those who:
1. are working on research on STEM and education (including AI and CS, graduate education and MSIs, and scholarship that aims to reduce inequality), and
2. have had a grant recently terminated or cancelled by NSF.
Early-career scholars are prioritized.
Last poster session starting now in Pacifico North! Come say hi!
29.04.2025 06:30

Paper: dl.acm.org/doi/10.1145/...
Try the prototype: forage.ccc-mit.org