@willie-agnew.bsky.social
Queer in AI 🏳️‍🌈 | postdoc at CMU HCII | oSTEM | william-agnew.com | views my own | he/they
One day left to apply!
03.11.2025 16:15
We're all part of a giant, non-consensual experiment (how to make money with LLMs) that's causing untold harm. These things are incredibly manipulative and addictive. www.wired.com/visual-story...
31.10.2025 01:35
"I made this policymaker aware of this problem, and that later showed up in something they wrote" -- it seems really hard to show a causal relationship. Curious what others have done (or maybe y'all have gotten your names on bills) 2/2
29.10.2025 22:40
I've been working on my research statements and it's been quite challenging showing tangible impacts of my policy work. Like, I know what they are, but it's usually "I got these lines in an NDA'd draft changed" 1/
29.10.2025 22:40
Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how "AI Surrogates" entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/
21.10.2025 20:24
This is an especially worrying statement when facial recognition can be inaccurate, biased, and generally make mistakes. Basing whether someone should be in the US or not on it, and ignoring physical media like a birth certificate, is ripe for disaster. bsky.app/profile/jose...
29.10.2025 16:18
Queer in AI and oSTEM are launching our 2025 Grad School Application Mentorship program! Queer graduate school applicants, you can apply at openreview.net/group?id=Que... to get feedback on your application materials (e.g., CV, personal statement, etc.). More info @ www.queerinai.com/grad-app-aid 1/3
13.10.2025 00:35
One week left to apply!
28.10.2025 18:05
Applications close November 4th!
24.10.2025 18:24
NeurIPS 2025 is fast approaching, and preparations are underway!! We are excited to have an incredible program this year, and look forward to having your company to make it even more memorable! 🏳️‍🌈🏳️‍⚧️
Please fill out this form to sign up for our events! 🙋
forms.gle/RY9AyZ4JHiqW...
Two spuddies will be at #CSCW2025:
@yuxiwu.com will present our work on designing citizen harm reporting interfaces for privacy (and is on the job market!).
Isadora Krsek will present our work on user reactions to AI-identified self-disclosure risks online.
If you know of any openings please send them my way!
16.10.2025 23:19
I'm on the job market looking for CS/iSchool faculty and related positions! I'm broadly interested in doing research with policymakers and communities impacted by AI to inform and develop mitigations to harms and risks. If you've included any of my work in syllabi or policy docs, please let me know!
16.10.2025 23:19
✨ I'm on the academic job market ✨
I'm a PhD candidate at @hcii.cmu.edu studying tech, labor, and resistance 👩🏻‍💻💪🏽🔥
I research how workers and communities contest harmful sociotechnical systems and shape alternative futures through everyday resistance and collective action
More info: cella.io
We're grateful to have been able to help these scientists, engineers, and future medical professionals on their journeys, and want to help more! Please share this widely with your colleagues and networks to help us get this aid to those who need it. 5/5
09.10.2025 00:37
Our Application Financial Aid Program provided over $250,000 to more than 250 LGBTQIA+ scholars from over 30 countries, allowing them to apply to grad and medical schools in the first place, apply to more schools, and keep paying for rent, groceries, and other essentials. 4/5
09.10.2025 00:37
Applying to graduate schools is expensive. Queer in AI and oSTEM have been running the Financial Aid Program since 2020, aiming to alleviate the burden of application and test fees for queer STEM scholars applying to graduate programs. Applicants from all countries are welcome. 3/5
09.10.2025 00:37
To make this program a grand success, and to ensure the most impact possible, please consider donating to support our cause at www.paypal.com/donate/?host... 2/5
09.10.2025 00:37
We are launching our Graduate School Application Financial Aid Program (www.queerinai.com/grad-app-aid) for 2025-2026. We'll give up to $750 per person to LGBTQIA+ STEM scholars applying to graduate programs. Apply at openreview.net/group?id=Que.... 1/5
09.10.2025 00:37
Queer in AI @ COLM 2025. Thursday, October 9, 5:30 to 10 pm Eastern Time. There is a QR code to sign up, which is linked in the post.
Attending COLM next week in Montreal? 🇨🇦 Join us on Thursday for a 2-part social! ✨ 5:30-6:30 at the conference venue and 7:00-10:00 offsite! Sign up here: forms.gle/oiMK3TLP8ZZc...
01.10.2025 14:40
There are a lot of programs that say they are open to anyone with a PhD but only accept faculty
30.09.2025 23:59
📣 Accepted to #AIES2025: What do the audio datasets powering generative audio models actually contain? (led by @willie-agnew.bsky.social)
Answer: Lots of old audio content that is mostly English, often biased, and of dubious copyright / permissioning status.
Paper: www.sauvik.me/papers/65/s...
This is never going to stop as long as these misinfo/propaganda giants exist.
18.09.2025 23:11
Super cool paper!
12.09.2025 14:51
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation". We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks. For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations. Then, we collect 13 million LLM annotations across plausible LLM configurations. These annotations feed into 1.4 million regressions testing the hypotheses. For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions. Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking -- incorrect conclusions due to annotation errors. Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models. Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.
Paper: arxiv.org/pdf/2509.08825
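Not the paper's code, but a minimal sketch of the failure mode described above: if an LLM annotator's error rate varies with the covariate a researcher is testing, a downstream regression can turn a true null effect into a "significant" finding. Everything here is made up for illustration; the covariate, the `llm_annotate` helper, and the flip rates are hypothetical stand-ins rather than anything from the paper.

```python
# Illustrative sketch only (assumptions: synthetic data, a simulated "LLM annotator"
# modeled as label noise). Shows how annotation errors that depend on the covariate
# can flip a null result to "significant" -- the LLM-hacking mechanism in spirit.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 2000

# Hypothetical covariate a researcher might test (e.g., an author-group indicator).
x = rng.integers(0, 2, size=n)

# Ground-truth labels: base rate 0.3, no true relationship with x (null effect).
y_true = rng.binomial(1, 0.3, size=n)

def llm_annotate(y, flip_rate_group0, flip_rate_group1):
    """Simulate one LLM configuration as label noise whose error rate
    may differ between the two covariate groups (systematic bias)."""
    flip_rate = np.where(x == 0, flip_rate_group0, flip_rate_group1)
    flips = rng.random(n) < flip_rate
    return np.where(flips, 1 - y, y)

# Config A: errors independent of x -- the null conclusion should usually survive.
y_config_a = llm_annotate(y_true, 0.10, 0.10)
# Config B: errors concentrated in one group -- typically manufactures an "effect".
y_config_b = llm_annotate(y_true, 0.05, 0.30)

for name, y in [("ground truth", y_true),
                ("LLM config A", y_config_a),
                ("LLM config B", y_config_b)]:
    res = linregress(x, y)  # linear probability model, enough for illustration
    verdict = "significant" if res.pvalue < 0.05 else "not significant"
    print(f"{name:>12}: slope={res.slope:+.3f}, p={res.pvalue:.4f} ({verdict})")
```

On most runs the ground-truth and config A regressions stay non-significant while config B crosses p < 0.05, which is the conclusion-flipping behavior the thread warns about.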
the secret to getting large orgs to not post pics of your keynote is to wear the keffiyeh on stage
09.09.2025 14:01
I considered writing a long, carefully constructed argument laying out the harms and limitations of AI, but instead I wrote about being a hater. Only humans can be haters.
27.08.2025 17:04
It is deeply selfish to settle this case, as surely most of the AI copyright lawsuits are going to be settled. The fact that the vast majority of lawsuits in this country are settled before tech giants face any real consequences is such a travesty. www.wired.com/story/anthro...
29.08.2025 00:10
www.forbes.com/sites/emilyb... "Along with Hikvision and Dahua, another sanctioned Chinese surveillance company, Uniview helped the Chinese government write standards for race-based surveillance in 2020."
27.08.2025 13:52