
Willie Agnew

@willie-agnew.bsky.social

Queer in AI πŸ³οΈβ€πŸŒˆ | postdoc at cmu HCII | ostem |william-agnew.com | views my own | he/they

646 Followers  |  804 Following  |  90 Posts  |  Joined: 09.11.2024

Latest posts by willie-agnew.bsky.social on Bluesky

One day left to apply!

03.11.2025 16:15 β€” πŸ‘ 2    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0
AI AS THERAPIST

We're all part of a giant, non-consensual experiment (how to make money with LLMs) that's causing untold harm. These things are incredibly manipulative and addictive. www.wired.com/visual-story...

31.10.2025 01:35 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

"I made this policymaker aware of this problem, and that later showed up in something they wrote"--it seems really hard to show a causal relationship. Curious what others have done (or maybe y'all have gotten your names on bills) 2/2

29.10.2025 22:40 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I've been working on my research statements and it's been quite challenging showing tangible impacts of my policy work. Like, I know what they are, but it's usually "I got these lines in an NDA'd draft changed" 1/

29.10.2025 22:40 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
AI Surrogates and illusions of generalizability in cognitive science Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new know…

Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how β€œAI Surrogates” entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/

21.10.2025 20:24 β€” πŸ‘ 274    πŸ” 115    πŸ’¬ 8    πŸ“Œ 24

This is an especially worrying statement when facial recognition can be inaccurate, biased, and generally make mistakes. Basing whether someone should be in the US on facial recognition, while ignoring physical documents like a birth certificate, is ripe for disaster bsky.app/profile/jose...

29.10.2025 16:18 β€” πŸ‘ 2514    πŸ” 876    πŸ’¬ 54    πŸ“Œ 32
QueerInAI 2024 Grad Mentor Welcome to the OpenReview homepage for QueerInAI 2024 Grad Mentor

Queer in AI and oSTEM are launching our 2025 Grad School Application Mentorship program! Queer graduate school applicants, you can apply at openreview.net/group?id=Que... to get feedback on your application materials (e.g., CV, personal statement). More info @ www.queerinai.com/grad-app-aid 1/3

13.10.2025 00:35 β€” πŸ‘ 3    πŸ” 2    πŸ’¬ 2    πŸ“Œ 4

One week left to apply!

28.10.2025 18:05 β€” πŸ‘ 1    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

Applications close November 4th!

24.10.2025 18:24 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Queer in AI @ NeurIPS 2025: Social Sign-In Survey Please note that most fields below are optional. Also, this form is exclusively for the events held in San Diego (Not for EurIPS or NeurIPS Mexico City) Please note only the organizers have access to...

NeurIPS 2025 is fast approaching, and preparations are underway!! We are excited to have an incredible program this year, and look forward to having your company to make it even more memorable! πŸ³οΈβ€πŸŒˆπŸ³οΈβ€βš§οΈ

Please fill out this form to sign up for our events!πŸƒβ€β™€οΈ
forms.gle/RY9AyZ4JHiqW...

21.10.2025 17:28 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Two spuddies will be at #CSCW2025:

@yuxiwu.com will present our work on designing citizen harm reporting interfaces for privacy (and is on the job market!).

Isadora Krsek will present our work on user reactions to AI-identified self-disclosure risks online.

20.10.2025 13:49 β€” πŸ‘ 7    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

If you know of any openings please send them my way!

16.10.2025 23:19 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I'm on the job market looking for CS/iSchool faculty and related positions! I'm broadly interested in doing research with policymakers and communities impacted by AI to inform and develop mitigations for AI harms and risks. If you've included any of my work in syllabi or policy docs, please let me know!

16.10.2025 23:19 β€” πŸ‘ 7    πŸ” 6    πŸ’¬ 2    πŸ“Œ 0
Cella M. Sum

✨I’m on the academic job market ✨

I’m a PhD candidate at @hcii.cmu.edu studying tech, labor, and resistance πŸ‘©πŸ»β€πŸ’»πŸ’ͺ🏽πŸ’₯

I research how workers and communities contest harmful sociotechnical systems and shape alternative futures through everyday resistance and collective action

More info: cella.io

09.10.2025 14:39 β€” πŸ‘ 60    πŸ” 31    πŸ’¬ 3    πŸ“Œ 4

We’re grateful to have been able to help these scientists, engineers, and future medical professionals on their journeys, and want to help more! Please share this widely with your colleagues and networks to help us get this aid to those who need it. 5/5

09.10.2025 00:37 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Our Application Financial Aid Program provided over $250,000 to more than 250 LGBTQIA+ scholars from over 30 countries, allowing them to apply to grad and medical schools in the first place, apply to more schools, and keep paying for rent, groceries, and other essentials. 4/5

09.10.2025 00:37 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Applying to graduate schools is expensive. Queer in AI and oSTEM have been running the Financial Aid Program since 2020, aiming to alleviate the burden of application and test fees for queer STEM scholars applying to graduate programs. Applicants from all countries are welcome. 3/5

09.10.2025 00:37 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Donate to oSTEM Incorporated Help support oSTEM Incorporated by donating or sharing with your friends.

To make this program a grand success, and to ensure the most impact possible, please consider donating to support our cause at www.paypal.com/donate/?host... 2/5

09.10.2025 00:37 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Grad App Aid β€” Queer in AI

We are launching our Graduate School Application Financial Aid Program (www.queerinai.com/grad-app-aid) for 2025-2026. We’ll give up to $750 per person to LGBTQIA+ STEM scholars applying to graduate programs. Apply at openreview.net/group?id=Que.... 1/5

09.10.2025 00:37 β€” πŸ‘ 7    πŸ” 9    πŸ’¬ 1    πŸ“Œ 0
Queer in AI @ COLM 2025. Thursday, October 9 5:30 to 10 pm Eastern Time. There is a QR code to sign up which is linked in the post.

Attending COLM next week in Montreal? πŸ‡¨πŸ‡¦ Join us on Thursday for a 2-part social! ✨ 5:30-6:30 at the conference venue and 7:00-10:00 offsite! 🌈 Sign up here: forms.gle/oiMK3TLP8ZZc...

01.10.2025 14:40 β€” πŸ‘ 4    πŸ” 4    πŸ’¬ 0    πŸ“Œ 0

There are a lot of programs that say they are open to anyone with a PhD but only accept faculty πŸ˜‘

30.09.2025 23:59 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ“£ Accepted to #AIES2025: What do the audio datasets powering generative audio models actually contain? (led by @willie-agnew.bsky.social)

Answer: Lots of old audio content that is mostly English, often biased, and of dubious copyright / permissioning status.

Paper: www.sauvik.me/papers/65/s...

27.09.2025 21:05 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

This is never going to stop as long as these misinfo/propaganda giants exist.

18.09.2025 23:11 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Super cool paper!

12.09.2025 14:51 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses.
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking: incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.

🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825

12.09.2025 10:33 β€” πŸ‘ 268    πŸ” 96    πŸ’¬ 6    πŸ“Œ 21
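The mechanism behind that result is easy to make concrete: if an LLM annotator's errors correlate with a predictor of interest, a regression on its labels can look significant even when the true effect is null. Below is a minimal, hypothetical simulation of that failure mode; it is my own sketch, not the paper's code, and the 15% error rate, the biased-annotator model, and the use of numpy/statsmodels are all assumptions.

```python
# Hypothetical sketch of "LLM hacking": annotation errors that correlate
# with a predictor turn a true null effect into a spurious significant one.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

x = rng.normal(size=n)                 # predictor (e.g., post length, standardized)
y_true = rng.binomial(1, 0.5, size=n)  # ground-truth label, independent of x

def p_value_on_x(y):
    """p-value for x in a linear probability model of y on x."""
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    return fit.pvalues[1]

# Ground truth: no effect, so p is typically well above 0.05.
print(f"ground truth:  p = {p_value_on_x(y_true):.3f}")

# Simulated "LLM annotator": 85% accurate overall, but its errors skew
# positive on high-x items (a plausible model- or prompt-dependent bias).
flip = rng.random(n) < 0.15
y_llm = np.where(flip, (x > 0).astype(int), y_true)

# The same regression on the biased annotations now looks significant.
print(f"LLM annotated: p = {p_value_on_x(y_llm):.2e}")
```

In the preprint's setting the bias is not injected deliberately; it emerges from choices of model, prompt, and post-processing, which is why merely rerunning an analysis under a different configuration can flip its conclusion.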

the secret to getting large orgs to not post pics of your keynote is to wear the keffiyeh on stage

09.09.2025 14:01 β€” πŸ‘ 85    πŸ” 7    πŸ’¬ 0    πŸ“Œ 0
I Am An AI Hater I am an AI hater. This is considered rude, but I do not care, because I am a hater.

I considered writing a long carefully constructed argument laying out the harms and limitations of AI, but instead I wrote about being a hater. Only humans can be haters.

27.08.2025 17:04 β€” πŸ‘ 3638    πŸ” 1348    πŸ’¬ 130    πŸ“Œ 365
Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors Anthropic faced the prospect of more than $1 trillion in damages, a sum that could have threatened the company’s survival if the case went to trial.

It is deeply selfish to settle this case, as surely most of the AI copyright lawsuits are going to be settled. The fact that the vast majority of lawsuits in this country are settled before tech giants face any real consequences is such a travesty www.wired.com/story/anthro...

29.08.2025 00:10 β€” πŸ‘ 377    πŸ” 96    πŸ’¬ 6    πŸ“Œ 9
Intel Worked With Chinese Firms Sanctioned For Enabling Human Rights Abuses As the U.S. government takes a 10% stake in Intel, Forbes has learned the tech company partnered with sanctioned Chinese surveillance firms Uniview, Hikvision and Cloudwalk.

www.forbes.com/sites/emilyb... "Along with Hikvision and Dahua, another sanctioned Chinese surveillance company, Uniview helped the Chinese government write standards for race-based surveillance in 2020."

27.08.2025 13:52 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
