Willie Agnew

@willie-agnew.bsky.social

Queer in AI πŸ³οΈβ€πŸŒˆ | postdoc at cmu HCII | ostem |william-agnew.com | views my own | he/they

714 Followers 839 Following 150 Posts Joined Nov 2024
2 days ago
Policy brief graphic from Scholars Strategy Network featuring a quote by William Agnew of Carnegie Mellon University: "There is an urgent need for policy on AI and mental health to mitigate harms without stifling innovation." Background shows a person typing on a laptop.

In this brief, @willie-agnew.bsky.social (@hcii.cmu.edu) writes that AI chatbots pose significant risks when relied on for therapy & emotional support. He suggests policies that prevent chatbots from encouraging delusional thinking or forming relationships with users.

πŸ”— scholars.org/contribution...

3 1 0 0
15 hours ago

Our research found that people often know full well they are not talking to a human, but still believe the chatbot is sentient, conscious, or has a personality. My testimony encouraged expanding these restrictions to match what our research has found. 3/3

0 0 0 0
15 hours ago

While our understanding of how AI harms mental health is still emerging, I have been able to use some research I've done to inform this legislation. Lawmakers have correctly identified that AI chatbots should not be misrepresented as humans, as this can lead to excessive trust and attachment. 2/3

1 0 1 0
15 hours ago

I testified Thursday before the Maryland Senate Finance Committee on SB827, which aims to mitigate privacy and mental health harms from chatbots. Bills like these are being considered in many states this year, which is a huge (and in academia, underappreciated imo) win. 1/3

4 0 1 0
1 day ago

Update: Coinbase is being SUED by the Maryland state AG, and this law exists to kill that lawsuit. Embarrassing and shameless

0 0 0 0
1 day ago

I was at the Maryland Finance Committee yesterday to testify on SB0827. Before that there was a bill to deregulate blockchain betting, and Coinbase framed allowing poorly regulated gambling as "opening Maryland to innovation". We really need to stop the narrative that every new tech is "innovative".

0 1 1 0
4 days ago
Screenshot of an academic paper titled "The Algorithmic Gaze of Image Quality Assessment: An Audit and Trace Ethnography of the LAION-Aesthetics Predictor" authored by Jordan Taylor, William Agnew, Maarten Sap, Sarah E. Fox, and Haiyi Zhu

πŸŽ¨πŸ’» What is a β€œhigh-quality” or β€œaesthetic” image according to generative AI developers?

Happy to share that our investigation of the LAION-Aesthetics Predictor has been accepted at #FAccT2026! 🧡 (1/5)

Take a look at a preprint here: arxiv.org/abs/2601.09896

22 5 1 0
1 week ago
Post image

New paper from team @aial.ie! aial.ie/research/gpa...

The EU AI Act's Article 53(1)(d) obliges GPAI model providers to publicly provide a 'summary' of their model’s training data. The team assessed published summaries along 6 dimensions & found that all big providers failed on all 6.

1/

127 74 2 3
1 week ago

to stay employed at the margins of academia are relentless and unforgiving. Hopefully soon I can find some stability or the peace to quit this field. 2/2

5 0 2 0
1 week ago

5/5 FAccT papers accepted, and also 2/2 CHI posters (for some reason much more competitive than you might think). I'm pretty burnt out though, I haven't gotten a single phone call in two years of being on the job market, and the pressures to publish lots while also piecing together small grants 1/2

14 1 1 2
1 week ago

The pipeline of technology from the war on terror to surveilling and hurting protestors in the US should make it crystal clear that it's only a matter of time until an AI model is deciding whether to shoot/arrest/pepper spray/etc you. 2/2

2 0 0 0
1 week ago

The splitscreen of AI companies capitulating to the department of war and the US starting a war because it can is surreal but unsurprising. Even if you're lucky enough to not be in a country where AI is being deployed for war now, 1/2

3 0 1 0
2 weeks ago

and that, my fellow academics, is one reason not to hire war criminals: so they can't show up years post (their) war arguing in favor of the scaffolding behind their own atrocities with the imprimatur and status of your university

2,474 494 46 6
3 weeks ago

Not yet but keep an eye on chi posters!

1 0 1 0
3 weeks ago

I've got a satirical paper on ethically aligning an AI used to kill people in the works but I'm worried I'm gonna get scooped by (mis)Anthropic

2 0 1 0
3 weeks ago

My current tell for AI written content is self-importance, excessive formatting, and being a yapper.

2 0 0 0
1 month ago
CHI'26 Workshop on Developing Standards and Documentation For LLM Use as Simulated Research Participants Workshop Motivation

We have extended the submission deadline to February 20th! Authors will be invited to collaborate on a position paper on standards for LLM use in UX research, documentation, and validation. sites.google.com/andrew.cmu.e...

0 0 0 0
1 month ago

The Workshop on Developing Standards and Documentation For LLM Use in HCI Human Subjects Research aims to bring the HCI community together to develop standards, guidance, and documentation for the use of large language models (LLMs) as simulated research participants. 1/2

2 1 1 0
1 month ago

U.K. might lose a prime minister because a guy who worked for him knew another guy who hung out with Epstein. Meanwhile the U.S. opposition party is telling our President, who was Epstein's best friend, that his secret police should get better training so their public street murders look less messy.

32,621 9,746 528 350
1 month ago
Preview
HB 2599 (AI in therapy) oral and written testimony Oral testimony Chair Bronoske, Ranking Member Schmick, members of the committee, I'm Jon Pincus of Bellevue. I run the Nexus of Privacy newsletter, served on the state Automated Decision Systems Wor...

I testified live on HB 2599, and so did @willie-agnew.bsky.social

I also sent in extensive written testimony -- thanks @wolvendamien.bsky.social @histoftech.bsky.social @emilymbender.bsky.social @anthropunk.bsky.social for all the feedback on this!

privacy.thenexus.today/hb-2599-hw/

11 7 4 0
1 month ago

This bill is similar to legislation in Nevada and Illinois, and with a few tweaks would provide a powerful tool to shield Washingtonians from harm! 2/2

1 0 0 0
1 month ago
Preview
House Health Care & Wellness - TVW Public Hearing:

Honored to have remotely testified at the WA House Committee on Health Care & Wellness hearing in favor of HB2599, which would place thoughtful restrictions on AI use in therapy and bar AI from providing therapy itself. tvw.org/video/house-... 1/2

5 0 1 0
1 month ago

Something I do for fun is "inspect source" on AI startup websites, and I often find 2000+ lines of code for a site that is 2 pages of text, some hyperlinks, and a couple of logo placements. The website appears to work, but good luck understanding, modifying, or verifying anything about that mess.

4 1 0 0
1 month ago

3 more days until the submission deadline for our AI for Peace workshop @ ICLR 2026.

Check the details at the link below.

Looking forward to receiving your submissions! Let's make it a great workshop together and a place for meaningful discussion on this rarely touched but very important topic!

12 5 0 0
1 month ago
CHI W Workshop Motivation

We invite authors to submit perspectives, extended abstracts, and position papers between one and four pages (excluding references) on LLM use in UX research. Authors will be invited to collaborate on a position paper on standards for LLM use in UX research. sites.google.com/andrew.cmu.e... 2/2

0 1 0 0
1 month ago
CHI W Workshop Motivation

The Workshop on Developing Standards and Documentation For LLM Use in HCI Human Subjects Research aims to bring the HCI community together to develop standards, guidance, and documentation for the use of large language models (LLMs) as simulated research participants. 1/2

5 0 1 0
1 month ago

You still have 8 days to submit your work at our AI For Peace workshop @ ICLR 2026! πŸ“’πŸ“’πŸ“’

6 2 0 1
1 month ago

I've really been inspired by grassroots sousveillance of ICE. How can computer scientists support this more? There aren't often chances to use what our field is inherently so good at for liberatory purposes.

2 1 1 0
1 month ago
Preview
Microsoft Makes AI Mandatory For Employees: What It Means For Your Career We explore what that means for future careers & why thriving in the age of intelligent machines will require more than technical know-how; it necessitates the human edge.

Yes www.forbes.com/sites/bernar...

0 0 1 0
1 month ago

I worry about what it's like to have experienced addiction, delusions, or intense relationships with chatbots, successfully get out of it, then work somewhere that forces you to use them (or even just to use Google search). Like trying to stay sober with an employer that's gone all in on the Ballmer curve.

4 0 2 0