In this brief, @willie-agnew.bsky.social (@hcii.cmu.edu) writes that AI chatbots pose significant risks when relied on for therapy & emotional support. He suggests policies that prevent chatbots from encouraging delusional thinking or forming relationships with users.
scholars.org/contribution...
Our research found that people often know full well they are not talking to a human, but still believe the chatbot is sentient, conscious, or has a personality. My testimony encouraged expanding these restrictions to match what our research has found. 3/3
While our understanding of how AI harms mental health is still emerging, I have been able to use some research I've done to inform this legislation. Lawmakers have correctly identified that AI chatbots should not be misrepresented as humans, as this can lead to excessive trust and attachment. 2/3
I testified Thursday to the Maryland Senate Finance Committee on SB827, which aims to mitigate privacy and mental health harms from chatbots. Bills like these are being considered in many states this year, which is a huge (and in academia, underappreciated imo) win. 1/3
Update: Coinbase is being SUED by the Maryland state AG, and this law exists to kill that lawsuit. Embarrassing and shameless
I was at the Maryland Finance Committee yesterday to testify on SB0827. Before that there was a bill to deregulate blockchain betting, and Coinbase framed allowing poorly regulated gambling as "opening Maryland to innovation". We really need to stop the narrative that every new tech is "innovative".
What is a "high-quality" or "aesthetic" image according to generative AI developers?
Happy to share that our investigation of the LAION-Aesthetics Predictor has been accepted at #FAccT2026! 🧵 (1/5)
Take a look at the preprint here: arxiv.org/abs/2601.09896
New paper from team @aial.ie! aial.ie/research/gpa...
The EU AI Act's Article 53(1)(d) obligates GPAI model providers to publicly provide a 'summary' of their model's training data. The team assessed published summaries along 6 dimensions & found that all big providers failed on all 6.
1/
to stay employed at the margins of academia are relentless and unforgiving. Hopefully soon I can find some stability or the peace to quit this field. 2/2
5/5 facct papers accepted, and also 2/2 chi posters (for some reason much more competitive than you might think). I'm pretty burnt out though: I haven't gotten a single phone call in two years of being on the job market, and the pressures to publish lots while also piecing together small grants 1/2
the pipeline of technology from the war on terror to surveilling and hurting protestors in the US should make it crystal clear that its only a matter of time until an AI model is deciding whether to shoot/arrest/pepper spray/etc you. 2/2
The split screen of AI companies capitulating to the department of war and the US starting a war because it can is surreal but unsurprising. Even if you're lucky enough to not be in a country where AI is being deployed for war now, 1/2
and that, my fellow academics, is one reason not to hire war criminals: so they can't show up years post (their) war arguing in favor of the scaffolding behind their own atrocities with the imprimatur and status of your university
Not yet but keep an eye on chi posters!
I've got a satirical paper on ethically aligning an AI used to kill people in the works but I'm worried I'm gonna get scooped by (mis)Anthropic
My current tell for AI written content is self-importance, excessive formatting, and being a yapper.
We have extended the submission deadline to February 20th! Authors will be invited to collaborate on a position paper on standards for LLM use, documentation, and validation in UX research. sites.google.com/andrew.cmu.e...
The Workshop on Developing Standards and Documentation For LLM Use in HCI Human Subjects Research aims to bring the HCI community together to develop standards, guidance, and documentation for the use of large language models (LLMs) as simulated research participants. 1/2
U.K. might lose a prime minister because a guy who worked for him knew another guy who hung out with Epstein. Meanwhile the U.S. opposition party is telling our President, who was Epstein's best friend, that his secret police should get better training so their public street murders look less messy.
I testified live on HB 2599, and so did @willie-agnew.bsky.social
I also sent in extensive written testimony -- thanks @wolvendamien.bsky.social @histoftech.bsky.social @emilymbender.bsky.social @anthropunk.bsky.social for all the feedback on this!
privacy.thenexus.today/hb-2599-hw/
This bill is similar to legislation in Nevada and Illinois, and with a few tweaks would provide a powerful tool to shield Washingtonians from harm! 2/2
Honored to have remotely testified at the WA House Committee on Health Care & Wellness hearing in favor of HB2599, which would place thoughtful restrictions on AI use in therapy and restrict AI from providing therapy. tvw.org/video/house-... 1/2
Something I do for fun is "inspect source" on AI startup websites, and I often find 2000+ lines of code for a site that is 2 pages of text, some hyperlinks, and a couple of logo placements. The website appears to work, but good luck understanding, modifying, or verifying anything about that mess.
3 more days until the submission deadline for our AI for Peace workshop @ ICLR 2026.
Check the details at the link below.
Looking forward to receiving your submissions! Let's make it a great workshop together and a place for meaningful discussion on this rarely touched but very important topic!
We invite authors to submit perspectives, extended abstracts, and position papers between one and four pages (excluding references) on LLM use in UX research. Authors will be invited to collaborate on a position paper on standards for LLM in UX research. sites.google.com/andrew.cmu.e... 2/2
The Workshop on Developing Standards and Documentation For LLM Use in HCI Human Subjects Research aims to bring the HCI community together to develop standards, guidance, and documentation for the use of large language models (LLMs) as simulated research participants. 1/2
You still have 8 days to submit your work to our AI For Peace workshop @ ICLR 2026! 📢📢📢
I've really been inspired by grassroots sousveillance of ICE. How can computer scientists support this more? There aren't often chances to use what our field is inherently so good at for liberatory purposes.
I worry about what it's like to have experienced addiction, delusions, or intense relationships with chatbots, successfully gotten out of it, and then to work somewhere that forces you to use them (or just to use Google to search). Like trying to be sober with an employer that's gone all in on the Ballmer curve.