More reports of Gmail turning over email content to ICE are coming out now.
techcrunch.com/2026/02/10/g...
Who gets to define what the story is with AI? Abeba Birhane, Louise Amoore, and John Thornhill are talking about how narratives of inevitability and FOMO dominate, setting up any opposition to these narratives as anti-tech. Excited to hear about counter-narratives!
here’s some music to supplement your perusing #unknowingrabbit #ai #fooled
Do you feel like you can tell when something is AI or not? Here's an example that has fooled the masses, and a prime case for measuring the cognitive effects of AI outputs on users and for thinking about ways to easily differentiate AI-produced products.
Read more here:
dl.acm.org/doi/10.1145/...
On the PayPal page under "use this donation for," "Queer in AI + oSTEM Grad School Application Program" should be selected by default, but if it's not, just select it and your donation will go to the scholarship.
I know that lots of very important causes are asking for funding right now, but I wanted to personally endorse this one. Please donate anything you can!
#Donate today: www.paypal.com/donate/?host...
@queerinai.com @ostem.org
A great opportunity for PhD students!
#trustworthyai #fellowship
cra.org/cra-and-micr...
It is always rewarding to chat with students about my research! So thankful for the opportunity ☺️
#GenAIUCSD25 #genai #research
I will be talking about this work (and some other in progress work) at the #GenAISummit at UCSD CSE tomorrow!
genaisummit2025.ucsd.edu
We conclude with our proposal: latent-value modeling for determining trustworthiness. We differentiate between the user perspective and the LLM perspective on latent values. Our contribution serves as a reflexive tool and an opportunity for various stakeholders to refine their contributed values. (4/4)
We discuss extant approaches to value alignment, such as direct implementation of human values and behavioral approaches (e.g., RLHF), and the significant challenges they face in determining trust. (3/4)
Along the way, we discuss some alternative means of managing stochasticity (e.g., eliminating stochasticity or representing stochasticity to users), before proposing value alignment for trustworthiness. (2/4)
📄 How do we think about trusting systems that exhibit stochasticity (*cough* LLMs)? In this work, we explicate the tension stochasticity poses for traditional black-box approaches to trust and propose causal modeling of latent values to directly determine trustworthiness. (1/4)
arxiv.org/pdf/2501.16461
In case you missed it, I am at the conference all week and on the job market, so I'm happy to chat about all things research (e.g., RAI, ethics, user agency, content moderation, human-centered AI) #NeurIPS2024 @neuripsconf.bsky.social
Interested in hearing more? Stop by tomorrow and let's chat~ #contentmoderation #participatorydesign #dataannotation
In this work, we collab w/ lips.social to answer the following:
How is community moderation (CM) different from algorithmic moderation? What value is added through tagging content differently? What kinds of people/groups need to be involved in tagging vs moderating? How does CM limit expression?
Planning your #NeurIPS2024 itinerary?
Stop by the #QueerinAI poster session from 6:30-8pm PST tomorrow (Tuesday) to hear about some of my ongoing work: "Diversifying Data Annotation: Measuring Differences in Community-Driven Content Moderation"