@chiennifer.bsky.social
{currently on the job market} Embedded Ethics Postdoc @ Stanford University. Researcher in RAI/Ethics and AI/ML through the lens of user agency. Prev. DeepMind, benshi.ai, UCSD, Wellesley '19. https://cseweb.ucsd.edu/~jjchien/

This is big news. www.ft.com/content/96b5...
05.09.2025 21:15 · 👍 46 🔁 19 💬 4 📌 4

here's some music to supplement your perusing #unknowingrabbit #ai #fooled
07.08.2025 15:33 · 👍 1 🔁 1 💬 0 📌 0

Do you feel like you can tell when something is AI or not? Here's an example that has fooled the masses: a prime example for measuring the cognitive effects of AI outputs on users, and for thinking about ways to easily differentiate AI-produced products.
Read more here:
dl.acm.org/doi/10.1145/...
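For a rough, hypothetical sense of what measuring this could look like: a minimal sketch that checks whether participants' AI-vs-human guesses beat chance. The item labels, guesses, and binomial-test framing are illustrative assumptions, not the linked paper's method.

```python
# Hypothetical sketch: can participants tell AI-generated items from
# human-made ones better than chance? Labels and guesses are invented.
from scipy.stats import binomtest

truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]    # 1 = AI-generated, 0 = human-made
guesses = [1, 0, 0, 1, 1, 0, 0, 0, 1, 1]  # one participant's guesses

hits = sum(g == t for g, t in zip(guesses, truth))
result = binomtest(hits, n=len(truth), p=0.5, alternative="greater")

# Accuracy near 0.5 with a large p-value suggests guessing at chance,
# i.e., the AI output is not reliably distinguishable from human work.
print(f"accuracy = {hits / len(truth):.2f}, p = {result.pvalue:.3f}")
```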
On the PayPal page, under "use this donation for," "Queer in AI + oSTEM Grad School Application Program" should be selected by default; if it's not, just select it and your donation will go to the scholarship.
30.06.2025 21:04 · 👍 0 🔁 0 💬 0 📌 0

I know that lots of very important causes are asking for funding right now, but I wanted to personally endorse this one. Please donate anything you can!
#Donate today: www.paypal.com/donate/?host...
@queerinai.com @ostem.org
A great opportunity for PhD students!
#trustworthyai #fellowship
cra.org/cra-and-micr...
It is always rewarding to chat with students about my research! So thankful for the opportunity ☺️
#GenAIUCSD25 #genai #research
I will be talking about this work (and some other in-progress work) at the #GenAISummit at UCSD CSE tomorrow!
genaisummit2025.ucsd.edu
We conclude with our proposal: latent-value modeling for determining trustworthiness. We differentiate between the user perspective and the LLM perspective on latent values. Our contribution serves as a reflexive tool and an opportunity for various stakeholders to refine their contributed values. (4/4)
06.02.2025 00:35 · 👍 0 🔁 0 💬 0 📌 0

We discuss extant approaches to value alignment, such as direct implementation of human values and behavioral approaches (e.g., RLHF), and the significant challenges they face in determining trust. (3/4)
06.02.2025 00:35 · 👍 2 🔁 0 💬 1 📌 0

Along the way, we discuss some alternative means of managing stochasticity (e.g., eliminating stochasticity or representing stochasticity to users) before proposing value alignment for trustworthiness. (2/4)
06.02.2025 00:35 · 👍 1 🔁 0 💬 1 📌 0

How do we think about trusting systems that exhibit stochasticity (*cough* LLMs)? In this work, we explicate the tension stochasticity poses with traditional black-box approaches to trust and propose causal modeling of latent values to directly determine trustworthiness.
arxiv.org/pdf/2501.16461
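To make "latent-value modeling" slightly more concrete, here is a toy sketch, not the paper's actual model: infer a system's latent value weights from its observed choices, then score trustworthiness as alignment between the inferred weights and the user's own values. Every option score, weight, and the grid-search inference below are hypothetical stand-ins.

```python
# Toy illustration of latent-value modeling (hypothetical, not the paper's code):
# recover the value weights that best explain observed choices, then compare
# them against the user's values as a crude trustworthiness score.
import numpy as np

# Each option scored on two latent value dimensions: (honesty, helpfulness).
options = np.array([[0.9, 0.2],   # blunt but honest answer
                    [0.4, 0.9],   # helpful but hedged answer
                    [0.1, 0.5]])  # evasive answer

def choice_probs(value_weights: np.ndarray) -> np.ndarray:
    """Softmax over the option utilities implied by a latent value vector."""
    utilities = options @ value_weights
    exp = np.exp(utilities - utilities.max())
    return exp / exp.sum()

# Observed frequencies of the LLM picking each option across many prompts.
observed = np.array([0.15, 0.70, 0.15])

# Grid search for the latent weights that best explain the observed behavior
# (a stand-in for proper posterior inference over latent values).
grid = np.linspace(0, 2, 41)
best, best_ll = None, -np.inf
for w_honesty in grid:
    for w_helpful in grid:
        probs = choice_probs(np.array([w_honesty, w_helpful]))
        ll = (observed * np.log(probs)).sum()  # cross-entropy fit to behavior
        if ll > best_ll:
            best, best_ll = np.array([w_honesty, w_helpful]), ll

user_values = np.array([1.0, 0.8])  # the user's own (elicited) latent weights
cosine = best @ user_values / (np.linalg.norm(best) * np.linalg.norm(user_values))
print(f"inferred LLM values = {best}, alignment score = {cosine:.2f}")
```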
In case you missed it, I am at the conference all week and on the job market, so happy to chat about all things research (e.g., RAI, ethics, user agency, content moderation, human-centered AI). #NeurIPS2024 @neuripsconf.bsky.social
11.12.2024 06:01 · 👍 1 🔁 0 💬 0 📌 0

Interested in hearing more? Stop by tomorrow and let's chat~ #contentmoderation #participatorydesign #dataannotation
10.12.2024 04:24 · 👍 1 🔁 0 💬 0 📌 0

In this work, we collab w/ lips.social to answer the following:
How is community moderation (CM) different from algorithmic moderation? What value is added through tagging content differently? What kinds of people/groups need to be involved in tagging vs moderating? How does CM limit expression?
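One hypothetical way to quantify such differences is to treat community and algorithmic decisions on the same posts as two annotators and compute chance-corrected agreement; the labels below are invented for illustration, not data from the collaboration.

```python
# Hypothetical sketch: where do community and algorithmic moderation diverge?
# Decisions on the same 10 posts: 0 = keep, 1 = tag/label, 2 = remove.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

community = [0, 1, 1, 0, 2, 1, 0, 1, 0, 2]
algorithmic = [0, 0, 1, 0, 2, 0, 0, 2, 0, 2]

kappa = cohen_kappa_score(community, algorithmic)
print("Cohen's kappa:", round(kappa, 2))        # chance-corrected agreement
print(confusion_matrix(community, algorithmic))  # which decisions diverge, and how
```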
Planning your #NeurIPS2024 itinerary?
Stop by the #QueerinAI poster session from 6:30-8pm PST tomorrow (Tuesday) to hear about some of my ongoing work: "Diversifying Data Annotation: Measuring Differences in Community-Driven Content Moderation"