Jennifer Chien, PhD

@chiennifer.bsky.social

{currently on the job market} Embedded Ethics Postdoc @ Stanford University · Researcher in RAI/Ethics and AI/ML through the lens of user agency · Prev. DeepMind, benshi.ai, UCSD, Wellesley '19 · https://cseweb.ucsd.edu/~jjchien/

63 Followers · 91 Following · 16 Posts · Joined Dec 2024
1 month ago
Google sent personal and financial information of student journalist to ICE | TechCrunch The tech giant handed over the personal information of a journalist and student who attended a pro-Palestinian protest in 2024. This is the latest example of ICE using its controversial subpoena power...

More reports of Gmail turning over email content to ICE are coming out now.
techcrunch.com/2026/02/10/g...

1 month ago

Who gets to define what the story is with AI? Abeba Birhane, Louise Amoore, and John Thornhill are discussing how narratives of inevitability and FOMO frame any opposition to this narrative as anti-tech. Excited to hear about counter-narratives!

6 months ago
AI start-up Anthropic settles landmark copyright suit for $1.5bn Case will compensate authors but could raise costs of training large language models

This is big news. www.ft.com/content/96b5...

7 months ago

Here’s some music to supplement your perusing #unknowingrabbit #ai #fooled

7 months ago

Do you feel like you can tell when something is AI or not? Here’s an example that has fooled the masses: a prime example for measuring the cognitive effects of AI outputs on users, and for thinking about ways to easily differentiate AI-produced content.

Read more here:
dl.acm.org/doi/10.1145/...

8 months ago

On the PayPal page, under "use this donation for," the option "Queer in AI + oSTEM Grad School Application Program" should be selected by default. If it's not, just select it and your donation will go to the scholarship.

8 months ago

I know that lots of very important causes are asking for funding right now but I wanted to personally endorse this one, please donate anything you can!

#Donate today: www.paypal.com/donate/?host...

@queerinai.com @ostem.org

10 months ago

A great opportunity for PhD students!
#trustworthyai #fellowship
cra.org/cra-and-micr...

1 year ago

It is always rewarding to chat with students about my research! So thankful for the opportunity ☺️
#GenAIUCSD25 #genai #research

1 year ago

I will be talking about this work (and some other in-progress work) at the #GenAISummit at UCSD CSE tomorrow!

genaisummit2025.ucsd.edu

1 year ago

We conclude with our proposal: latent-value modeling for determining trustworthiness. We differentiate between the user perspective and the LLM perspective on latent values. Our contribution serves as a reflexive tool and an opportunity for various stakeholders to refine their contributed values.
(4/4)

1 year ago

We discuss extant approaches to value alignment, such as direct implementation of human values and behavioral approaches (e.g., RLHF), and the significant challenges they face in determining trust. (3/4)

1 year ago

Along the way, we discuss some alternative means of managing stochasticity (e.g., eliminating stochasticity or representing it to users), before proposing value alignment for trustworthiness.
(2/4)

1 year ago

📄 How do we think about trusting systems that exhibit stochasticity (*cough* LLMs)? In this work, we explicate the tension stochasticity poses for traditional black-box approaches to trust and propose causal modeling of latent values to directly determine trustworthiness.

arxiv.org/pdf/2501.16461

1 year ago

In case you missed it, I am at the conference all week and on the job market, so I'm happy to chat about all things research (e.g., RAI, ethics, user agency, content moderation, human-centered AI) #NeurIPS2024 @neuripsconf.bsky.social

1 year ago

Interested in hearing more? Stop by tomorrow and let's chat~ #contentmoderation #participatorydesign #dataannotation

1 year ago
Lips: Share your art, essays, poetry (anything!) on Lips without biased censorship or trolls. Create your free account to be a part of an Open & Honest online experience. Built especially for women and LGBT...

In this work, we collab w/ lips.social to answer the following: How is community moderation (CM) different from algorithmic moderation? What value is added by tagging content differently? What kinds of people/groups need to be involved in tagging vs. moderating? How does CM limit expression?

1 year ago

Planning your #NeurIPS2024 itinerary?
Stop by the #QueerinAI poster session from 6:30-8pm PST tomorrow (Tuesday) to hear about some of my ongoing work: "Diversifying Data Annotation: Measuring Differences in Community-Driven Content Moderation"
