Vera Liao

@qveraliao.bsky.social

Researcher@MSR, incoming Associate Prof@UMich. Studying human-AI interaction

528 Followers 164 Following 7 Posts Joined Nov 2024
9 months ago

♦️ Our next #AI & #Society Salon is coming soon 🎙️ Join us on 11 June at 17:00 CET for a Salon with Marco Donnarumma, performance artist and researcher.

We will discuss human body, tech and power.

Register: www.eventbrite.com/e/regaining-...

9 months ago

Thanks for coming and sharing 😀

9 months ago

Wonderful talk by @qveraliao.bsky.social on bridging the socio-technical gap in AI.

10 months ago

Tomorrow (Wednesday) I am presenting my TOCHI Microsoft work “Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions” at ~11:55am in G301 :) should be fun 🥳

work done w/ @jennwv.bsky.social @qveraliao.bsky.social @adamfourney.bsky.social @gaganbansal.bsky.social 🤩

10 months ago

Happy to see this out at #CHI2025! Another effort to push for a more central role for designers in the development of LLM-powered applications through designerly adaptation, enabling mutual shaping of UX design and LLM adaptation (prompting). And we made a Figma widget for it👇

10 months ago

Mon April 28: I'll be presenting our 🏅 paper on fostering appropriate reliance on LLMs (w/ @jennwv.bsky.social, @qveraliao.bsky.social, @tanialombrozo.bsky.social, Olga Russakovsky) in the 4:20-5:50pm paper session (G303)

🧵 bsky.app/profile/sunn...
📌 programs.sigchi.org/chi/2025/pro...

11 months ago

Congratulations!!

1 year ago
CHI’25 Preprint Collection Looking for current research on HCI + AI? Here’s a list.

📢 Looking for current research on #HCI + #AI? Here's a collection of 200+ #CHI2025 preprints, collected via arXiv and your suggestions: medium.com/human-center...

1 year ago
Overreliance on AI: Risk Identification and Mitigation Framework This article describes a framework that helps product teams identify, assess, and mitigate overreliance risk in AI products.

NEW from my team: a framework that walks AI product teams step-by-step through understanding and mitigating the risk of overreliance on AI. This happens when ppl accept incorrect AI outputs, b/c we …

learn.microsoft.com/en-us/ai/pla...

1 year ago

Another very cool work led by the very cool @sunniesuhyoung.bsky.social, coming out at #CHI2025. Check it out 👇

1 year ago
Flyer of the salon "Regaining Power over AI", 4th episode, with Caroline Sinders

Our next #AI & #Society Salon is coming soon 🎙️ Join us on 18 February at 18:00 CET for a Salon with @carolinesinders.bsky.social, artist and researcher.

Register here: www.eventbrite.com/e/regaining-...

1 year ago

As #AI gains a growing space in creation and art, how are the public discourses on AI in the arts shaping creative work?

That's what we investigate in a new paper with @katecrawford.bsky.social, @qveraliao.bsky.social, Gonzalo Ramos and Jenny Williams: arxiv.org/abs/2502.03940

1 year ago

Bumping this up 🔉 If interested in interning with me or my colleagues, apply by Friday, Jan 10 for full consideration! We are especially looking for candidates interested in responsible and ethical AI considerations related to human agency, human control, anthropomorphic AI systems, and measurement.

1 year ago
The image includes a shortened call for participation that reads: 
"We welcome participants who work on topics related to supporting human-centered evaluation and auditing of language models. Topics of interest include, but are not limited to:
- Empirical understanding of stakeholders' needs and goals of LLM evaluation and auditing
- Human-centered evaluation and auditing methods for LLMs
- Tools, processes, and guidelines for LLM evaluation and auditing
- Discussion of regulatory measures and public policies for LLM auditing
- Ethics in LLM evaluation and auditing

Special Theme: Mind the Context. We invite authors to engage with specific contexts in LLM evaluation and auditing. This theme could involve various topics: the usage contexts of LLMs, the context of the evaluation/auditing itself, and more! The term "context" is purposefully left open for interpretation!"

The image also includes pictures of workshop organizers, who are: Yu Lu Liu, Wesley Hanwen Deng, Michelle S. Lam, Motahhare Eslami, Juho Kim, Q. Vera Liao, Wei Xu, Jekaterina Novikova, and Ziang Xiao.

The Human-centered Evaluation and Auditing of Language Models (HEAL) workshop is back for #CHI2025, with this year's special theme: “Mind the Context”! Come join us on this bridge between #HCI and #NLProc!

Workshop submission deadline: Feb 17 AoE
More info at heal-workshop.github.io.

1 year ago
Salon with Linda Dounia Rebeiz. On 4 December 2024 we spoke with Linda Dounia Rebeiz about AI, archiving practice, and agency. Linda Dounia is an artist, designer, and writer interested in the philo...

✨ Our AI & Society Salon with artist @lindadounia.bsky.social is now online: regainingpoweroverai.org/docs/salons/...

Previous episodes: regainingpoweroverai.org/docs/salons/

Salons organised w/ Jenny Williams, Gonzalo Ramos, @qveraliao.bsky.social, @katecrawford.bsky.social

1 year ago

It is that time of year again: we are looking for summer 2025 interns at FATE Montreal. Apply!

1 year ago
Human-Centered Eval@EMNLP24

Had a lot of fun teaching a tutorial on Human-Centered Evaluation of Language Technologies at #EMNLP2024, w/ @ziangxiao.bsky.social, Su Lin Blodgett, and Jackie Cheung

We just posted the slides on our tutorial website: human-centered-eval.github.io

1 year ago

Join us for another Regaining Power over AI Salon with Linda Dounia Rebeiz on December 4 👇

1 year ago

I’m putting together a starter pack for researchers working on human-centered AI evaluation. Reply or DM me if you’d like to be added, or if you have suggestions! Thank you!

(It looks NLP-centric at the moment, but that’s due to the current limits of my own knowledge 🙈)

go.bsky.app/G3w9LpE
