Princeton HCI at UIST 2025. Three featured works:
Paper "Reality Promises": virtual-physical decoupling illusions with invisible robots (Wed, 11:24 AM, Sydney room; best paper award).
Paper "Capybara": block-based programming in AR and GenAI-assisted creation (Tue, 4:42 PM, Miami room).
Poster "Ghost Objects": real-world lasso and co-located virtual-twin manipulation for robot instruction (Tue, 6:30 PM, Ballroom Lobby).
Princeton HCI recruitment session for PhD and Postdoc applicants (Wed, 1:30 PM, Paradise Hotel Garden).
28.09.2025 13:28 — 👍 2 🔁 0 💬 0 📌 0
Thrilled that Reality Promises received a best paper award at #UIST2025.
Come see Mo Kari’s talk on the last day of the conference!
📍 Wed at 11:00 AM in the Sydney room
27.09.2025 07:29 — 👍 0 🔁 0 💬 0 📌 0
Makeability Lab - How to Figures (makeabilitylab.cs.uw.edu)
With the CHI deadline fast approaching, I'm resharing our lab's resource on making figures for HCI papers: docs.google.com/presentation...
New content suggestions always appreciated. Don't be shy to promote your own work!
02.09.2025 20:10 — 👍 12 🔁 5 💬 1 📌 0
Researchers made a robot that can make deliveries to people in VR. They call it Skynet.
Details here: www.uploadvr.com/invisible-mo...
23.08.2025 05:36 — 👍 13 🔁 5 💬 2 📌 3
“Reality Promises: Virtual-Physical Decoupling Illusions in Mixed Reality via Invisible Mobile Robots”
Paper: hci.princeton.edu/wp-content/u...
Full Video: youtu.be/SdDXvIB79j0
Project Page: mkari.de/reality-prom...
See you in Busan! 🇰🇷
#HCI #HRI
21.08.2025 13:45 — 👍 2 🔁 0 💬 0 📌 0
In #AR, using real-time on-device 3D Gaussian splatting, we create the illusion that physical changes occur instantaneously, while a hidden robot fulfills the “reality promise” moments later, updating the physical world to match what users already perceive visually. 🤖
21.08.2025 13:45 — 👍 3 🔁 2 💬 1 📌 0
Even virtual agents’ actions can have physical effects, with motion paths that divert attention from the hidden robot. 🐝
21.08.2025 13:45 — 👍 0 🔁 0 💬 1 📌 0
Beyond materializing physical objects (seemingly out of thin air), users can manipulate out-of-reach objects via RealityGoGo — creating the illusion of telekinesis. 🪴
21.08.2025 13:45 — 👍 0 🔁 0 💬 1 📌 0
In #VR, users can experience “magical” interactions, such as moving distant virtual objects with the Go-Go technique. How might we similarly extend people’s abilities in the physical world? 🪄
Excited to share Reality Promises, our #UIST2025 paper, led by the amazing Mo Kari ✨
21.08.2025 13:45 — 👍 2 🔁 1 💬 1 📌 1
Check out Lauren Wang’s #UIST2025 poster on GhostObjects: life-size, world-aligned virtual twins for fast and precise robot instruction, with real-world lasso selection, multi-object manipulation, and snap-to-default placement.
This is the first piece in her ongoing work on #AR for #HRI 🤖👓
19.08.2025 16:11 — 👍 0 🔁 1 💬 0 📌 0
Poster for the Cognitive Tools Lab at CogSci 2025, scheduled for Thursday, July 31. The poster is titled “Using gesture and language to establish multimodal conventions in collaborative physical tasks.” It features an image of a hand pointing to a 2×2 grid, with an arrow indicating movement from the bottom-left square to the top-left square. A quote reads, “... the green block pointing this way,” and the gesture is labeled “Complementary position & orientation.” Headshots of the four authors, Maeda, Tsai, Fan, and Abtahi, appear at the bottom. The session is listed as Poster Session 1 at 1:00 pm.
📢 Find Judy Fan (@judithfan.bsky.social) at #CogSci2025 during Poster Session 1 (⏰Tomorrow, 1–2:15 PM | 📍Salon 8) to learn about our work on understanding multimodal communication and how people form linguistic and gestural abstractions in collaborative physical tasks.
30.07.2025 22:33 — 👍 8 🔁 1 💬 0 📌 0
Sunnie standing in front of her presentation celebrating the successful defense 🎉
Vera, Andrés, Sunnie, Olga, and Jenn (on Sunnie’s laptop screen) celebrating
Group photo of everyone who joined Sunnie’s dissertation defense
Lauren, Sunnie, and Jeff (photo taken at CHI 2025)
📢 I successfully defended my PhD dissertation! Huge thanks to my committee (Olga @andresmh.com @jennwv.bsky.social @qveraliao.bsky.social @parastooabtahi.bsky.social) & everyone who supported me ❤️
📢 Next I'll join Apple as a research scientist in the Responsible AI team led by @jeffreybigham.com!
07.05.2025 20:46 — 👍 59 🔁 5 💬 6 📌 1
Tue April 29: I'll be cheering on Indu Panigrahi as she presents our LBW on interactive AI explanations (w/ Amna, Rohan, Olga, Ruth, @parastooabtahi.bsky.social) in the 10:30-11:10am and 3:40-4:20pm poster sessions (North 1F)
🧵 bsky.app/profile/para...
📌 programs.sigchi.org/chi/2025/pro...
25.04.2025 00:09 — 👍 2 🔁 1 💬 1 📌 0
In collaboration with @sunniesuhyoung.bsky.social, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, and Ruth Fong.
Excited to share that Indu will be starting as a PhD student at UIUC this fall! 🎉
18.04.2025 21:14 — 👍 2 🔁 0 💬 0 📌 0
A 3×4 grid showing bird images with visual explanations for Static, Filtering, Overlays, and Counterfactuals across three types: Heatmap, Concept, and Prototype.
Heatmap row:
Color heatmaps over birds with labels “More Important” and “Less Important.” Filtering separates “Most Important Areas” and “Least Important Areas” with a “Show More” slider. Overlays add a tooltip: “The bird part that you are hovering near is: grey bill.” Counterfactuals include prediction text—“‘Heermann’s gull’”—and editable attributes like “Back Pattern” and “Bill Color.”
Concept row:
Bar charts show the importance of features like “black bill” and “white tail.” Filtering splits “Positive” and “Negative Concepts” with sliders. Overlays label parts like “spotted belly” and “grey wing.” Counterfactuals show the prediction “pine grosbeak” with concept bars and edit options like “Tail Color.”
Prototype row:
Birds are overlaid with patches showing similarity scores (e.g., “0.98 similar”). Filtering compares “Prototypes” and “Criticisms.” Overlays highlight areas with tooltips like “grey crown.” Counterfactuals include the label “Eastern towhee” and editable features like “Belly Color” and “Wing Color.”
This is a qualitative study of how simple interactive mechanisms—filtering, overlaid annotations, and counterfactual image edits—might address existing challenges with static CV explanations, such as information overload, semantic-pixel gap, and limited opportunities for exploration.
18.04.2025 21:14 — 👍 1 🔁 0 💬 1 📌 0
Title: “Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations.” Authors: Indu Panigrahi, Sunnie S. Y. Kim*, Amna Liaqat*, Rohan Jinturkar, Olga Russakovsky, Ruth Fong, Parastoo Abtahi. Logos: Princeton University, NSF, OpenPhil, Princeton HCI, Open Glass Lab, and Princeton Visual AI Lab. CHI 2025, April 26–May 1, 2025, Yokohama, Japan, including illustrations of Yokohama’s skyline, ferris wheel, and a pink sailboat labeled “CHI.”
Check out Indu Panigrahi’s LBW at #CHI2025: “Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations.”
🔗 Project Page: ind1010.github.io/interactive_XAI
📄 Extended Abstract: arxiv.org/abs/2504.10745
18.04.2025 21:14 — 👍 7 🔁 2 💬 1 📌 1
Boosting this up for a last chance to join us at #CHI2025 as associate chairs (ACs) for the @chi.acm.org Late Breaking Work program! Please forward to anyone you think might be interested.
30.11.2024 22:29 — 👍 11 🔁 5 💬 0 📌 0
HCI researchers starter pack. Lets you follow a bunch of HCI people at once (which the HCI list didn't let you do).
Again, ask to be added if I missed you.
go.bsky.app/p3TLwt
13.11.2024 11:05 — 👍 31 🔁 11 💬 27 📌 1
Thanks for putting this together! I recently joined, and this is very helpful—would love to be added!
19.11.2024 14:54 — 👍 1 🔁 0 💬 1 📌 0
I’m new here, so would be great to be added—thanks for putting this together!
19.11.2024 14:51 — 👍 1 🔁 0 💬 0 📌 0
Is there a way to join a starter pack?
19.11.2024 14:49 — 👍 0 🔁 0 💬 1 📌 0
(1/5) Very excited to announce the publication of Bayesian Models of Cognition: Reverse Engineering the Mind. More than a decade in the making, it's a big (600+ pages) beautiful book covering both the basics and recent work: mitpress.mit.edu/978026204941...
18.11.2024 16:25 — 👍 521 🔁 119 💬 15 📌 15
PhD Admissions Advice
Sorry about the hidden curriculum. :(
Since #AcademicBluesky seems to be a bigger thing now, I wanted to share my PhD admissions advice YouTube resources. Please pass this on to anyone you think it might help! Probably most useful for STEM and especially CS adjacent fields, but broadly applicable. cfiesler.medium.com/phd-admissio...
18.11.2024 15:32 — 👍 66 🔁 21 💬 0 📌 1
Computational models of human behavior. Prof at Aalto University. Group page: http://cbl.aalto.fi
physician-scientist, author, editor
https://www.scripps.edu/faculty/topol/
Ground Truths https://erictopol.substack.com
SUPER AGERS https://www.simonandschuster.com/books/Super-Agers/Eric-Topol/9781668067666
stanford hai postdoc, [incoming] nus assistant professor
previously: postdoc at ucsd's design lab, phd at stanford hci, bse at princeton
ejane.me
Sr. Principal Research Manager at Microsoft Research, NYC // Machine Learning, Responsible AI, Transparency, Intelligibility, Human-AI Interaction // WiML Co-founder // Former NeurIPS & current FAccT Program Co-chair // Brooklyn, NY // http://jennwv.com
I study how people solve big problems with small brains. Starting at Dartmouth in 2026—I'm recruiting!
https://fredcallaway.com
Creative AI / creative computing @ University of the Arts London. https://researchers.arts.ac.uk/1594-rebecca-fiebrink
Creator of https://www.wekinator.org/ for creative interactive supervised learning.
I like food too.
Assistant Professor @Stanford CS @StanfordNLP @StanfordAILab
Computational Social Science & NLP
Professor - Carnegie Mellon University - Human Computer Interaction
Ubiquitous Computing - Usable Privacy and Security - Responsible AI
Co-Founder - Wombat Security (acquired) - FuguUX
Prof at UChicago computer science dept working in HCI and usable privacy and security
Assistant professor of CS at UC Berkeley, core faculty in Computational Precision Health. Developing ML methods to study health and inequality. "On the whole, though, I take the side of amazement."
https://people.eecs.berkeley.edu/~emmapierson/
Prof (CS @Stanford), Co-Director @StanfordHAI, Cofounder/CEO @theworldlabs, CoFounder @ai4allorg #AI #computervision #robotics #AI-healthcare
Professor and Chair at @uw-hcde.bsky.social; Human-Computer Interaction researcher studying tech for health, education, & families; Academic mama. Views are my own.
Husband, father, gamer, professor, researcher, tinkerer.
AR/VR researcher since 1991. Bad at Overwatch and Valorant.
Computer Scientist and Neuroscientist. Lead of the Blended Intelligence Research & Devices (BIRD) @GoogleXR. ex- Extended Perception Interaction & Cognition (EPIC) @MSFTResearch. ex- @Airbus applied maths
asst prof at penn state, workers/AI/surveillance. she/her 🏳️⚧️
Research Scientist @Google | Prev. PhD @Stanford | HCI + XR
circus geek, usable ML hipster, hot dog aficionado, gen AI @Meta (previous: design @Apple, research @GoogleAI)
Computer scientist working on Human-Computer Interaction, in particular building and using theories of interactive behavior.
Research Scientist, Adobe
www.dogadogan.com