We are presenting two papers at UbiComp 2025 (Espoo, Finland), both of which focus on mobile and wearable experience sampling. Jixin Li will be leading the presentations (and is on the job market!). Say hello if you are around and ask a lot of questions!
13.10.2025 23:48 · 2 likes · 0 reposts · 0 replies · 0 quotes
+ This work was funded by the NIH. It shows how funding agencies don't just contribute to public health but also enable new tools and methods for data collection and AI development.
06.07.2025 13:33 · 0 likes · 0 reposts · 0 replies · 0 quotes
TIME study documentation
Congratulations to all the co-authors, @shirlen3.bsky.social , Jixin Li, @genevievedunton.bsky.social , Wei-Lin Wang, Don Hedeker, and Stephen Intille. The details of the TIME study can be found here: timestudydocumentation.github.io/docs/build/h... (Reach out for questions or more info!)
06.07.2025 13:25 · 0 likes · 0 reposts · 1 reply · 0 quotes
This is one of the most intense EMA data collection studies leveraging smartphones and smartwatches. The μEMA method can be leveraged reliably for large-scale personalized data collection, just-in-time adaptive interventions, and human-in-the-loop ML that requires human feedback.
06.07.2025 13:25 · 1 like · 0 reposts · 1 reply · 0 quotes
Finally, if you are wondering whether μEMA and EMA collect similar data: we compared the user-level variability captured by μEMA and EMA across 11 affect-based constructs and found a moderate to strong positive correlation between μEMA and EMA variability across constructs.
06.07.2025 13:25 · 0 likes · 0 reposts · 1 reply · 0 quotes
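For the curious, that variability comparison can be sketched in a few lines of R. This is only an illustration, not the paper's analysis code; the data frame responses and its columns (user_id, method, construct, score) are hypothetical names.

# responses: one row per answered prompt, with hypothetical columns
#   user_id, method ("uEMA" or "EMA"), construct, score
library(dplyr)
library(tidyr)

user_variability <- responses %>%
  group_by(user_id, method, construct) %>%
  summarise(sd_score = sd(score, na.rm = TRUE), .groups = "drop") %>%
  pivot_wider(names_from = method, values_from = sd_score)

# one correlation per construct: do users who vary a lot in uEMA also vary a lot in EMA?
user_variability %>%
  group_by(construct) %>%
  summarise(r = cor(uEMA, EMA, use = "complete.obs"))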
Third, when we measured user burden at the end of the 12 months of data collection (only among those who completed the study), μEMA was still perceived as less burdensome, even in this group where survivor bias is possible.
06.07.2025 13:25 · 0 likes · 0 reposts · 1 reply · 0 quotes
Second, we observed that regardless of users' engagement with the data collection study (e.g., those who completed vs. withdrew vs. were unenrolled), μEMA was consistently perceived as less burdensome, despite far more frequent interruptions longitudinally.
06.07.2025 13:25 · 0 likes · 0 reposts · 1 reply · 0 quotes
This means users who withdrew or were unenrolled because of poor engagement with EMA were twice as likely to answer μEMA surveys in real-world settings. We also suspect some survivor bias among users who successfully completed 12 months of data collection.
06.07.2025 13:25 · 0 likes · 0 reposts · 1 reply · 0 quotes
First, we found that μEMA response rates were highest among those users who were unenrolled by research staff or voluntarily withdrew from data collection because of EMA burden. This response rate difference was negligible among those who completed 12 months of data collection.
06.07.2025 13:25 · 0 likes · 0 reposts · 1 reply · 0 quotes
As a result, we modeled ~1.3 million μEMA surveys and 14.9K EMA surveys collected across N = 177 participants, resulting in ~50K data collection days.
06.07.2025 13:25 · 0 likes · 0 reposts · 1 reply · 0 quotes
But for μEMA, each interruption presented only one micro-question with a yes/no-type answer that could be answered with a quick micro-interaction (taking barely 2 seconds). In EMA, users answered long surveys with multiple back-to-back questions.
06.07.2025 13:25 · 0 likes · 0 reposts · 1 reply · 0 quotes
We used data collected in the TIME study, where users responded to surveys using μEMA and EMA in a burst-based longitudinal experiment. The μEMA method collected data on a smartwatch 4 times/hr for ~270 days. The EMA method collected data once/hr for ~90 days.
06.07.2025 13:25 · 0 likes · 0 reposts · 1 reply · 0 quotes
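A quick back-of-envelope in R shows how different the interruption load is under those two schedules. The 16 waking hours per day is an assumed figure for illustration only, not a number from the study.

waking_hours <- 16                       # assumed waking hours per day (illustration only)
uema_per_day <- 4 * waking_hours         # ~64 watch micro-interruptions per day
ema_per_day  <- 1 * waking_hours         # ~16 phone survey interruptions per day
c(uEMA = uema_per_day * 270, EMA = ema_per_day * 90)
# roughly 17,280 vs. 1,440 prompts per person over the study period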
🥳 Pleased to share that our paper "Longitudinal User Engagement with Microinteraction Ecological Momentary Assessment (μEMA)" has been accepted at #IMWUT. In this paper, we conducted the first large-scale longitudinal comparison of μEMA and #EMA over a period of one year.
06.07.2025 13:25 · 3 likes · 0 reposts · 1 reply · 0 quotes
Totally love the fact that Copilot in RStudio uses my code comments in markdown as prompts. It has improved my productivity 2x for mundane data programming.
21.01.2025 02:02 · 3 likes · 0 reposts · 0 replies · 0 quotes
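As an illustration of that workflow (hypothetical data frame and column names, not code from any study): you type the task as a comment, and Copilot proposes the lines that follow.

# Comment written as the "prompt"; the completion below is the kind of code Copilot suggests.
# summarise mean daily step count per participant and plot it as a histogram
library(dplyr)
library(ggplot2)

daily_steps %>%
  group_by(user_id) %>%
  summarise(mean_steps = mean(steps, na.rm = TRUE)) %>%
  ggplot(aes(x = mean_steps)) +
  geom_histogram(bins = 30)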
What's common between Twitter (or X) and Bluesky? They both don't care about Indians. X amplifies racist attacks on Indians. Bluesky is busy discussing whether Elon was on an H1B or not. If you are an Indian on a non-immigrant visa, you are on your own. Just do your thing, work hard, and keep progressing.
29.12.2024 23:35 · 2 likes · 0 reposts · 0 replies · 0 quotes
I just found out (from the post below) that the DARPA project I've been working on for the past couple of years was unexpectedly axed. The goal of the project: investigate foreign influence campaigns. Feeling like this is a sign of things to come...
05.12.2024 05:48 · 4 likes · 1 repost · 0 replies · 0 quotes
Also, given that the data was collected during early adoption of the platform, will it change the way people use it?
27.11.2024 15:01 · 0 likes · 0 reposts · 0 replies · 0 quotes
I'm treating the starter pack labels as a broad intent search query, so posts from the child accounts that are not relevant can be automatically hidden from my timeline. Just thinking aloud
27.11.2024 04:13 · 1 like · 0 reposts · 1 reply · 0 quotes
Yeah, that would be nice. Right now those tabs are manual. But also, let's say I follow an "HCI researchers" starter pack. I'm following folks with the expectation that it'll be about relevant HCI research. In a perfect world, I'll be happy to not see non-HCI posts from those accounts.
27.11.2024 04:13 · 2 likes · 0 reposts · 1 reply · 0 quotes
I hope @bsky.app uses starter pack names as some kind of content preference labels. If I follow someone from a starter pack named "experience sampling researchers", it'll be cool to see only filtered posts from that user on experience sampling 🤔
26.11.2024 23:05 · 3 likes · 0 reposts · 1 reply · 0 quotes
Also wondering how these EMA surveys are delivered. On a mobile phone, numeric rating scales tend to fall below the fold, biasing responses. Usually a fully labeled 5-point scale has worked better for us. Of course, you compromise on sensitivity.
26.11.2024 00:48 · 0 likes · 0 reposts · 0 replies · 0 quotes
Hey Chris! Fellow Quant UXR here! Can you add me to the pack?
25.11.2024 23:10 · 1 like · 0 reposts · 1 reply · 0 quotes
Add me!
25.11.2024 14:11 · 0 likes · 0 reposts · 1 reply · 0 quotes
data-is-better-together (Data Is Better Together)
Building better datasets together
I am very excited to launch a new community initiative next week.
Let's build the largest open community dataset to evaluate and improve image generation models.
Follow:
huggingface.co/data-is-bett...
And stay tuned here
24.11.2024 17:51 · 85 likes · 12 reposts · 1 reply · 4 quotes
Thanks Micheal! We also have an entire portfolio of work addressing non-response and response-quality biases with EMAs. Most up-to-date list here: adityaponnada.github.io//
24.11.2024 17:12 · 1 like · 0 reposts · 1 reply · 0 quotes
Compared to a random selection of survey questions, our proposed method reduces imputation errors by 15-50% and survey length by 34-56% across real-world datasets, making surveys personalized to each user with reduced burden.
24.11.2024 04:13 · 1 like · 0 reposts · 1 reply · 0 quotes
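The proposed method itself is not described in this thread, but the random-selection baseline it is compared against is easy to sketch in R: ask each user only a random subset of items, impute the unasked ones, and score the imputation error on the held-out answers. Everything below (the responses matrix, mean imputation, the 50% ask rate) is an illustrative assumption, not the authors' pipeline.

set.seed(1)
# responses: numeric matrix, rows = users, columns = survey items (hypothetical)
n_items  <- ncol(responses)
ask_frac <- 0.5                          # assume half the items are actually asked

masked <- responses
for (i in seq_len(nrow(responses))) {
  hidden <- sample(n_items, size = round((1 - ask_frac) * n_items))
  masked[i, hidden] <- NA                # items this user is never asked
}

# simple placeholder imputation: fill each unasked item with the item's mean
item_means <- colMeans(masked, na.rm = TRUE)
imputed <- masked
for (j in seq_len(n_items)) {
  imputed[is.na(imputed[, j]), j] <- item_means[j]
}

# mean absolute imputation error, evaluated only on the unasked entries
mean(abs(imputed[is.na(masked)] - responses[is.na(masked)]))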
CS PhD at Stanford
HCI, Sensing, and Human Health
🇳🇵
shardulsapkota.com
currently researching the social history of the early internet (think algorithms, architectures, and archives)
masters x2 @ stanford / bachelors @ northeastern
first gen | 🏳️‍🌈
The Centre for Human-Computer Interaction Design @citystgeorges.bsky.social, University of London conducts leading user-centred research. Home of the MSc HCID. https://hcid.city/ #HCID2025: https://hcidopenday.co.uk/
Operations Director. Previously TIME, VICE News and the Village Voice. Posting about media, technology and the future of work. Learn more: https://tylerborchers.com/
Professor at Northeastern University • award-winning indie game dev • PhD in game design, aging, motivation & learning • helps game devs to be mindful of older players
Data Science Education & Consulting
Website: https://statisticsglobe.com/
YouTube: youtube.com/@StatisticsGlobe
Asst Prof of Information @ UMich thinking about assumptions built into AI
Blog: https://argmin.substack.com/
Webpage: https://people.eecs.berkeley.edu/~brecht/
AI accountability, audits & eval. Keen on participation & practical outcomes. CS PhDing @UCBerkeley.
asst prof @ cornell info sci | fairness in tech, public health & services | alum of MSR, Stanford ICME, NERA Econ, MIT Math | she/her | koenecke.infosci.cornell.edu
Machine learning researcher, working on causal inference and healthcare applications
Assistant Prof of AI & Decision-Making @MIT EECS
I run the Algorithmic Alignment Group (https://algorithmicalignment.csail.mit.edu/) in CSAIL.
I work on value (mis)alignment in AI systems.
https://people.csail.mit.edu/dhm/
AI Policy Fellow @ Princeton | PhD Carnegie Mellon | privacy, accountability, & algorithmic systems
Director, Center for Tech Responsibility@Brown. FAccT OG. AI Bill of Rights coauthor. Former tech advisor to President Biden @WHOSTP. He/him/his. Posts my own.
Assistant Professor of Sociology at NYU. Classification, prediction, and AI in decision-making, social policy, and law.
www.simonezhang.com
Associate Professor of Sociology, Princeton
http://brandonstewart.org/
🏳️‍🌈 just a simple country AI ethicist | Assistant Professor, Western University 🇨🇦 | he/his/him | no all-male panels | #BLM | 🏳️‍⚧️ ally | views my own
https://starkcontrast.co/
https://starlingcentre.ca/
ML for healthcare and health equity. Assistant Professor at UC Berkeley and UCSF.
https://irenechen.net/
Assistant Professor at UC Berkeley and UCSF.
Machine Learning and AI for Healthcare. https://alaalab.berkeley.edu/
robust/fair/trustworthy ML @ UW Madison // minneapolis // she/her