Along the way, the work received recognition including Best Application of AI (Michigan #AI Symposium) and Best Poster (Michigan #HCAI), and presenting it at the #Stanford Trust and Safety Conference sparked a lot of great conversations!
Paper, summary, and demo: rayhan.io/diymod
02.02.2026 04:03
This was a full end-to-end HCI project (needfinding → design → build → multi-round evaluation), and one of the most fun (and intense) things I've built from the ground up.
02.02.2026 04:03
Our CHI'26 paper + system transforms social media content in real time, based on your definition of harm. Rather than removing content, it can obfuscate, re-render, or modify text and images, softening what a user wants to avoid while preserving what they still find valuable in the same post.
02.02.2026 04:03
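(The thread stays high-level, but the transform-rather-than-remove idea is easy to sketch. Below is a minimal, hypothetical Python illustration; the names, the keyword matching, and the placeholder transformations are my own and not from the paper, which presumably uses generative models for the actual classification and rewriting.)

```python
# Hypothetical sketch of per-user content transformation (not the paper's code).
# Idea from the thread: instead of removing a post that matches a user's own
# definition of harm, soften the harmful part while keeping the rest readable.

from dataclasses import dataclass


@dataclass
class UserHarmProfile:
    """A user's personal definition of harm, e.g. 'spoilers' or 'graphic injury'."""
    description: str
    action: str  # "obfuscate", "re-render", or "modify"


def matches_harm(post_text: str, profile: UserHarmProfile) -> bool:
    # Placeholder classifier: a real system would likely condition an LLM or
    # fine-tuned model on the user's harm description instead of keywords.
    return any(word in post_text.lower()
               for word in profile.description.lower().split())


def transform_post(post_text: str, profile: UserHarmProfile) -> str:
    """Return a softened version of the post rather than suppressing it."""
    if not matches_harm(post_text, profile):
        return post_text  # nothing the user wants to avoid; leave untouched
    if profile.action == "obfuscate":
        # Placeholder: redact up front, keep a readable stub of the original.
        return "[softened per your settings] " + post_text[:40] + "..."
    # "re-render" / "modify" would rewrite text or images generatively.
    return post_text


if __name__ == "__main__":
    profile = UserHarmProfile(description="spoilers", action="obfuscate")
    print(transform_post("Huge spoilers for the finale: ...", profile))
```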
So we asked: what if moderation could act on personalized notions of harm, while preserving as much social and informational value as possible?
What if that meant transforming content instead of suppressing it?
02.02.2026 04:03
When platforms decide globally what's "safe", they end up doing two things at once: failing to protect users from what they find harmful, and bluntly suppressing content that others would want to engage with.
02.02.2026 04:03
What if moderation didn't mean suppression for the user? #CHI2026
For the past year, my advisor @farnazj.bsky.social and I have been exploring a frustration: content moderation is centralized and binary, even though harm is often subjective.
02.02.2026 04:03