
Rayhan Rashed

@rayhan.io.bsky.social

Human-AI Interaction, Situated in Social Computing

6 Followers  |  13 Following  |  8 Posts  |  Joined: 22.11.2024

Latest posts by rayhan.io on Bluesky


Along the way, the work received recognition including Best Application of AI (Michigan #AI Symposium) and Best Poster (Michigan #HCAI). I also presented it at the #Stanford Trust and Safety Conference, where it sparked a lot of great conversations!

Paper, summary, and demo: rayhan.io/diymod

02.02.2026 04:03 — 👍 0    🔁 0    💬 0    📌 0

This was a full end-to-end HCI project (needfinding → design → build → multi-round evaluation), and one of the most fun (and intense) things I've built from the ground up.

02.02.2026 04:03 — 👍 0    🔁 0    💬 1    📌 0

Our CHI'26 paper + system transforms social media content in real time, based on your definition of harm. Rather than removing content, it can obfuscate, re-render, or modify text and images, softening what a user wants to avoid while preserving what they still find valuable in the same post.

02.02.2026 04:03 — 👍 0    🔁 0    💬 1    📌 0
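To make the idea concrete, here is a minimal, purely illustrative Python sketch of per-user content transformation. It is not the paper's implementation: the rule format, keyword matching, and block-character obfuscation are all assumptions made for the sketch, while the actual system works from a user's own harm definition and supports richer transforms (obfuscating, re-rendering, or modifying text and images).

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Post:
        text: str

    @dataclass
    class HarmRule:
        # One user's personalized notion of harm. A keyword trigger keeps
        # the sketch self-contained; a real system would use a classifier
        # or LLM to detect what this user considers harmful.
        trigger: str
        transform: Callable[[str], str]

    def obfuscate(span: str) -> str:
        # Soften the harmful span instead of deleting the whole post.
        return "".join(" " if c.isspace() else "█" for c in span)

    def moderate(post: Post, rules: list[HarmRule]) -> Post:
        # Transform only the spans a user flagged, preserving the rest
        # of the post's social and informational value.
        text = post.text
        for rule in rules:
            start = text.lower().find(rule.trigger.lower())
            if start != -1:
                end = start + len(rule.trigger)
                text = text[:start] + rule.transform(text[start:end]) + text[end:]
        return Post(text=text)

    if __name__ == "__main__":
        rules = [HarmRule(trigger="graphic injury", transform=obfuscate)]
        post = Post(text="Race recap (contains a graphic injury photo) - wild finish!")
        print(moderate(post, rules).text)
        # -> Race recap (contains a ███████ ██████ photo) - wild finish!

The point of the sketch is the contrast with removal: the post stays visible and readable, and only the span matching this user's rule is softened, so another user with different rules would see the same post differently.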

So we asked: what if moderation could act on personalized notions of harm, while preserving as much social and informational value as possible?
What if that meant transforming content instead of suppressing it?

02.02.2026 04:03 — 👍 0    🔁 0    💬 1    📌 0

When platforms decide globally what's "safe", they end up doing two things at once: failing to protect users from what they find harmful, and bluntly suppressing content that others would want to engage with.

02.02.2026 04:03 — 👍 0    🔁 0    💬 1    📌 0

What if moderation didn't mean suppression for the user? #CHI2026
For the past year, my advisor @farnazj.bsky.social and I have been exploring a frustration: content moderation is centralized and binary, even though harm is often subjective.

02.02.2026 04:03 — 👍 3    🔁 0    💬 1    📌 1

