
Joseph Seering

@josephseering.bsky.social

Assistant Prof at KAIST School of Computing. HCI, AI, T&S.

491 Followers  |  85 Following  |  62 Posts  |  Joined: 24.05.2023

Latest posts by josephseering.bsky.social on Bluesky

These five papers are starting points for the work we're doing, and our next round of work is already well underway. I'm excited to be able to share our successes so far, but equally excited for what's still to come!

16.04.2025 11:13 — 👍 1    🔁 0    💬 0    📌 0

Even when platforms do not provide tools that support restorative processes, creative users will build them themselves. This paper shows how user-created appeals systems are constructed, what goals the users have, and what these processes can accomplish.

16.04.2025 11:13 — 👍 1    🔁 0    💬 1    📌 0

This work, also led by Juhoon Lee @juhoonlee.bsky.social with support from Bich Ngoc (Rubi) Doan and Jonghyun Jee, maps the complex and impressive systems that users have built in order to incorporate custom appeals processes into their Discord servers.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

A final paper in this line of work -- also to be presented at CSCW 2025 -- offers some hope in this regard, looking at user-created and managed appeals systems in Discord communities. joseph.seering.org/papers/Lee_e...

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

It is deeply concerning that the spaces where today's young people are developing social skills have been designed without any clear place for apologies. We need young people to be learning conflict resolution skills that are more nuanced than just "ban or block and move on".

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

A number of Discord moderators gave feedback on the bot, and some tested it in their servers, but a major takeaway was how alien apologies seem to have become to the process of online safety.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

Bich Ngoc (Rubi) Doan built "ApoloBot", a Discord bot designed to facilitate apologies as part of the restorative processes in Discord servers. This system supports moderators throughout the process of initiating and monitoring apology-giving.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

On this note, a third paper to be presented at CHI 2025 tackles this issue more broadly from a design perspective, noting how modern social platforms rarely provide features to support one of the most fundamental human communication processes: apologies. joseph.seering.org/papers/Doan_...

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

Together, these two papers argue that online child safety cannot be understood solely as the process of preventing harm to children, but rather must be seen as the process of developing better opportunities for young users to learn and grow online.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

Teens we interviewed were learning organization, management, and conflict resolution skills that they might otherwise have few opportunities to practice, and they took deep, genuine pride in the communities they had helped to build.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

This may at first seem concerning -- and the paper outlines some of the potential risks -- but it should also be understood as an incredible growth opportunity for young people if they are sufficiently well supported.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

Though we usually think of moderation as a role for adults, a striking number of Discord servers (and likely other online social spaces) are moderated in part by teens. We found servers with many thousands of users that had 14- and 15-year-olds on their volunteer moderation teams.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

Another fantastic paper focused on the safety experiences of young users, led by Jina Yoon in collaboration with @axz.bsky.social, will be presented at CSCW 2025. joseph.seering.org/papers/Yoon_...

16.04.2025 11:13 — 👍 4    🔁 1    💬 1    📌 0

The way Roblox has energized young users to learn how to create -- via scripting, 3D modeling, etc. -- is fantastic, but this paper shows the challenges that arise when young users flock to spaces that have traditionally been much more adult-oriented: online developer communities.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

Roblox is, in my opinion, the most under-studied social space in all of HCI. It is massively popular among young users, but almost totally ignored by adults. Roblox works on a model similar to YouTube, but for games: assets and experiences are created by users.

16.04.2025 11:13 — 👍 1    🔁 0    💬 1    📌 0

Another paper, by Yubin Choi and Jeanne Choi, to be presented at CHI 2025, explores quite a different context, though still within the game space: online developer communities for Roblox. joseph.seering.org/papers/Choi_...

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

Wonderful also to collaborate with Juho Kim and Jeong-woo Jang, as well as some of their students!

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

I love this paper for its granular attention to the details and features that support communication in League of Legends. Riot Games has put a lot of effort into creating a plethora of ways to communicate, but the real problems here may exist at a more structural level.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

In an in-depth, in-the-moment exploration of League of Legends team communication, this paper shows how the conditions of play have created an environment where the very act of communication leads to distrust.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

One of our first papers, led by Juhoon Lee @juhoonlee.bsky.social and to be presented at CHI 2025, visits a classic context for online safety: online MOBA games. This paper challenges the narrative that conflict can be prevented by facilitating smoother communication. joseph.seering.org/papers/Lee_e...

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

We've covered a range of topics within a broad vision for online safety -- focused not only on detection and enforcement, but also on understanding the conditions necessary for safety and the diversity of possible approaches to improving it.

16.04.2025 11:13 — 👍 0    🔁 0    💬 1    📌 0

Hello Bluesky! I've been quiet for a while on the social media front -- it's been a very busy two years with an international move and setting up a new lab at KAIST, but I wanted to take a moment to highlight some of my students' fantastic new work in the domain of online safety.

16.04.2025 11:13 — 👍 7    🔁 0    💬 1    📌 0

The 2025 Jang Young Sil Fellow Program is accepting applications for one-year postdoc positions at KAIST. The deadline is March 13 at 4 PM (KST). If you are interested in applying, please reach out and I can provide more details.

05.03.2025 05:13 — 👍 1    🔁 0    💬 0    📌 0

I had an interesting conversation a couple of years ago about whether ~AI-generated content creators should be handled the same as human content creators from a T&S perspective. At the time, it was an academic conversation, but it seems to be increasingly relevant now.

26.02.2025 05:29 — 👍 0    🔁 0    💬 0    📌 0

Generally speaking, if community moderators want a feature enough to build it themselves, it's often worth considering for wider deployment. Many of the most powerful user-facing moderation tools on platforms started as third-party concepts built by users to meet their specific needs.

14.02.2025 04:42 — 👍 2    🔁 0    💬 0    📌 0

This is a great feature idea, and FWIW very similar features are used in community moderation, where moderators can leave notes about particular users to remind themselves and other mods. Mostly this is done via third-party tools, though some first-party versions exist too. No reason it wouldn't work on bsky.

14.02.2025 04:39 — 👍 3    🔁 0    💬 0    📌 0

There are probably a zillion faculty teaching intro UX/HCI classes who would love to have you for a guest lecture, if that sounds like a good warm-up.

27.01.2025 11:49 — 👍 3    🔁 0    💬 1    📌 0

Yes, likely destructive, but also somewhat unsurprising. Given the extreme popularity of platforms like c.ai, it's expected that Facebook would try to capitalize on that trend.

(Never mind the legitimate concerns about the impact of c.ai on young users.)

31.12.2024 01:12 — 👍 6    🔁 0    💬 1    📌 0

I wonder whether there was any serious discussion about not implementing this. It may seem like a no-brainer, but there's a genuine discussion to be had about value added vs. increased safety costs.

29.12.2024 15:56 — 👍 7    🔁 0    💬 0    📌 0

If there were a good way to consistently/efficiently take off-service conduct into account, far more platforms would do it, but it's really hard to build a good process for that.

As a comparison case, Twitch has a pretty interesting off-service conduct policy:
safety.twitch.tv/s/article/Co...

14.12.2024 14:09 — 👍 4    🔁 0    💬 0    📌 0
