
Alice Hunsberger

@aagh.bsky.social

Trust & Safety loudmouth. (VP of T&S at PartnerHero; writes T&S Insider; host of Trust in Tech podcast) Intro thread: https://bsky.app/profile/aagh.bsky.social/post/3lahft5kv232c More here: https://alicelinks.com/about-alice

1,577 Followers  |  430 Following  |  144 Posts  |  Joined: 19.06.2023

Latest posts by aagh.bsky.social on Bluesky

Frontline teams will tell you that hate from customers is nothing new, but I really do think it will ramp up over the next few years, and I want to make sure our frontline teams don't suffer for it.

/🧡

21.01.2025 17:34 — 👍 6    🔁 1    💬 0    📌 0

Create resources and support for your users/ customers who may also be the target of harassment and hate. Signal to them in signs/ messages/ FAQs/ knowledge bases/ etc. that your team will support them.

21.01.2025 17:34 — 👍 5    🔁 0    💬 1    📌 0

Let your team use fake names when responding to the public, or (even better) don't use names at all.

Recognize that some open ways of signaling support (i.e. putting pronouns in your signature) will also open people up to harassment.

Allow folks to make decisions based on what's right for them.

21.01.2025 17:34 — 👍 5    🔁 0    💬 1    📌 0

Create psychological safety for your team.
Let them know that you have their back.
Listen to them.
Check in on them.

Make sure they have benefits that cover mental health support.
We're all going to need it.

21.01.2025 17:33 — 👍 1    🔁 0    💬 1    📌 0

It can be tempting to ask employees who are part of a marginalized community to help you create inclusive policies.

@anikacolliernavaroli.com calls this "compelled identity labor": hire people whose explicit job is to be an expert, instead of voluntelling your employees who have other jobs to do.

21.01.2025 17:33 — 👍 1    🔁 0    💬 1    📌 0

Create a "no questions asked" escalation policy, so that frontline staff can escalate to a manager if they feel unsafe or unable to answer a question.

Make sure that escalation chain goes all the way up to VP or C-Suite level so everyone is supported.

21.01.2025 17:30 — 👍 1    🔁 0    💬 1    📌 0

Write policies about expected user/ customer behavior, make them public, and hold people to them.

"We will ban you if you disrespect or threaten our staff", for example.

Or "We will ban you if you report trans people simply for being trans."

21.01.2025 17:29 — 👍 1    🔁 0    💬 1    📌 0

Get really clear with senior leaders of the company you work for about corporate values and how to uphold them.

Create tricky hypothetical scenarios (i.e. your biggest client sends a racist email; someone threatens to sue you for having a DEI program) and get answers BEFORE you need them.

21.01.2025 17:29 — 👍 1    🔁 0    💬 1    📌 0

The 47th president proudly spoke against trans people and anti-racism in his inauguration speech: people will now feel more empowered to spew hate.

If you manage support, marketing, trust & safety, etc. create a playbook NOW to support & empower your team to respond to hateful customers.

🧡

21.01.2025 17:28 — 👍 5    🔁 3    💬 2    📌 0

Huge thanks to the Integrity Institute and TSPA/ TrustCon for enabling these kinds of discussions among t&s folks.

If you know of other conversations/ resources in this area, or are an expert and want to be on the podcast or chat TrustCon proposals, let me know!

10.01.2025 21:34 — 👍 2    🔁 1    💬 0    📌 0

3️⃣ @anikacolliernavaroli.com writes about the harms for moderators from marginalized communities asked to work on content that attacks them.

www.cjr.org/tow_center/b...

10.01.2025 21:32 — 👍 5    🔁 2    💬 1    📌 0

2️⃣ @jenniolsonsf.bsky.social from GLAAD talks about advocating for the LGBTQ+ community with Meta; the challenges of balancing free speech w/ protecting marginalized communities; & suggestions for folks working at social media platforms to advocate for change.

integrityinstitute.org/podcast/its-...

10.01.2025 21:30 — 👍 3    🔁 1    💬 1    📌 0

1️⃣ Nadah Feteih discusses how tech workers (in integrity and t&s teams) can speak up about ethical issues at their workplace; activism from within the industry; compelled identity labor, balancing speaking up and staying silent, and more.

integrityinstitute.org/podcast/work...

10.01.2025 21:28 — 👍 3    🔁 1    💬 1    📌 0

Many of us working at tech companies are having to make moral and ethical decisions when it comes to where we work, what we work on, and what we speak up about. It's super difficult to know what to do, or even what your options are!

🧡 with resources

10.01.2025 21:26 — 👍 19    🔁 8    💬 2    📌 0
Alice dressed as the "this is fine" dog meme. She has on dog ears and is holding a coffee mug while the office behind her is burning.

Alice grimacing while holding a "this is fine" meme toy. It's a dog sitting on a dumpster on fire.

Every t&s professional I know this week.

❤️ to those who are doing their best in wild times.

10.01.2025 21:13 — 👍 17    🔁 2    💬 0    📌 0

THIS IS WHAT STOOD OUT TO ME. As someone who had to deal with user-report-only systems for years… they do not work.

10.01.2025 15:55 — 👍 2    🔁 0    💬 0    📌 0

It's fascinating because right now content moderation and general vibes are a main differentiator between Threads and X. When Threads feels more like X, they'll be closer competitors than ever before.

Looking forward to more people here on Bluesky :)

09.01.2025 17:11 — 👍 2    🔁 0    💬 0    📌 0

Actually 1 more thing:

This allows Meta to dodge responsibility. "The users don't like it. They reported it. It's not us."

It won't make moderation more fair or better. It'll be less consistent.

But it gives Meta an excuse that is more politically accepted right now.

09.01.2025 13:43 — 👍 3    🔁 0    💬 0    📌 0

This, combined with the rollback of hate policies, is REALLY going to change the vibes of Meta-run platforms.

/🧡

09.01.2025 13:21 — 👍 2    🔁 0    💬 1    📌 0

Honestly, I feel it's often better to just not have the rule at all if you can't proactively detect and remove violations.

Automated detection isn't perfect by any means, but it's a heck of a lot better than user reports alone.

09.01.2025 13:21 — 👍 2    🔁 0    💬 1    📌 0
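The point about combining signals can be made concrete with a toy sketch (all names, scores, and thresholds here are hypothetical, not any real platform's system): a post gets queued for human review if *either* a user reports it *or* an automated classifier flags it, so violations nobody bothers to report still get caught.

```python
def should_review(reports, classifier_score, threshold=0.8):
    """Queue a post for human review if either signal fires."""
    reported = len(reports) > 0              # signal 1: any user report
    flagged = classifier_score >= threshold  # signal 2: automated classifier
    return reported or flagged

# Hypothetical posts: one violating but never reported, one reported,
# one benign with no signal at all.
posts = [
    {"id": 1, "reports": [],     "score": 0.95},  # violating, unreported
    {"id": 2, "reports": ["u7"], "score": 0.10},  # reported by a user
    {"id": 3, "reports": [],     "score": 0.20},  # benign
]

combined = [p["id"] for p in posts
            if should_review(p["reports"], p["score"])]
reports_only = [p["id"] for p in posts if p["reports"]]

print(combined)      # [1, 2]
print(reports_only)  # [2] -- post 1 slips through entirely
```

A report-only pipeline misses post 1 completely, which is the "spotty enforcement" described above: enforcement depends on whether anyone happened to report, not on whether the post violates policy.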

Other users will have their content removed after being reported, but feel it's unfair because so many other people got away with it.

09.01.2025 13:20 — 👍 2    🔁 0    💬 1    📌 0

Relying on user reports alone means that the platform will have very spotty enforcement of some rules.

Many users will get away with rule-violating behavior because it is never reported.

09.01.2025 13:20 — 👍 2    🔁 0    💬 1    📌 0

Policies are only as good as ENFORCEMENT — and consistent enforcement at that.

I learned this the hard way when I was head of t&s at a platform with little to no automated detection.

09.01.2025 13:20 — 👍 3    🔁 0    💬 1    📌 0

— if other people are being hateful and harassing others, then users will want to fight back/ pile on/ get involved.

… or they will want to leave.

09.01.2025 13:20 — 👍 2    🔁 0    💬 1    📌 0

If they see lots of folks doing something bad and it's not immediately removed, they assume it's ok and they won't report.

Even worse, they will often start exhibiting the same behavior themselves.

09.01.2025 13:19 — 👍 2    🔁 0    💬 1    📌 0

Most users don't read policies.

They're not experts on which kinds of hate speech are ok and not ok (especially confusing on Meta's platforms now, after recent policy changes).

Mostly, users go along with the vibe of a place.

09.01.2025 13:19 — 👍 3    🔁 0    💬 1    📌 0

One thing I haven't seen anyone talk about with Meta's moderation changes:

they now rely on manual user reports alone for "less severe" issues that still violate their policies, rather than on a combination of user reports and automated detection.

It's bad. Here's why:

🧡

09.01.2025 13:18 — 👍 5    🔁 0    💬 1    📌 0

It was a good read!

08.01.2025 15:26 — 👍 2    🔁 0    💬 0    📌 0
Instagram blocked teens from searching LGBTQ-related content for months. Posts with LGBTQ+ hashtags were hidden under Meta's "sensitive content" policy, which restricts "sexually suggestive content".

@taylorlorenz.bsky.social's article on restricting LGBTQ content: www.usermag.co/p/instagram-...

07.01.2025 18:36 — 👍 1    🔁 0    💬 0    📌 0

They claim it was a mistake and have reversed it

(And I understand; I have made mistakes too!)

But it's an interesting illustration of how speech & censorship cut both ways: it's hard to be Pro Free Speech (allowing speech against identities) while restricting content about those identities.

07.01.2025 18:35 — 👍 1    🔁 0    💬 1    📌 0
