The first PhilMod talk for 2026 will focus on the epistemology of posting and reposting!
Our guest will be Neri Marsili, Associate Professor in philosophy at the University of Turin.
Link to register: www.eventbrite.co.uk/e/philmod-ta...
I know you think I can see this post, but I can't.
20.02.2025 18:17 — 👍 1 🔁 0 💬 0 📌 0
Or shadow banning
20.02.2025 17:01 — 👍 1 🔁 0 💬 1 📌 0
Also possible the changes haven’t kicked in yet?
18.02.2025 15:38 — 👍 0 🔁 0 💬 0 📌 0
I thought the fact-checkers worked for a third-party org and were not internal moderators. Did you find anything on Meta ending partnerships with the third-party org?
18.02.2025 15:36 — 👍 1 🔁 0 💬 1 📌 0
Yesterday UCM Philosophy hosted @etiennebrown.bsky.social (San José State) for his talk "Recommended Selves: Authenticity and Algorithmic Filtering"
#philsky #philsci
www.youtube.com/watch?v=fDYz...
I don’t know the culture at OpenAI, but I see your point. But doesn’t that mean we need to be more careful about who we allow to develop powerful models? Aren’t we in real trouble if OpenAI doesn’t care that much?
08.02.2025 21:24 — 👍 0 🔁 0 💬 0 📌 0
Thanks! Two questions: (1) Does open source not entail the risk that people who don’t seriously care about safety will develop models too fast? (e.g. DeepSeek). (2) Can you not fight the concentration of private power through gov. regulation (public AI enterprises, antitrust, etc.)?
08.02.2025 16:41 — 👍 1 🔁 0 💬 3 📌 0
This is how I’ve always understood Aristotelians’ point about practical wisdom.
06.02.2025 17:02 — 👍 1 🔁 0 💬 0 📌 0
Time for an emergency session of Intro to Early Modern Philosophy?
02.02.2025 19:45 — 👍 15 🔁 2 💬 0 📌 0
"We don’t want the teenage kids to encounter bullying content only then to have to report it; we want them to be spared the burdens of encountering it in the first place."
31.01.2025 17:22 — 👍 0 🔁 0 💬 0 📌 0
Contrary to most takes I've read, Jeff argues that increasing the confidence an AI classifier requires to flag speech as hate speech is a defensible option, but ceasing to use classifiers to detect hate speech (as Meta is doing) is not.
31.01.2025 17:22 — 👍 0 🔁 0 💬 1 📌 0
Nuanced, cool-headed take on recent content moderation changes at Meta by the inimitable Jeff Howard.
31.01.2025 17:22 — 👍 0 🔁 0 💬 1 📌 0
I think that’s a flex. Dadcred.
18.01.2025 19:39 — 👍 0 🔁 0 💬 0 📌 0
I don't mind being rejected by a journal. What I resent is being rejected for good reasons.
14.01.2025 11:50 — 👍 110 🔁 11 💬 2 📌 0
And just to try to keep Bluesky a positive space: I think that debates about decentralized moderation are fascinating, and I'm grateful to advocates of decentralization for having proposed a way to diminish the power of Musk, Zuckerberg, etc. I see this as a public service.
17.01.2025 16:57 — 👍 1 🔁 0 💬 0 📌 0
10/10 Let us say I try to make it so that you can't view hate speech on your FreeSpeechSky app; it's not obvious at all that I'm trying to regulate the public sphere. You might argue that I'm trying to unduly regulate something akin to your living room.
17.01.2025 16:54 — 👍 0 🔁 0 💬 1 📌 0
9/ I’ll end here, but I recognize that things are more complicated. One further idea to consider is that decentralization – and social media generally – blurs the distinction between public and private speech.
17.01.2025 16:54 — 👍 0 🔁 0 💬 1 📌 0
8/ And the Court ruled that Canadians would not want other Canadians to have access to violent pornography, partly on the grounds that it harmed women as a group (that's another debate, of course).
17.01.2025 16:54 — 👍 0 🔁 0 💬 1 📌 0
7/ One example that comes to mind is R. v. Butler (1992), the Canadian Supreme Court case that led to a ban on certain kinds of violent pornography. The Court explicitly considered the question, “What do Canadians want other Canadians to be able to see in the public sphere?”
17.01.2025 16:54 — 👍 0 🔁 0 💬 1 📌 0
6/ The question – “What speech do you believe other people should see?” – has always been important in the legal regulation of speech.
17.01.2025 16:54 — 👍 0 🔁 0 💬 1 📌 0
5/ In other words, my worry about hate speech is not that I will see it; it’s that the general circulation of hate speech in the public sphere has bad consequences. It’s demeaning. It’s psychologically harmful. It creates a social environment in which physical violence is more likely. Etc.
17.01.2025 16:54 — 👍 0 🔁 0 💬 1 📌 0
4/ Now, people who believe hate speech should be regulated do not primarily believe that because they personally don’t want to see it. They also believe other people should not see it. E.g. I don’t want Muslims to be constantly bombarded with slurs.
17.01.2025 16:54 — 👍 0 🔁 0 💬 1 📌 0
2/ Decentralized moderation, labels, and feeds allow users to customize the speech they will be exposed to online. For instance, I can choose not to see hate speech, but you can choose to see it.
17.01.2025 16:46 — 👍 0 🔁 0 💬 0 📌 0
Thanks again to @beaudoin.social for this essential thread. For me, the worry with decentralization is not that it will lead to echo chambers but that it will make it harder to curb the circulation of dangerous speech.
17.01.2025 16:45 — 👍 2 🔁 0 💬 1 📌 0
There is so much opacity about moderation and algo. recommendation that it’s hard to overstate how important this is.
17.01.2025 06:43 — 👍 2 🔁 0 💬 0 📌 0
There is an interesting tension in the philosophy of moderation between decentralization (i.e. customize your own personal public sphere) and democratization (same speech rules for everyone, but made democratically). This thread about decentralization is informative!
17.01.2025 06:14 — 👍 3 🔁 0 💬 1 📌 0
"Bluesky moderation lists create echo chambers."
A short thread about decentralized moderation on Bluesky and why it changes everything.* 🧵
*Based on nearly a thousand hours spent exploring the platform's code.