
Ellen Judson

@ellenejudson.bsky.social

Disinformation investigator, tech policy nerd, philosophy, human rights, climate. Views my own. She/her. Also on https://www.linkedin.com/in/ellen-judson-75918241/

2,933 Followers  |  623 Following  |  101 Posts  |  Joined: 17.11.2024

Latest posts by ellenejudson.bsky.social on Bluesky

Memorandum of Understanding between UK and OpenAI on AI opportunities

🧡 The govt and OpenAI announced a Memorandum of Understanding yesterday on 'AI Opportunities'

There are lots of words like 'growth', 'development' and 'trust' - and, interestingly, 'sovereign': "this partnership will support the UK’s goal to build sovereign AI in the UK"

www.gov.uk/government/p...

22.07.2025 08:26 — 👍 16    🔁 16    💬 5    📌 6

genuine question: when civil society’s very ability and permission to function is under attack in the US, what can we do on this side of the pond to support? 🙏🕊️

30.01.2025 18:39 — 👍 3    🔁 0    💬 1    📌 0

Rev. Mariann Edgar Budde to Trump: "I ask you to have mercy upon the people in our country who are scared now. There are gay, lesbian, and transgender people in Democratic, Republican, and independent families, some who fear for their lives ... and the vast majority of immigrants are not criminals"

21.01.2025 18:49 — 👍 32668    🔁 8700    💬 1534    📌 2505

wouldn’t it be lovely if, in return for all this guaranteed access to data and energy, AI companies were required to demonstrate that they meet high standards in how they develop and what they produce

14.01.2025 08:02 — 👍 7    🔁 0    💬 0    📌 0
Hateful Conduct | Transparency Center: Meta regularly publishes reports to give our community visibility into community standards enforcement, government requests and internet disruptions

It’s crushing reading the specific carve-outs added to the Meta policy allowing specific forms of hate to be directed at LGBT+ people, and the cuts to the policy making it easier to demonise protected groups

(see 7 Jan tab for the tracked changes)

transparency.meta.com/en-gb/polici...

08.01.2025 18:39 — 👍 3    🔁 0    💬 0    📌 0

In the meantime, I hope the UK resists the siren call of these so-called 'free speech' measures from across the pond and doesn't try to follow suit, but works to ensure that the Online Safety regime genuinely protects users' rights #OnlineSafetyAct

07.01.2025 20:53 — 👍 3    🔁 0    💬 0    📌 0

We'll see what the specific policy changes are in the next few weeks - but the outlook doesn't look good

07.01.2025 20:53 — 👍 2    🔁 0    💬 1    📌 0
Meta slow to review hate speech on Senate candidate pages | Global Witness: As the US election approaches, a Global Witness investigation finds that Meta's moderation systems are struggling to keep pace with hate speech

and B), that leaves a sizeable chunk of serious harms unaddressed. Especially since 'for less severe policy violations, we’re going to rely on someone reporting an issue before we take any action.' See this investigation I worked on - www.globalwitness.org/en/campaigns...

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

Of course, the rebuttal is that for 'really bad' things (illegal content), they will still crack down on it. But A) accurate identification of all and only illegal content is very, very difficult bsky.app/profile/elle...

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

Elsewhere in the announcement, the initial move to more content moderation is blamed on societal and political pressure. The implication is that they should have resisted societal pressure in order to uphold fundamental principles. But that apparently doesn't apply to resisting political pressure to reduce moderation...

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

that's the point of human rights constraining (at least in theory) political decision-making

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

If attacks on and incitement of violence against immigrants and LGBT+ people are becoming more common, platforms should be working even harder to protect them - not saying 'well, that's just what people think nowadays'

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

This change smacks of a seriously insidious kind of majoritarianism: that because lots of people are now saying a thing, that thing ought to be allowed to be widely said.

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

My heart goes out to those in the US, especially immigrants and trans people, who look likely to be worst affected by these policy changes.

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0
We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement. We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate.

Similarly, to move from 'our rules are prone to over-enforcement' (even if true) to 'so we should scrap rules' is a stretch.

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

Fact-checking is by no means perfect. But it does mean you can easily give users more context and information, and demoting things that fail a fact-check limits reach and virality WITHOUT content takedown, helping to preserve freedom of expression

07.01.2025 20:53 — 👍 2    🔁 0    💬 1    📌 0
Text reads: "Over time we ended up with too much content being fact checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor."

The leaps made in this announcement are glaring. If speech constitutes legitimate debate, then any consequence - including 'intrusive labels' - is apparently the same thing as censorship

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0
Mark Zuckerberg (@zuck) on Threads: 5/ Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.

But that, of course, doesn't help with perception of bias, which is the real worry with the new administration (key word in this post - 'concern') www.threads.net/@zuck/post/D...

07.01.2025 20:53 — 👍 1    🔁 0    💬 1    📌 0

And if your fact-checkers really, truly are too biased - then you should invest in more, train more, support more, not just abandon the whole concept of independent fact-checking! (I'm seeing a theme here...)

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

And the claims about bias are just a smokescreen. Everyone has bias - not just experts, but also social media users and social media platforms. Independent fact-checkers are part of an information ecosystem that helps to reduce bias by verifying information.

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

So much of the Meta announcement is saying 'we make loads and loads of mistakes'. It's such a bizarre justification for not meeting your responsibilities as a social media platform: 'we make too many mistakes trying to protect people well enough, so we're just not going to'?

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0
More than 140 Kenya Facebook moderators diagnosed with severe PTSD. Exclusive: Diagnoses part of lawsuit being brought against parent company Meta and outsourcer Samasource Kenya

- the need for which is demonstrated in this story from last month about the horrific experiences of Facebook moderators www.theguardian.com/media/2024/d...

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

And yes, in some cases over-moderation is a problem with content moderation. But a good way to address that is to improve your content moderation systems - invest more, train more, support your moderators, build on local expertise - rather than just stopping. Which takes money and commitment -

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0
Facebook admits it was used to 'incite offline violence' in Myanmar. Facebook says it is tackling problems highlighted in an independent report on its role in ethnic violence.

A reminder that this is the real world - and the real harms - we are talking about: www.bbc.co.uk/news/world-a...

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

Meta describes some of the consequences of their vision of free expression as necessarily 'messy', 'good, bad and ugly'. This is a classic move to discredit people who raise the alarm about online harms as just not being able to deal with the nuance of the real world.

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

The idea that these changes to content moderation and fact-checking will better challenge existing power structures would be laughable if it weren't so horrifying. The marketplace of ideas, in which the best ideas just float to the top, is a myth that serves, well, those in power

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0

Meta's announcements today, rolling back fact-checking and lowering content moderation standards to try to politically position themselves as an X-like opponent of so-called 'censorship', are both depressing and alarming

07.01.2025 20:53 — 👍 0    🔁 0    💬 1    📌 0
More Speech and Fewer Mistakes | Meta: We're ending our third party fact-checking program and moving to a Community Notes model.

more speech and a whole bunch of mistakes: a thread about.fb.com/news/2025/01...

07.01.2025 20:53 — 👍 4    🔁 1    💬 1    📌 0

But the worry remains that grounding these frameworks so narrowly in reducing users' encounters with illegal content hinders their ability to compel platforms to make serious and fundamental change. /fin

16.12.2024 20:10 — 👍 0    🔁 0    💬 0    📌 0

In sum: on first read-through, the clarity is to be applauded, and the risk assessments look like they should be fairly thorough - although we won't get to read them. Mitigation measures are less clear (Ofcom highlights that these Codes are iterative, though, which is encouraging)

16.12.2024 20:10 — 👍 0    🔁 0    💬 1    📌 0
