AI and Fraternity, Abeba Birhane, AI Accountability Lab
I envision a future where human dignity, justice, peace, kindness, care, respect, accountability, and rights and freedoms serve as the north stars that guide AI development and use. Realising these ideals can't happen without intentional, tireless work, dialogue, and confrontation of ugly realities – even when they are uncomfortable to deal with.

This starts with separating hype from reality. Pervasive narratives portray AI as a magical, fully autonomous entity approaching God-like omnipotence and omniscience. In reality, audits of AI systems reveal that they consistently fail to deliver on grandiose promises and suffer from all kinds of shortcomings – issues often swept under the rug. AI in general, and GenAI in particular, encodes and exacerbates historical stereotypes, entrenches harmful societal norms, and amplifies injustice. A robust body of evidence demonstrates that – from hiring and welfare allocation to medical care allocation and everything in between – the deployment of AI is widening inequity, disproportionately impacting people at the margins of society and concentrating power and influence in the hands of a few. Major actors – including Google, Microsoft, Amazon, Meta, and OpenAI – have willingly aligned with authoritarian regimes and proactively abandoned their pledges to fact-check, prevent misinformation, respect diversity and equity, and refrain from using AI for weapons development, all while retaliating against critique.

The aforementioned vision can't and won't be realised without confrontation of these uncomfortable facts. This is precisely why we need active resistance and refusal of unreliable and harmful AI systems; clearly laid out regulation and enforcement; and shepherding of the AI industry, by responsible bodies, towards transparency and accountability. "Machine agency" must be in service of human agency and empowerment – a coexistence that is not a continuation of modern tech corporations' inequality-widening practices.
so I am one of the 12 people (including the "godfathers of AI") who will be at the Vatican this September for a full two-day working group on the Future of AI
here is my Vatican-approved short provocation on 'AI and Fraternity' for the working group
04.08.2025 11:31 — 👍 523 🔁 158 💬 31 📌 15
This was a big topic at #Metascience2025 and should be top of mind for anyone who cares about #equity in education, healthcare, climate, you name it.
03.08.2025 19:40 — 👍 9 🔁 2 💬 1 📌 0
Join us in signing and sharing this petition to prevent layoffs of over 400 people (both academic & professional services staff) at Lancaster University. We are all at risk and it's a scary time.
30.07.2025 20:14 — 👍 38 🔁 42 💬 2 📌 1
on the one hand, we should want people to change their minds so we can stop what's happening now. On the other hand, there need to be costs for the ghoulish behavior that got us here, as these people absolutely will do this again given half a chance
28.07.2025 12:34 — 👍 2373 🔁 630 💬 46 📌 25
Maybe some dots to join between the event above and UCL topping the new repression league table? bsky.app/profile/feli...
26.07.2025 13:43 — 👍 0 🔁 0 💬 0 📌 0
this reminds me of the period in american politics when pundits would demand that any black politician of note — really, any black person in politics with a national audience — “condemn farrakhan”
24.07.2025 01:01 — 👍 13104 🔁 2040 💬 477 📌 138
If you listen, you’ll hear “now” being used like this all over the place in desperate attempts at self-exculpation. Getting your dissent on record in the hope you’ve made it under the wire. But “now” makes quite clear how you’ve endorsed genocidal acts all the way through
24.07.2025 06:51 — 👍 73 🔁 36 💬 4 📌 0
Caitlín Doherty, Everything Else — Sidecar
In Dubai.
So many reasons why there is a boycott of Dubai. (Simply don’t go! Cancel any ticket you have!)
Here Caitlín Doherty writes of her shame at having visited @newleftreview.bsky.social newleftreview.org/sidecar/post...
20.07.2025 16:26 — 👍 4 🔁 2 💬 0 📌 0
AI for Good [Appearance?]
Reflections on the last-minute censorship of my keynote at the AI for Good Summit 2025
A short blog post detailing my experience of censorship at the AI for Good Summit, with links to both the original and censored versions of my slides and a link to my talk
aial.ie/blog/2025-ai...
11.07.2025 14:01 — 👍 130 🔁 79 💬 3 📌 11
yes this is an example of tech regression (unusual for beeb). This was a thing ten years ago?
11.07.2025 11:00 — 👍 1 🔁 0 💬 1 📌 0
Screen shot of web page reading:
From authority to similarity: how Google transformed its knowledge infrastructure using computer vision
Authors
Warren Pearce, Maud Borie, Laura Bruschi, Daniele Dell'Orto, Matthew Hanchard, Elena Pilipets, Alessandro Quets, and Zijing Xu
Data visualisation showing the ranking of Google Images results for climate change in Australia, Brazil, China, Mexico, Netherlands and Nigeria. Some images, such as 'earth in hand' and 'landscape', appear multiple times across different countries.
Data visualisation showing Google Images search results for biodiversity loss in Australia, Brazil, China, Mexico, Netherlands and Nigeria. As with climate change, some images appear multiple times across different countries, such as scientific charts and 'lonely animal', but there is slightly more diversity than for climate change.
Data visualisation showing that most search results from Google Images are different from those returned by Google Search. This applies for both climate change and biodiversity loss, and across all six countries.
How has computer vision changed Google's knowledge infrastructure? 🤔
*Extremely* happy that our pre-print is now up at SocArXiv. Our amazing team dig into Google Images, the #AI technology driving it, and the impacts for users.
osf.io/preprints/so...
#STS #digitalmethods @digitalmethods.net
11.07.2025 07:29 — 👍 1 🔁 0 💬 0 📌 0
This is an absolutely fantastic listen! Thanks @fotis-tsiroukis.bsky.social and @sabinaleonelli.bsky.social
10.07.2025 21:57 — 👍 2 🔁 0 💬 1 📌 0
Thanks! Yes agreed. I don't think that the episode was particularly representative of the conference. Now that I reflect on it, I think that it was AI driving much of the disagreement at the conference.
09.07.2025 16:02 — 👍 1 🔁 0 💬 0 📌 0
My pleasure!
08.07.2025 20:45 — 👍 1 🔁 0 💬 0 📌 0
a couple of hours before my keynote, I went through an intense negotiation with the organisers (for over an hour) where we went through my slides and had to remove anything that mentioned 'Palestine' or 'Israel' and replace 'genocide' with 'war crimes'
1/
08.07.2025 09:58 — 👍 1346 🔁 652 💬 37 📌 63
my keynote is happening in a few mins. register here to stream it
aiforgood.itu.int/summit25/reg...
08.07.2025 08:55 — 👍 113 🔁 23 💬 8 📌 6
I'd heard a little about this incident; this is a great summary of what happened, and the issues around it. Bizarre that, at a meeting like this, such a question was ruled out of bounds. But perhaps the worst sin at any science-related meeting is to create a sense of embarrassment.
04.07.2025 13:17 — 👍 10 🔁 1 💬 3 📌 0
When the QUESTION for a speaker gets applause 👏 from the audience at an academic conference,
But the chair tries to shut it down, saying "This is not the place to discuss this,"
And the chair gets booed by the audience,
You know you've got something worth discussing. Recommended reading.
04.07.2025 13:57 — 👍 7 🔁 1 💬 0 📌 0
At the EPC Congress this year, Chi Obwura's keynote went seamlessly from 'technology can cause harm as well as good, social media is very worrying' to 'here are the 10 things we're doing to accelerate AI adoption'. Not one of those 10 was governance or ethics.
05.07.2025 14:00 — 👍 2 🔁 1 💬 0 📌 1
This is a great summary of a very peculiar interaction. I wonder how Geraint Rees is coping with the new OfS duty that requires him and his UCL colleagues to “support constructive dialogue on contentious subjects”.
04.07.2025 15:29 — 👍 3 🔁 1 💬 0 📌 0
Yes. Captures a telling moment about the current politics of tech and AI in particular. If you are at UCL you will also read this and nod in recognition
04.07.2025 17:08 — 👍 4 🔁 1 💬 1 📌 0
Thanks! Agreed it would not have changed the bigger picture, but the accountability is important. I note that in her talk she said "I am not a scientist", so this was perhaps an unfamiliar situation.
05.07.2025 08:50 — 👍 1 🔁 0 💬 1 📌 0
They weren't directly given the opportunity, but it could easily have happened if they had been a bit proactive. Bit surprised they didn't, as they would have regained control of the situation. Perhaps they were not expecting criticism.
05.07.2025 07:00 — 👍 0 🔁 0 💬 0 📌 0
SWOyate citizen born in Dakota homelands. Prof & Canada Research Chair in Indigenous Peoples, Technoscience, and Society, UAlberta, Faculty of Native Studies. YEG/MSP/LAX.
https://kimtallbear.substack.com; https://www.youtube.com/@ktallbear
UMass Amherst, Initiative for Digital Public Infrastructure, Global Voices, Berkman Klein Center. Formerly Center for Civic Media, MIT Media Lab.
climate *zeitgeist* reporter for @washingtonpost.com. DM for Signal.
▪︎ Reading, writing & thinking about the moral foundation of scholarship
▪︎ Author of Doing Good Social Science: http://bit.ly/3EgFA2z
▪︎ More about my work: immersiveresearch.co.uk
▪︎ DM for speaking/workshop requests
Professor. Sociologist. NYTimes Opinion Columnist. Books: THICK, LowerEd. Forthcoming: 1) Black Mothering & Daughtering and 2) Mama Bears.
Beliefs: C.R.E.A.M. + the internet ruined everything good + bring back shame.
“I’m just here so I don’t get fined.”
Sheffield's new quality newspaper — sent via email to over 30,000 readers. Join for free at the link below. Always looking for new writers and new stories — DM us.
www.sheffieldtribune.co.uk
University of St Andrews; Executive Chair, Arts & Humanities Research Council; International Champion and Creative Industries Sector Champion UK Research and Innovation
All views my own, but none of the poetry.
ResearchScientist @oii.ox.ac.uk UniversityOfOxford:
EmergingTechnologies, MentalHealth/Wellbeing/Learning,
MetaScience, ResearchIntegrity, MixedMethods.
People&Planet, SocialEquity, ActiveTravel, Sustainability.
FirstGen ImposterInAcademia
🏴/🇮🇪 + 🇩🇪/🇯🇲 - Academic | Centre-left | DIY | Judoka | Parent | Suburbanite | Time poor | Work in progress | Owns opinions | He/him - https://sciences.social/@MatthewHanchard
Assistant Professor of Sociology, NYU. Core Faculty, CSMaP. Research Fellow Oxford Sociology. Computational social science, Methods, Conflict, Communication. Webpage: cjbarrie.com
Senior writer at @chronicle.com, writing about scholarship, scholars, and society / stephanie.lee@chronicle.com / Signal: stephaniemlee.07 / stephaniemlee.com / San Francisco
Climate & AI Lead @HuggingFace, TED speaker, WiML board member, TIME AI 100 (She/her/Dr/🦋)
Disputatious librarian. Does copyleft and schol comms stuff. Obsessive sports watcher.
scholarly communication specialist at Utrecht University library
I do interdisciplinary research to understand the opportunities and challenges in personal data use: the technical, the social, and the policy and regulatory […]
[bridged from https://mastodon.me.uk/@drdrmc on the fediverse by https://fed.brid.gy/ ]
Dean for Research Culture @UniversityLeeds.
Professor of Language Development @SDDS_UK; IckleProject; @leedscdu lead.
Science writer and author of books including Bright Earth, The Music Instinct, Beyond Weird, How Life Works.
Co-founder of Project Implicit, Society for Improving Psychological Science, and the Center for Open Science; Professor at the University of Virginia
PhD: ecofascism, far-right ecologies, securitisation, migration, enviro politics, IPE @sheffielduni.bsky.social
Director: @weareopus.bsky.social @nowthenmag.bsky.social
Trustee: Arts on the Run and Migration Matters Festival.
She/her/bunny hugger/AuDHD
(Science) journalist and author.
Opinion editor, Research Professional News.
Books: Lost Animals; People Will Talk; In the Beat of a Heart.
www.johnwhitfield.co.uk