
Warren Pearce

@warrenpearce.bsky.social

Academic. Digital methods. Climate change. Expertise. Imagery. Occasional music posts. #AcademicSky #STS https://www.sheffield.ac.uk/socstudies/people/academic-staff/warren-pearce

552 Followers  |  210 Following  |  87 Posts  |  Joined: 14.09.2023

Latest posts by warrenpearce.bsky.social on Bluesky

Demis Hassabis on our AI future: ‘It’ll be 10 times bigger than the Industrial Revolution – and maybe 10 times faster’ The head of Google’s DeepMind says artificial intelligence could usher in an era of ‘incredible productivity’ and ‘radical abundance’. But who will it benefit? And why does he wish the tech giants had...

Dude has some magic beans for you.

“the energy required is going to be a lot for AI systems, but the amount we’re going to get back, even just narrowly for climate [solutions] from these models, it’s going to far outweigh the energy costs.”

Here's the thing./1

www.theguardian.com/technology/2...

04.08.2025 22:00 — 👍 10    🔁 5    💬 2    📌 0
AI and Fraternity, Abeba Birhane, AI Accountability Lab  

I envision a future where human dignity, justice, peace, kindness, care, respect, accountability, and rights and freedoms serve as the north stars that guide AI development and use. Realising these ideals can’t happen without intentional tireless work, dialogues, and confrontations of ugly realities – even if they are uncomfortable to deal with. This starts with deciphering hype from reality.

Pervasive narratives portray AI as a magical, fully autonomous entity approaching a God-like omnipotence and omniscience. In reality, audits of AI systems reveal a consistent failure to deliver on grandiose promises and suffer from all kinds of shortcomings, issues often swept under the rug. AI in general, and GenAI in particular, encodes and exacerbates historical stereotypes, entrenches harmful societal norms, and amplifies injustice.

A robust body of evidence demonstrates that — from hiring, welfare allocation, medical care allocation to anything in between — deployment of AI is widening inequity, disproportionately impacting people at the margins of society and concentrating power and influence in the hands of few. Major actors—including Google, Microsoft, Amazon, Meta, and OpenAI—have willingly aligned with authoritarian regimes and proactively abandoned their pledges to fact-check, prevent misinformation, respect diversity and equity, refrain from using AI for weapons development, while retaliating against critique.

The aforementioned vision can’t and won’t happen without confrontation of these uncomfortable facts. This is precisely why we need active resistance and refusal of unreliable and harmful AI systems; clearly laid out regulation and enforcement; and shepherding of the AI industry towards transparency and accountability of responsible bodies. "Machine agency" must be in service of human agency and empowerment, a coexistence that isn't a continuation of modern tech corporations’ inequality-widening,


so I am one of the 12 people (including the “god-fathers of AI”) that will be at the Vatican this September for a two full-day working group on the Future of AI

here is my Vatican approved short provocation on 'AI and Fraternity' for the working group

04.08.2025 11:31 — 👍 523    🔁 158    💬 31    📌 15

This was a big topic at #Metascience2025 and should be top of mind for anyone who cares about #equity in education, healthcare, climate, you name it.

03.08.2025 19:40 — 👍 9    🔁 2    💬 1    📌 0

Join us in signing and sharing this petition to prevent layoffs of over 400 people (both academic & professional services staff) at Lancaster University. We are all at risk and it's a scary time.

30.07.2025 20:14 — 👍 38    🔁 42    💬 2    📌 1
Communications (Chapter 26) - A Critical Assessment of the Intergovernmental Panel on Climate Change - December 2022

After this, we focused more specifically on the IPCC and the problem of 'appropriated' communication (like 12 years), which you have presented some nice examples of in this thread! doi.org/10.1017/9781009082099.032

30.07.2025 11:33 — 👍 0    🔁 0    💬 0    📌 0

on the one hand we should want people to change their mind so we can stop what's happening now. On the other hand there need to be costs for the ghoulish behavior that got us here, as these people absolutely will do this again given half a chance

28.07.2025 12:34 — 👍 2373    🔁 630    💬 46    📌 25
The role of mundane resistance in the spectacular failure of the smart home - Murray Goulden, Lewis Cameron, 2025 A decade on from the launch of Amazon's Alexa – the smart home's breakout product – the vision of semi-automated, pervasively sensed domesticity remai...

My new paper with Lewis Cameron on 'The Role of Mundane Resistance in the Spectacular Failure of the Smart Home' is now available from Big Data & Society.

journals.sagepub.com/doi/10.1177/...

28.07.2025 08:39 — 👍 3    🔁 2    💬 1    📌 1
Brits can get around Discord's age verification thanks to Death Stranding's photo mode, bypassing the measure introduced with the UK's Online Safety Act. We tried it and it works—thanks, Kojima The UK's new act blocks access to adult content without identification. Turns out, you only need a copy of Death Stranding and a phone to get around it.

Credit card age verification provided by KWS for @bsky.app is very broken, and I do not wish to provide a face scan.

On the other hand, the face scan option is apparently not hard to fool www.pcgamer.com/hardware/bri...

27.07.2025 09:53 — 👍 0    🔁 0    💬 1    📌 0

Maybe some dots to join between the event above and UCL topping the new repression league table? bsky.app/profile/feli...

26.07.2025 13:43 — 👍 0    🔁 0    💬 0    📌 0

this reminds me of the period in american politics when pundits would demand that any black politician of note — really, any black person in politics with a national audience — “condemn farrakhan”

24.07.2025 01:01 — 👍 13104    🔁 2040    💬 477    📌 138

If you listen, you’ll hear “now” being used like this all over the place in desperate attempts at self-exculpation. Getting your dissent on record in the hope you’ve made it under the wire. But “now” makes quite clear how you’ve endorsed genocidal acts all the way through

24.07.2025 06:51 — 👍 73    🔁 36    💬 4    📌 0
Caitlín Doherty, Everything Else — Sidecar In Dubai.

So many reasons why there is a boycott of Dubai. (Simply don’t go! Cancel any ticket you have!)

Here Caitlín Doherty writes of her shame at having visited @newleftreview.bsky.social newleftreview.org/sidecar/post...

20.07.2025 16:26 — 👍 4    🔁 2    💬 0    📌 0
Hello World! Hello, welcome to my new Making Science Public Blog. I started blogging on the old Making Science Public blog maintained by the University of Nottingham in 2012. We have transferred all the hundred…

I have taken the plunge and launched my own new 'Making Science Public' blog on WordPress. Here is my first post makingsciencepublic.com/2025/07/12/h...

18.07.2025 11:23 — 👍 39    🔁 13    💬 2    📌 1
AI for Good [Appearance?] Reflections on the last minute censorship of my keynote at the AI for Good Summit 2025

A short blogpost detailing my experience of censorship at the AI for Good Summit, with links to my talk and to both the original and censored versions of my slides

aial.ie/blog/2025-ai...

11.07.2025 14:01 — 👍 130    🔁 79    💬 3    📌 11

yes this is an example of tech regression (unusual for beeb). This was a thing ten years ago?

11.07.2025 11:00 — 👍 1    🔁 0    💬 1    📌 0
Screen shot of web page reading: 
From authority to similarity: how Google transformed its knowledge infrastructure using computer vision
Authors
Warren Pearce, Maud Borie, Laura Bruschi, Daniele Dell'Orto, Matthew Hanchard, Elena Pilipets, Alessandro Quets, and Zijing Xu


Data visualisation showing the ranking of Google Images results for climate change in Australia, Brazil, China, Mexico, Netherlands and Nigeria. Some images such as 'earth in hand' and 'landscape' appear multiple times across different countries


Data visualisation showing Google Images search results for biodiversity loss in Australia, Brazil, China, Mexico, Netherlands and Nigeria. As with climate change, some images appear multiple times across different countries, such as scientific charts and 'lonely animal', but there is slightly more diversity than for climate change


Data visualisation showing that most search results from Google Images are different than those from Google Search. This applies for both climate change and biodiversity loss, and across all six countries.


How has computer vision changed Google's knowledge infrastructure? 🤔

*Extremely* happy that our pre-print now up at SocArXiv. Our amazing team dig into Google Images, the #AI technology driving it, and the impacts for users.

osf.io/preprints/so...

#STS #digitalmethods @digitalmethods.net

11.07.2025 07:29 — 👍 1    🔁 0    💬 0    📌 0

This is an absolutely fantastic listen! Thanks @fotis-tsiroukis.bsky.social and @sabinaleonelli.bsky.social

10.07.2025 21:57 — 👍 2    🔁 0    💬 1    📌 0

Thanks! Yes, agreed. I don't think the episode was particularly representative of the conference. Now that I reflect on it, I think it was AI driving much of the disagreement at the conference.

09.07.2025 16:02 — 👍 1    🔁 0    💬 0    📌 0

My pleasure!

08.07.2025 20:45 — 👍 1    🔁 0    💬 0    📌 0

a couple of hours before my keynote, I went through an intense negotiation with the organisers (for over an hour) where we went through my slides and had to remove anything that mentions 'Palestine' 'Israel' and replace 'genocide' with 'war crimes'

1/

08.07.2025 09:58 — 👍 1346    🔁 652    💬 37    📌 63

my keynote happening in a few mins. registration here to stream it

aiforgood.itu.int/summit25/reg...

08.07.2025 08:55 — 👍 113    🔁 23    💬 8    📌 6
Sam Altman’s AI Empire Relies on Brutal Labor Exploitation Firms like OpenAI are developing AI in a way that has deeply ominous implications for workers in many different fields. The current trajectory of AI can only be changed through direct confrontation wi...

Following this 👆

…new Jacobin article clarifies the massive exploitation and extraction built into #AI’s current trajectory 📉

This needs to be front and centre of any public discussion of the technology’s potential benefits 🧐 jacobin.com/2025/07/altm...

#AcademicSky

07.07.2025 20:44 — 👍 4    🔁 3    💬 0    📌 1

I'd heard a little about this incident; this is a great summary of what happened, and the issues around it. Bizarre that, at a meeting like this, such a question was ruled out of bounds. But perhaps the worst sin at any science-related meeting is to create a sense of embarrassment.

04.07.2025 13:17 — 👍 10    🔁 1    💬 3    📌 0

When the QUESTION for a speaker gets applause 👏 from the audience at an academic conference,

But the chair tries to shut it down, saying "This is not the place to discuss this,"

And the chair gets booed by the audience,

You know you've got something worth discussing. Recommended reading.

04.07.2025 13:57 — 👍 7    🔁 1    💬 0    📌 0

At the EPC Congress this year, Chi Obwura's keynote went seamlessly from 'technology can cause harm as well as good, social media is very worrying' to 'here are the 10 things we're doing to accelerate AI adoption'. Not one of those 10 was governance or ethics.

05.07.2025 14:00 — 👍 2    🔁 1    💬 0    📌 1

This is a great summary of a very peculiar interaction. I wonder how Geraint Rees is coping with the new OfS duty that requires him and his UCL colleagues to “support constructive dialogue on contentious subjects”.

04.07.2025 15:29 — 👍 3    🔁 1    💬 0    📌 0

Yes. Captures a telling moment about the current politics of tech and AI in particular. If you are at UCL you will also read this and nod in recognition

04.07.2025 17:08 — 👍 4    🔁 1    💬 1    📌 0

Thanks! Agreed it would not have changed bigger picture, but the accountability is important. I note that in her talk she said “I am not a scientist”, so this was perhaps an unfamiliar situation.

05.07.2025 08:50 — 👍 1    🔁 0    💬 1    📌 0

They weren’t directly given the opportunity, but it could easily have happened if they had been a bit proactive. Bit surprised they didn’t, as they would have regained control of the situation. Perhaps they were not expecting criticism.

05.07.2025 07:00 — 👍 0    🔁 0    💬 0    📌 0
