Gabriel Geiger

@gabrielgeiger.bsky.social

Investigative journalist at Lighthouse Reports. Algorithms & surveillance. https://www.lighthousereports.com/team/gabriel-geiger/

720 Followers  |  142 Following  |  69 Posts  |  Joined: 15.12.2023  |  2.0324

Latest posts by gabrielgeiger.bsky.social on Bluesky

Said this in the other place and perhaps worth repeating here: this is just such a careful and almost brutally honest analysis of the ways that our efforts to "fix" things at the technical artifact level can fundamentally fall short of making meaningful system-wide impact. Lots to learn here.

12.06.2025 02:42 — 👍 18    🔁 7    💬 1    📌 0

Thanks Deb! Your work has been an inspiration!

12.06.2025 16:44 — 👍 1    🔁 0    💬 0    📌 0

It was a huge pleasure working with @eileenguo.bsky.social @jeroenvanraalte.bsky.social @jusbraun.bsky.social @asilverman.bsky.social @evaconstantaras.bsky.social

11.06.2025 19:00 — 👍 4    🔁 0    💬 0    📌 0
Preview
Amsterdam wanted to make welfare fairer and more efficient with AI. It turned out differently. The government has repeatedly stumbled with algorithms meant to combat benefits fraud. The municipality of Amsterdam wanted to do it all differently, but discovered: an ethical algorithm is an il...

And in Dutch with partner @trouw.nl www.trouw.nl/verdieping/a...

11.06.2025 19:00 — 👍 2    🔁 0    💬 1    📌 0
Preview
Inside Amsterdam's high-stakes experiment to create fair welfare AI The Dutch city thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can these programs ever be fair?

Read what happened in English with partner @technologyreview.com www.technologyreview.com/2025/06/11/1...

11.06.2025 19:00 — 👍 5    🔁 3    💬 1    📌 0

Amsterdam followed all of these Responsible AI principles, invested hundreds of thousands of euros, consulted experts and stakeholders, ran bias audits, and yet their system still failed. We set out to understand why.

11.06.2025 19:00 — 👍 5    🔁 2    💬 1    📌 0

Responsible AI is also becoming a lucrative industry. The Big 5 consultancies are now offering (and monetising) Ethical AI audits, alongside a cottage industry of start-ups. Responsible AI has produced a slew of frameworks and checklists, many of which have never been meaningfully tested.

11.06.2025 19:00 — 👍 3    🔁 0    💬 1    📌 0

This story unfolded alongside a growing trend: "Responsible AI", a constellation of think tanks, academics, non-profits, and multinational institutions purporting to make algorithmic systems fair, accountable, and transparent.

11.06.2025 19:00 — 👍 4    🔁 1    💬 1    📌 0

We also interviewed officials, data scientists, politicians, experts, lawyers, and the welfare recipients who would be scored by the system.

11.06.2025 19:00 — 👍 3    🔁 0    💬 1    📌 0

So over the past two years, we've been patiently looking over Amsterdam's shoulder. @lighthousereports.com and our partners at @technologyreview.com and @trouw.nl gained unprecedented technical access to the city's system.

11.06.2025 19:00 — 👍 3    🔁 0    💬 1    📌 0

A lot of coverage in this space already has the answer before it begins. @jusbraun.bsky.social is definitely not someone who thinks that way, and after pestering me constantly with late-night phone calls, he convinced me that pursuing this story was worthwhile.

11.06.2025 19:00 — 👍 6    🔁 0    💬 1    📌 0

When I first heard about the city of Amsterdam's attempt to build an ethical welfare fraud detection model, I didn't want to pursue the story. At the time, my justification was that my energy was better spent investigating more problematic deployments where the harms are clearest.

11.06.2025 19:00 — 👍 4    🔁 1    💬 1    📌 0

I've spent a lot of time investigating shitty AI cynically used to pursue vulnerable people in welfare systems. When I talk about this work, I invariably get asked: is it possible to make these systems fair?

11.06.2025 19:00 — 👍 10    🔁 7    💬 1    📌 0
Preview
Inside Amsterdam's high-stakes experiment to create fair welfare AI The Dutch city thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can these programs ever be fair?

New from me @gabrielgeiger.bsky.social + Justin-Casimir Braun:

Amsterdam believed that it could build a #predictiveAI for welfare fraud that would ALSO be fair, unbiased, & a positive case study for #ResponsibleAI. It didn't work.

Our deep dive into why: www.technologyreview.com/2025/06/11/1...

11.06.2025 17:04 — 👍 139    🔁 72    💬 6    📌 23

Delighted to see this project shortlisted for the Sigma awards alongside a host of other incredible data journalism projects from around the world.

12.05.2025 11:36 — 👍 6    🔁 1    💬 0    📌 0

So happy that our project «Inequality for the law» is on the shortlist. 🏆 Lucky to have been collaborating with the talented people at @lighthousereports.com. @elenadebre.bsky.social @gabrielgeiger.bsky.social @jusbraun.bsky.social

09.05.2025 18:19 — 👍 8    🔁 3    💬 1    📌 1
AI Spotlight Series - AI Spotlight Training For Southeastern European Journalists - May 1-2, 2025 Thessaloniki, Greece - Instructors: Karen Hao and Gabriel Geiger - Journalists from Albania, Bosnia and Herzegovina, Bulgaria, Croatia, Greece, Kosovo, Montenegro, North Macedonia, Romania, Serbia, Slovenia, and Turkey are eligible to apply to this in-person training in Greece.


The Pulitzer Center and #iMEdD are partnering in a two-day AI Spotlight Series in-person training on reporting on AI!

We are looking for Southeastern European journalists to join coaches @karenhao.bsky.social and @gabrielgeiger.bsky.social in #Greece.

Apply now!
👉 bit.ly/AISptSG25

05.03.2025 14:54 — 👍 5    🔁 3    💬 0    📌 0
Video thumbnail

On this episode of #Backlight, award-winning reporters @karenhao.bsky.social and @gabrielgeiger.bsky.social explain how to report on #AI.

07.03.2025 11:28 — 👍 6    🔁 2    💬 1    📌 0
AI Spotlight Series - AI Spotlight Training For Southeastern European Journalists - May 1-2, 2025 Thessaloniki, Greece - Instructors: Karen Hao and Gabriel Geiger.


The Pulitzer Center, in partnership with #iMEdD, is seeking applications from journalists based in Southeastern Europe to attend a two-day, in-person training program on AI reporting led by coaches @karenhao.bsky.social and @gabrielgeiger.bsky.social.

Apply now! 👉 bit.ly/AISpotGT2

24.02.2025 14:51 — 👍 1    🔁 1    💬 1    📌 0

If you're a reporter looking to take on AI accountability investigations, consider giving this a listen!

17.02.2025 16:59 — 👍 4    🔁 1    💬 0    📌 0

Few people in the algorithmic accountability space come with as unique a perspective and portfolio as Soizic!

04.02.2025 18:58 — 👍 2    🔁 0    💬 1    📌 0
Post image

That doesn't stop the agency from blasting its fraud estimates out to the public this week. Nor does it stop the Swedish media from regurgitating them, albeit now with a little asterisk section at the bottom citing our reporting :)

03.02.2025 14:21 — 👍 0    🔁 0    💬 0    📌 0

What we do know is that convictions for welfare fraud are rare. In 2022, the agency referred 1,686 cases to the public prosecutor for suspected welfare fraud. The result? 166 convictions. Of course, there may be other reasons for this. But the point is that nobody knows the scale of welfare fraud.

03.02.2025 14:21 — 👍 0    🔁 0    💬 1    📌 0

Fair enough, but why has the agency chosen this definition? In an email, the agency refused to justify why it chose this definition of fraud, writing that the release of this information would "make it easier for those who want to commit fraud." xD

03.02.2025 14:21 — 👍 0    🔁 0    💬 1    📌 0

Of course, both of these are based on nothing. As an expert in SvD pointed out: "They have decided that anyone who has at least two 'incorrect' days in their applications is cheating. Another definition is that anyone who makes a mistake when it's sunny outside is cheating."

03.02.2025 14:21 — 👍 0    🔁 0    💬 1    📌 0
Post image

This two-day threshold has large ramifications for its fraud estimates. If you set the threshold at four days, the estimate of "fraud" would drop dramatically.

03.02.2025 14:21 — 👍 0    🔁 0    💬 1    📌 0

Instead, they assume that everyone who has more than two incorrect days in their application has done so intentionally and fraudulently, without ever actually determining whether this is true.

03.02.2025 14:21 — 👍 0    🔁 0    💬 1    📌 0

We looked into the methodology behind the estimate, which is based on random investigations. We found that while these investigations do check for mistakes, they do not check whether those mistakes are intentional.

03.02.2025 14:21 — 👍 0    🔁 0    💬 1    📌 0

But is it true? With SvD, we took a closer look. www.svd.se/a/JblmJX/for...

03.02.2025 14:21 — 👍 0    🔁 0    💬 1    📌 0

This number, and the accompanying image of extensive fraud, has been regurgitated continually by Swedish media and highlighted in government reports. It has also driven regulatory changes that have handed the social security agency new powers in the name of combating fraud, including its use of AI.

03.02.2025 14:21 — 👍 0    🔁 0    💬 1    📌 0
