
Mark Graham

@geoplace.bsky.social

Prof at the Oxford Internet Institute | Director of @towardsfairwork.bsky.social | Publications: www.markgraham.space | Studies: Digital Economies, Digital Geographies, Economic Geography, Gig Economy, Data Work, AI Production Networks, Cities | Eat the rich

3,428 Followers  |  748 Following  |  72 Posts  |  Joined: 30.01.2024

Posts by Mark Graham (@geoplace.bsky.social)

She Came Out of the Bathroom Naked, Employee Says Bank details, sex and naked people who seem unaware they are being recorded. Behind Meta’s new smart glasses lies a hidden workforce, uneasy about peering into the most intimate parts of other people’...

A striking piece on how Kenyan data workers at Sama are reviewing and annotating private data captured by Meta’s “smart glasses.”

I’ve visited the Nairobi site. Most people in Europe would be stunned by how much of their everyday data is being processed and labeled there

www.svd.se/a/K8nrV4/met...

04.03.2026 08:05 — 👍 9    🔁 1    💬 0    📌 2
Sage Journals: Discover world-class research Subscription and open access journals from Sage, the world's leading independent academic publisher.

The paper is here:

journals.sagepub.com/doi/10.1177/...

You can see and play with all the data here:
inequalities.ai

@geoplace.bsky.social

28.01.2026 13:29 — 👍 6    🔁 1    💬 0    📌 0

THE SILICON GAZE

After reading a really interesting paper from @oii.ox.ac.uk (link below), I asked ChatGPT (version 5.2) to give a ranking of countries by IQ, 'extrapolating' and 'estimating' where data was not available.

I then asked it to provide an 'approximate' heat map of the estimates.

1/2

28.01.2026 13:29 — 👍 6    🔁 3    💬 2    📌 0
‘Biased’ AI says Cambridge is harder-working than boozy Oxford Conclusion highlights the limitations of large-language models, according to the researchers, who asked ChatGPT for one-word answers which gave ‘binary’ results

New coverage from The Times on research co-authored by Prof. @geoplace.bsky.social highlights how ChatGPT demonstrates biased outputs, reflecting long-standing inequalities embedded in AI training data.

Read more: www.thetimes.com/uk/technolog...

26.01.2026 10:06 — 👍 5    🔁 2    💬 0    📌 0
OpenAI’s ChatGPT has a Western bias, study finds ChatGPT’s viewpoints are shaped by the predominantly Western, white, male developers and platform owners who built it, a study finds.

New coverage from @euronews.com on Prof. @geoplace.bsky.social's research, which finds that answers from OpenAI’s ChatGPT favour wealthy, Western countries and sideline much of the Global South.

Read more:

www.euronews.com/next/2026/01...

22.01.2026 10:26 — 👍 2    🔁 1    💬 0    📌 0

"From these empirics, we argue that bias is (...) an intrinsic feature of generative AI, rooted in historically uneven data ecologies and design choices (...) that accounts for the complex ways in which LLMs privilege certain places while rendering others invisible."
AI scares us cos it's based on us

21.01.2026 07:16 — 👍 5    🔁 1    💬 0    📌 0

'The silicon gaze: A typology of biases and inequality in LLMs through the lens of place'.

Develops "a five-part typology of bias (availability, pattern, averaging, trope, and proxy) that accounts for the complex ways in which LLMs privilege certain places while rendering others invisible."

21.01.2026 15:16 — 👍 3    🔁 1    💬 1    📌 0

Researchers Francisco Kerche, Matthew Zook and @geoplace.bsky.social show how bias emerges in ChatGPT outputs. For example, responses to queries rank Ipanema, Leblon and Lagoa as having the happiest people, while Complexo do Alemão, Complexo da Maré and Rio Comprido are rated the unhappiest. 2/4

20.01.2026 15:45 — 👍 1    🔁 1    💬 1    📌 0

The team has created a public website inequalities.ai where anyone can explore how ChatGPT rates countries, cities and neighbourhoods across a range of lifestyle indicators including food, culture and quality of life. 3/4

20.01.2026 10:17 — 👍 10    🔁 3    💬 1    📌 1

Researchers Francisco Kerche, Prof Matthew Zook and @geoplace.bsky.social find that ChatGPT reproduces global biases. For example, responses rank Brighton, London and Bristol as having the sexiest people in the UK whilst Grimsby, Accrington and Barnsley are rated lowest. More: bit.ly/4bF4K9B

20.01.2026 14:46 — 👍 1    🔁 1    💬 0    📌 0

New study from @oii.ox.ac.uk and the University of Kentucky sheds light on how bias manifests in ChatGPT outputs. For example, the London areas of Bloomsbury, Hampstead and the City of London are rated as having the smartest people, with Croydon, Tottenham and Hillingdon rated the lowest. 1/2

20.01.2026 14:46 — 👍 4    🔁 1    💬 1    📌 0
AI 'reveals' the most racist towns in the UK - Burnley tops list When asked which UK towns and cities are the most racist, ChatGPT claims that Burnley tops the list. This is followed by Bradford, Belfast, Middlesbrough, Barnsley, and Blackburn.

“ChatGPT isn't an accurate representation of the world. It rather just reflects and repeats the enormous biases within its training data” @geoplace.bsky.social @oii.ox.ac.uk speaking to @dailymail.co.uk about his new co-authored study with University of Kentucky. www.dailymail.co.uk/sciencetech/...

20.01.2026 13:51 — 👍 6    🔁 2    💬 0    📌 0

News alert! New study from @oii.ox.ac.uk and the University of Kentucky finds that ChatGPT amplifies global inequalities. Researchers find that large language models reflect historic biases in the data sets they learn from whilst shaping how people see the world. More here: bit.ly/4bF4K9B 1/4

20.01.2026 10:17 — 👍 15    🔁 8    💬 1    📌 0
AI thinks these are the most racist places in the UK ChatGPT answers often repeat negative stereotypes and reinforce prejudices, study shows

New @oii.ox.ac.uk and University of Kentucky study shows how ChatGPT amplifies global inequalities, with LLMs reflecting historic biases in training data. With thanks to @telegraph.co.uk for sharing the study. @geoplace.bsky.social
www.telegraph.co.uk/business/202...

20.01.2026 12:10 — 👍 7    🔁 2    💬 0    📌 0

Researchers Francisco Kerche, Matt Zook and @geoplace.bsky.social find that responses generated by ChatGPT consistently rate wealthier, Western regions as ‘better’, ‘smarter’, ‘happier’ and ‘more innovative’. 2/4

20.01.2026 10:17 — 👍 2    🔁 1    💬 1    📌 0

Place is not a neutral category in AI systems. Our findings show how historical and institutional patterns of documentation become legible as common sense in LLM outputs.

You can explore all of our data and create your own maps at inequalities.ai

20.01.2026 10:11 — 👍 3    🔁 0    💬 0    📌 0

One recurring issue is the use of proxies: quantifiable stand-ins (rankings, lists, awards) used to answer questions that are not straightforwardly measurable. This tends to advantage already-visible places.

20.01.2026 10:11 — 👍 2    🔁 0    💬 1    📌 0

We used forced-choice prompts to elicit comparative judgements about places. This makes latent preferences and stereotypes easier to detect than in open-ended responses.

20.01.2026 10:11 — 👍 1    🔁 0    💬 1    📌 0
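A forced-choice prompt of the kind described in the post above can be sketched as a small helper that builds a binary comparison question and maps the model's reply back onto one of the two options. This is an illustrative sketch, not the study's exact protocol: the prompt wording, function names, and parsing rules are all assumptions.

```python
def build_forced_choice_prompt(place_a, place_b, attribute):
    """Build a binary comparison prompt that forces the model to pick one place."""
    return (
        "Answer with exactly one of the two options and nothing else.\n"
        f"Which place has {attribute}: {place_a} or {place_b}?"
    )

def parse_choice(response, place_a, place_b):
    """Map a free-text reply onto one of the two options; None if ambiguous."""
    text = response.strip().lower()
    hit_a = place_a.lower() in text
    hit_b = place_b.lower() in text
    if hit_a and not hit_b:
        return place_a
    if hit_b and not hit_a:
        return place_b
    return None  # both, neither, or a refusal: drop from the tally

print(build_forced_choice_prompt("Oxford", "Cambridge", "the hardest-working people"))
print(parse_choice("Cambridge.", "Oxford", "Cambridge"))  # -> Cambridge
```

Forcing a one-word, either/or answer is what makes latent preferences countable: an open-ended reply could hedge, but a binary choice always yields a tallyable outcome (or a detectable refusal).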

The paper develops a typology of five recurrent biases in LLM place representations: availability, pattern, averaging, trope, and proxy. The maps illustrate how these surface across regions.

20.01.2026 10:11 — 👍 1    🔁 0    💬 1    📌 0

A large share of place-based answers in LLMs appear to be shaped by uneven visibility in the underlying data. This is particularly evident for places that are sparsely documented online.

20.01.2026 10:11 — 👍 1    🔁 0    💬 1    📌 0

We introduce the term “silicon gaze” to describe patterned inequalities in how LLMs represent place. The paper sets out a typology and maps the resulting spatial distributions.

20.01.2026 10:11 — 👍 1    🔁 0    💬 1    📌 0

Our new paper audits ChatGPT’s place-based judgements using 20 million pairwise comparisons. We find systematic geographic biases in how places are described and evaluated.

journals.sagepub.com/doi/10.1177/... (authors: Francisco W. Kerche, Matthew Zook, Mark Graham)

20.01.2026 10:11 — 👍 14    🔁 10    💬 1    📌 2
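One simple way to turn millions of pairwise outcomes like these into a place ranking is to score each place by its share of comparisons won. The paper's actual aggregation may differ; this is a minimal sketch with toy data standing in for real model outputs.

```python
from collections import defaultdict

def win_rate_ranking(comparisons):
    """comparisons: iterable of (winner, loser) pairs from forced-choice answers.
    Returns places sorted by the share of their comparisons won, descending."""
    wins = defaultdict(int)
    totals = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        totals[winner] += 1
        totals[loser] += 1
    return sorted(totals, key=lambda place: wins[place] / totals[place], reverse=True)

# Toy data, illustrative only -- not results from the study.
pairs = [("London", "Grimsby"), ("Bristol", "Grimsby"), ("London", "Bristol")]
print(win_rate_ranking(pairs))  # -> ['London', 'Bristol', 'Grimsby']
```

At the scale of 20 million comparisons, even small systematic tilts in which place "wins" accumulate into stable rankings, which is what makes the geographic bias measurable rather than anecdotal.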
Artificial intelligence (AI) and employment Artificial intelligence (AI) is becoming more common in UK workplaces. How is it being used, and what are the impacts on job opportunities and working conditions?

The OII's Prof. @geoplace.bsky.social contributed to POST UK's report on AI and employment, which considers the factors driving adoption and the issues that might make this challenging.

Read the full report here: post.parliament.uk/research-bri...

08.01.2026 11:51 — 👍 2    🔁 1    💬 0    📌 0
Workers powering the AI industry face terrible conditions, but they shouldn’t have to – interview Mark Graham, founder of the Fairwork initiative, notes that most of the human labour in the AI supply chain is data work in low-income countries done under poor conditions.

Workers powering the AI industry face terrible conditions, but they shouldn’t have to.

Interview with me in Yahoo News: www.yahoo.com/news/article...

16.12.2025 11:37 — 👍 4    🔁 2    💬 0    📌 0

Fairwork’s AI Supply Chain Assessment: Appen report is now LIVE.

- 15 changes were implemented by Appen during the assessment period.

- The report also highlights areas for further progress, including pay, worker protections, and transparency.

Read the full report here: fair.work/en/fw/public...

16.12.2025 10:25 — 👍 2    🔁 1    💬 0    📌 0

If AI is going to be fair, its supply chains have to be too. That’s why we’ve launched Fairwork Certification, working with lead firms to push higher standards all the way down their chains. Details here:

fair.work/wp-content/u...

10.12.2025 10:59 — 👍 0    🔁 0    💬 0    📌 0

A new @towardsfairwork.bsky.social assessment of Sama is out.

It looks at the people doing the invisible data work that keeps AI running for companies in sectors from driverless cars to online retail.

Read the scorecard and our report here:

🔗 fair.work/en/fw/public...

10.12.2025 10:59 — 👍 3    🔁 2    💬 1    📌 0
"The working conditions are brutal": on the hidden toilers behind ChatGPT. Behind every AI image lies manual labour: people in Kenya, India and the Philippines toil for hours on starvation wages to make machines smarter. In conversation, Mark Graham reveals the...

ICYMI: New interview with @geoplace.bsky.social @towardsfairwork.bsky.social speaking to German newspaper Der Freitag about the hidden human cost of the AI revolution. www.freitag.de/autoren/der-...

12.11.2025 10:35 — 👍 1    🔁 1    💬 0    📌 0
"The working conditions are brutal": on the hidden toilers behind ChatGPT. Behind every AI image lies manual labour: people in Kenya, India and the Philippines toil for hours on starvation wages to make machines smarter. In conversation, Mark Graham reveals the...

AI sounds like the future, but it runs on invisible human labour. For @freitag.de I spoke with @geoplace.bsky.social about his book "Feeding the Machine" and about what AI really costs. Recommended reading for anyone who wants to look behind the scenes of the "AI companies"

10.11.2025 08:39 — 👍 123    🔁 74    💬 5    📌 6

I’ve got a new chapter out with Adam Badger, Alessio Bertolini, Fabian Ferrari & Funda Ustek Spilda in the forthcoming Handbook of Labour Geography. In it, we unpack the Fairwork action-research method.

Read: www.elgaronline.com/edcollchap/b...

30.10.2025 15:32 — 👍 1    🔁 0    💬 0    📌 0