
Manuel Tonneau

@manueltonneau.bsky.social

PhD candidate @oiioxford.bsky.social NLP, Computational Social Science @WorldBank manueltonneau.com

714 Followers  |  554 Following  |  55 Posts  |  Joined: 20.09.2023

Latest posts by manueltonneau.bsky.social on Bluesky

Home - Somewhere On Earth Productions SOMEWHERE ON EARTH PRODUCTIONS: We are here to connect technology and business to people and new possibilities.

ICYMI: Listen to @manueltonneau.bsky.social @oii.ox.ac.uk's interview with the SOEP podcast talking about his new research into hate speech, online platforms and disparities in content moderation across different European countries. Available here: bit.ly/4ntsiRU

01.10.2025 13:46 — 👍 1    🔁 1    💬 0    📌 1

🚨Hiring a fully funded (3.5 years) PhD for the @ldnsocmedobs.bsky.social to research social media and politics. Candidates should have quantitative/computational skills and/or be interested in content curation/moderation. UK home candidates only unfortunately. www.royalholloway.ac.uk/media/hquftp...

29.09.2025 17:21 — 👍 4    🔁 14    💬 1    📌 3

📣 New Preprint!
Have you ever wondered what the political content in LLMs' training data is? What political opinions are expressed? What is the proportion of left- vs right-leaning documents in the pre- and post-training data? Do they correlate with the political biases reflected in models?

29.09.2025 14:54 — 👍 45    🔁 14    💬 2    📌 0

Social media feeds today are optimized for engagement, often leading to misalignment between users' intentions and technology use.

In a new paper, we introduce Bonsai, a tool to create feeds based on stated preferences, rather than predicted engagement.

arxiv.org/abs/2509.10776

16.09.2025 13:24 — 👍 153    🔁 46    💬 5    📌 7
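Bonsai's actual implementation isn't shown in the post; as a toy illustration of the stated-preference idea, hypothetical posts can be ranked by user-declared topic weights instead of a predicted-engagement score. All posts, topics, scores, and weights below are invented:

```python
# Toy contrast between an engagement-ranked feed and a stated-preference
# feed. Illustrative sketch only, NOT Bonsai's actual algorithm.
posts = [
    {"id": 1, "topics": {"politics"}, "predicted_engagement": 0.9},
    {"id": 2, "topics": {"science"},  "predicted_engagement": 0.4},
    {"id": 3, "topics": {"sports"},   "predicted_engagement": 0.7},
]

# Hypothetical user-stated preferences: weights the user chose explicitly,
# not weights inferred from click behavior.
stated_prefs = {"science": 1.0, "politics": 0.2, "sports": 0.0}

def preference_score(post):
    """Score a post by summing the user's stated weight for each topic."""
    return sum(stated_prefs.get(t, 0.0) for t in post["topics"])

engagement_feed = sorted(posts, key=lambda p: -p["predicted_engagement"])
preference_feed = sorted(posts, key=preference_score, reverse=True)

print([p["id"] for p in engagement_feed])  # [1, 3, 2]
print([p["id"] for p in preference_feed])  # [2, 1, 3]
```

The same three posts come back in a different order: the engagement ranking surfaces the high-engagement politics post first, while the stated-preference ranking surfaces the science post the user actually asked for.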
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses. 
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking, i.e. incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31–50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.


🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825

12.09.2025 10:33 — 👍 265    🔁 96    💬 6    📌 20
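A toy simulation makes the failure mode concrete: under a true null effect, a hypothetical annotator whose false-positive rate differs across groups produces a spuriously "significant" group difference. This sketch is not the paper's pipeline; the sample size and error rates are invented for illustration:

```python
import math
import random

def two_prop_pvalue(k1, n1, k2, n2):
    """Two-sided p-value for a two-proportion z-test (pooled SE)."""
    p_pool = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (k1 / n1 - k2 / n2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
n = 2000  # documents per group (invented)

# Ground truth: the positive-label rate is 10% in both groups,
# so there is no true group difference (a null effect).
truth_a = [random.random() < 0.10 for _ in range(n)]
truth_b = [random.random() < 0.10 for _ in range(n)]

# Hypothetical LLM annotator whose false-positive rate differs by group
# (e.g. it over-flags one group's writing style). Rates are invented.
def annotate(labels, false_positive_rate):
    return [y or (random.random() < false_positive_rate) for y in labels]

llm_a = annotate(truth_a, false_positive_rate=0.02)
llm_b = annotate(truth_b, false_positive_rate=0.08)

p_truth = two_prop_pvalue(sum(truth_a), n, sum(truth_b), n)
p_llm = two_prop_pvalue(sum(llm_a), n, sum(llm_b), n)

print(f"p-value on ground-truth labels: {p_truth:.3f}")
print(f"p-value on LLM labels:          {p_llm:.3g}")
```

The group-correlated annotation error manufactures a difference in observed rates, so the test on LLM labels rejects the null even though the ground-truth labels contain no effect. That is the "flip" the thread describes.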

1/ 🚨 Big news 🚨 today we’re launching Tech for Open Minds (TOM) at @DukeU — a global program exploring how technology shapes open-mindedness, humility & polarization 🌍🧠
🔗 https://sicss.io/stories/2025-08-18

29.08.2025 16:06 — 👍 10    🔁 2    💬 1    📌 0
Social platforms' 'language blind spots' in content moderation bring brand safety concerns A new study of recently mandated transparency data under the EU Digital Services Act found that millions of users of social platforms in the region post in languages without any human moderation.

@themedialeader.bsky.social highlights new insights from @manueltonneau.bsky.social, @deeliu97.bsky.social, Prof. Ralph Schroeder + Prof. @computermacgyver.bsky.social, who have found that 16mn EU-based X users “do not have moderators for their language.”

uk.themedialeader.com/social-platf...

29.08.2025 12:47 — 👍 3    🔁 2    💬 1    📌 0

thanks a lot for the repost!

29.08.2025 07:19 — 👍 1    🔁 0    💬 0    📌 0

Millions of users are posting to social media and other platforms in languages with zero moderators, even within the EU.

That's the topline finding from an impressive new working paper leveraging newly mandated transparency data under the DSA led by @manueltonneau.bsky.social osf.io/preprints/so...

28.08.2025 09:41 — 👍 9    🔁 5    💬 0    📌 1

I don't have Portuguese roots but my parents liked the name, and I lived in Lisbon for a few months, so can speak um bocado ("a bit") :)

28.08.2025 09:50 — 👍 1    🔁 0    💬 1    📌 0

Thank you and great point! We did not but I suppose we could find the info in the DSA Transparency Database, at least for Spanish and Portuguese. The issue I foresee though is that we'll only have info for moderation in EU countries and nothing on Latin America. Still, worth a look, thanks again!

28.08.2025 09:48 — 👍 1    🔁 0    💬 0    📌 0

Thank you very much for the repost :)

28.08.2025 09:38 — 👍 1    🔁 0    💬 0    📌 0

thank you very much :)

28.08.2025 09:37 — 👍 0    🔁 0    💬 1    📌 0

@oii.ox.ac.uk @weizenbauminstitut.bsky.social @umassamherst.bsky.social @umich.edu

28.08.2025 08:44 — 👍 1    🔁 0    💬 0    📌 0

Finally tagging scholars whose work inspired this piece: @monaelswah.bsky.social @farhana-shahid.bsky.social @nicp.bsky.social @cgoanta.bsky.social Your feedback is most welcome!

28.08.2025 08:44 — 👍 1    🔁 0    💬 1    📌 0

This would also not have been possible without data collection efforts led by @jurgenpfeffer.bsky.social and without @claesdevreese.bsky.social @aurman21.bsky.social who made me aware of the DSA moderator count data on here a while back, thank you all!

28.08.2025 08:44 — 👍 3    🔁 0    💬 2    📌 0

Had a blast working on this paper with my wonderful coauthors @deeliu97.bsky.social @antisomniac.bsky.social @ze.vin Ralph @ethanz.bsky.social @computermacgyver.bsky.social

28.08.2025 08:44 — 👍 3    🔁 0    💬 1    📌 0
OII | OII researchers propose recommendations for effective data governance in light of the EU’s Digital Services Act OII researchers propose a series of recommendations for effective data access and data governance in light of the EU’s Digital Services Act.

We also issue a recommendation: platforms and regulators should improve transparency by reporting moderator counts with context (e.g. content volume per language), ensure consistent reporting over time, and extend data coverage beyond EU languages.

28.08.2025 08:44 — 👍 1    🔁 0    💬 1    📌 0

So what? The main implication is that speakers of underserved languages likely receive less protection from online harms. Our analysis also nuances existing concerns: while Global South languages are consistently underserved, allocation for other non-English languages varies widely across platforms.

28.08.2025 08:44 — 👍 1    🔁 0    💬 1    📌 0

For languages with moderators, we normalize mod counts by content volume per language and find that platforms allocate moderation workforce disproportionately relative to content volume, with languages primarily spoken in the Global South (Spanish, Portuguese, Arabic) consistently underserved.

28.08.2025 08:44 — 👍 0    🔁 0    💬 1    📌 0
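The normalization described in the post is a simple ratio: moderators per unit of content volume in each language. A minimal sketch with invented numbers (not the paper's reported figures) looks like:

```python
# Hypothetical inputs: moderator counts per language (as disclosed in DSA
# transparency reports) and daily post volume per language, in millions.
# All numbers below are invented for illustration.
moderators = {"English": 2000, "Spanish": 100, "Arabic": 30}
daily_posts_millions = {"English": 40.0, "Spanish": 20.0, "Arabic": 8.0}

# Volume-adjusted capacity: moderators per million daily posts.
capacity = {
    lang: moderators[lang] / daily_posts_millions[lang]
    for lang in moderators
}

for lang, ratio in sorted(capacity.items(), key=lambda kv: -kv[1]):
    print(f"{lang}: {ratio:.2f} moderators per million daily posts")
```

Raw moderator counts alone would make the best-staffed language look well served; dividing by content volume is what exposes the disproportionate allocation the thread describes.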

We also quantify the number of EU-based users whose national language does not have moderators, and we’re talking about millions of users posting in languages with zero moderators.

28.08.2025 08:44 — 👍 1    🔁 0    💬 1    📌 0

Taking Twitter/X as an example, we then show that languages subject to moderation blind spots are generally widely spoken on social media, representing an average of 31% of all tweets during a one-day period in countries where they are the official language.

28.08.2025 08:44 — 👍 1    🔁 0    💬 1    📌 0

We first look at language coverage and find that while larger platforms such as YouTube and Meta have moderators in most EU languages, smaller platforms such as X and Snapchat have several language blind spots with no human moderators, particularly in Southern, Eastern and Northern Europe.

28.08.2025 08:44 — 👍 3    🔁 1    💬 1    📌 0
Frances Haugen: ‘I never wanted to be a whistleblower. But lives were in danger’ The woman whose revelations have rocked Facebook tells how spending time with her mother, a priest, motivated her to speak out

Concerns about underinvestment in non-English moderation have long circulated via whistleblower leaks, but they were never quantified. The EU’s Digital Services Act is a turning point, requiring platforms to disclose moderator counts per language, making cross-lingual comparison possible.

28.08.2025 08:44 — 👍 2    🔁 0    💬 1    📌 0

Social media platforms operate globally, but do they allocate human moderation equitably across languages?

Our new WP shows the answer is no:

-Millions of users post in languages with zero moderators
-Where mods exist, mod count relative to content volume varies widely across langs

osf.io/amfws

28.08.2025 08:44 — 👍 18    🔁 11    💬 2    📌 5

Very cool piece by my colleague @antisomniac.bsky.social on how YouTube is used differently across languages. Worth a read!

13.08.2025 19:31 — 👍 3    🔁 0    💬 0    📌 0

πŸ† Thrilled to share that our HateDay paper has received an Outstanding Paper Award at #ACL2025

Big thanks to my wonderful co-authors: @deeliu97.bsky.social, Niyati, @computermacgyver.bsky.social, Sam, Victor, and @paul-rottger.bsky.social!

Thread 👇 and data available at huggingface.co/datasets/man...

31.07.2025 08:05 — 👍 29    🔁 7    💬 2    📌 1

Creators pour years into building a following, but in a growing underground market, you can simply buy accounts and inherit their audience.

In our new pre-print, we find this practice of repurposing accounts to be prevalent and consequential on YouTube!

arxiv.org/abs/2507.16045

30.07.2025 20:29 — 👍 9    🔁 2    💬 1    📌 0

New! Heading to #ACL2025NLP today? Hear from @oii.ox.ac.uk researchers presenting new research and sharing recent findings which aim to help address inequalities in natural language processing models. 1/4

28.07.2025 09:18 — 👍 2    🔁 1    💬 1    📌 1

Join @manueltonneau.bsky.social as he presents his co-authored paper ‘HateDay: Insights from a Global Hate Speech Dataset Representative of a Day on Twitter’ this afternoon. Mon 28 July, 14.00-15.00. Hall A. 2/4

28.07.2025 09:18 — 👍 2    🔁 1    💬 1    📌 1
