
Fabian Ferrari

@fabianlferrari.bsky.social

๐ŸŒ Assistant Professor in Cultural AI, @utrechtuniversity.bsky.social ๐Ÿ” Governance of AI Infrastructure ๐ŸŒ www.fabianlferrari.com

355 Followers  |  248 Following  |  8 Posts  |  Joined: 02.10.2023

Posts by Fabian Ferrari (@fabianlferrari.bsky.social)

In early 2024, researchers were already heavily using AI for work
- Survey of 816 verified authors via Semantic Scholar
- 81% of researchers reported using LLMs in their workflow
- Top uses: information seeking & editing
- Rare for data tasks: 69–73% never use LLMs for data cleaning or generation

The measurement problem
LLM content has risen sharply in both review and non-review papers.
Review papers do have a higher prevalence rate.
But non-review LLM papers outnumber review papers ~6x.
CS.CY (Computers & Society) faces potential 50% cuts, while CS.CV (Computer Vision) would face only 3%.

Interdisciplinary researchers — who move between cultures and write in the “borderlands” — are experts at adapting their writing. LLMs currently are not.

Private information can appear in unlikely prompts

I gave a short talk at Cornell yesterday on my science-of-science work investigating how AI is being used by researchers and how we should go about crafting policies in response.

Blanket policies are hard, privacy is important, we need more measurement.

Slides: drive.google.com/file/d/1gNTK...

04.03.2026 13:23 — 👍 59    🔁 12    💬 2    📌 0
Abstract of article:

“Governing Artificial Intelligence (AI) is difficult, in part, because AI systems never stand still in any one place. They are usually made by private companies, hidden within proprietary infrastructures, spanning jurisdictions, behaving in ways that are difficult to predict, and talked about in messy discourses of hype and panic. I suggest here that all this dynamism and uncertainty could be tackled by understanding AI and its governance as multi-scalar phenomena. Drawing on DiCaglio's idea of a 'scalar view,' defining AI as a scalar media technology, and tracing journalism's encounters with Generative AI as scalar collisions - across practices, organizations, data, audiences, and engineering - I argue that AI governance is 'scale work', and that multi-scalar governance offers new ways to understand Generative AI and its stakes.”

New paper!

In @icsjournal.bsky.social I argue that governing #AI means “scale work” — the labour of stabilizing AI *across* relationships that are usually tackled in isolation.

I use journalism’s GenAI encounters as a case study, connecting siloed AI collisions

www.tandfonline.com/eprint/T7WWF...

05.12.2025 16:25 — 👍 11    🔁 4    💬 1    📌 0

Join us online this Thursday!

18.11.2025 09:31 — 👍 7    🔁 2    💬 0    📌 0

Very glad to take part in the "Humanities in Times of Geopolitical Turmoil" seminar series organised by @utrechtuniversity.bsky.social and @fabianlferrari.bsky.social.

📆 Follow the thing AI: 20/11/2025, 15:30–16:30

@oii.ox.ac.uk

cdh.uu.nl/event/cdh-on...

15.10.2025 08:26 — 👍 9    🔁 3    💬 0    📌 1

Happy to be part of this amazing speaker series organized by Utrecht University, especially @fabianlferrari.bsky.social. I'll talk about Latin American Critical AI studies. I'll be in such great company with @nsrnicek.bsky.social and @anavaldi.bsky.social

cdh.uu.nl/event/cdh-on...

18.06.2025 13:56 — 👍 11    🔁 4    💬 0    📌 0

What are the lessons of social media governance for generative AI governance?

Check out the third article of our @icsjournal.bsky.social special issue by @pmnapoli.bsky.social and Suher Adi.

www.tandfonline.com/doi/abs/10.1...

11.06.2025 08:27 — 👍 3    🔁 2    💬 0    📌 1
China shuts down AI tools during nationwide college exams. New age problems require new age solutions.

China shuts down AI tools during nationwide college exams

09.06.2025 14:40 — 👍 266    🔁 70    💬 5    📌 35
On moving fast and breaking things . . . again: social media’s lessons for generative AI governance. Generative AI systems are increasingly being employed globally, bringing with them both tremendous promise and substantial potential for harm. As is often the case, governance initiatives are laggi...

Have a new piece on social media's lessons for the governance of generative AI in Information, Communication & Society, co-authored with Suher Adi.

www.tandfonline.com/eprint/MBACG...

04.06.2025 12:47 — 👍 4    🔁 1    💬 0    📌 0

If OpenAI shifts its policies, why and how do other platforms follow suit?

The second article of our special issue, written by Chris Chao Su and Ngai Keung Chan, is now online!

www.tandfonline.com/doi/full/10....

06.06.2025 12:44 — 👍 2    🔁 1    💬 1    📌 1

Probably the most surprising thing about this confrontation is that it took more than 180 days to happen

05.06.2025 16:53 — 👍 231    🔁 31    💬 6    📌 0
The emerging reality of the OpenAI-SoftBank grand plan for data centres The financing and future of the Stargate project hailed by Trump are far from straightforward

"In Silicon Valley, some investors ask whether the AI infrastructure boom will become what fibre optic cables were to the dotcom era."

www.ft.com/content/0e24...

30.05.2025 07:37 — 👍 2    🔁 1    💬 0    📌 0

Who decides what counts as theft when AI copies your style?

Check out the first paper of our special issue on generative AI governance co-edited with @joannekuai.bsky.social!

27.05.2025 09:54 — 👍 15    🔁 2    💬 1    📌 0
Dutch parliament calls for end to reliance on US software. The Netherlands' parliament on Tuesday approved a series of motions calling on the government to reduce dependence on U.S. software companies, including by creating a cloud services platform that is under Dutch control.

The US cloud is not inevitable

www.reuters.com/world/europe...

19.03.2025 13:40 — 👍 7    🔁 3    💬 0    📌 0
DOGE is going global. It needs to be stopped. The extreme right is organizing for a new austerity campaign modeled on Elon Musk’s destructive efforts

Elon Musk’s DOGE is tearing through the US government with disastrous consequences.

But beyond its borders, the extreme right is gearing up to push their own DOGE-inspired austerity campaigns in countries around the world.

19.03.2025 15:37 — 👍 719    🔁 325    💬 27    📌 54
Sage Journals: Discover world-class research. Subscription and open access journals from Sage, the world's leading independent academic publisher.

It’s great to see this piece published in Platforms & Society:

journals.sagepub.com/doi/10.1177/...

It nicely brings critical platform scholarship into conversation with the lit. on state capitalism and techno-colonialism, through a rich case study set in post-pandemic Greece.

28.02.2025 09:10 — 👍 21    🔁 7    💬 1    📌 1

A narrow regulatory focus on misinformation distracts from addressing structural problems in the AI industry.

My chapter in the FEPS Progressive Yearbook 2024.

bit.ly/ai-infrastru...

02.02.2024 07:49 — 👍 0    🔁 0    💬 0    📌 0

Together with Joanne Kuai, I'm editing a special issue on generative AI governance in Information, Communication & Society.

Deadline for abstracts: 15 February 2024

Details: bit.ly/generative-a...

20.12.2023 09:09 — 👍 4    🔁 5    💬 0    📌 0
Post image

Our new paper in New Media & Society: "Observe, inspect, modify: Three conditions for generative AI governance"

journals.sagepub.com/doi/10.1177/...

30.11.2023 15:35 — 👍 6    🔁 0    💬 0    📌 0
In the meantime, over the past year Freedom House found that at least 16 countries used generative AI to create content intended to mislead the public. The earliest tools were available only in English, limiting their usage around the world. At the same time, Freedom House notes that investigators in this realm have the same problem the Slovakian fact-checkers did: tools for assessing the authenticity of content posted online are limited and often inaccurate. They believe the true number of countries experimenting with synthetic media is likely higher than 16.

At least 16 countries have already experimented with using generative AI to mislead their citizens: www.platformer.news/p/how-author...

06.10.2023 00:18 — 👍 137    🔁 82    💬 6    📌 3
Truepic and Hugging Face Partner to Highlight the Latest Innovations in Transparency to AI-Generated... Truepic brings two new spaces to Hugging Face, the first making C2PA Content Credentials available for developer use and the second an experimental space...

This is a REALLY BIG DEAL for ethical AI! And REALLY HARD TO EXPLAIN. Especially in a skeet.
1 - WATERMARKING FOR GEN AI!
2 - EMBEDDED "NUTRITION LABELS" IN GEN AI (does that tl;dr work?)
www.globenewswire.com/news-release... 🧵

06.10.2023 02:45 — 👍 45    🔁 14    💬 1    📌 1