In early 2024, researchers were already heavily using AI for work
- Survey of 816 verified authors via Semantic Scholar
- 81% of researchers reported using LLMs in their workflow
- Top uses: information seeking & editing
- Rare for data tasks: 69–73% never use LLMs for data cleaning or generation
The measurement problem
LLM content has risen sharply in both review and non-review papers.
Review papers do have a higher prevalence rate.
But non-review LLM papers outnumber review papers by roughly 6x.
CS.CY (Computers & Society) faces potential cuts of 50%, while CS.CV (Computer Vision) would face only 3%.
Interdisciplinary researchers, who move between cultures and write in the “borderlands,” are experts at adapting their writing. LLMs currently are not.
Private information can appear in unlikely prompts
I gave a short talk at Cornell yesterday on my science-of-science work investigating how AI is being used by researchers and how we should go about crafting policies in response.
Blanket policies are hard, privacy is important, we need more measurement.
Slides: drive.google.com/file/d/1gNTK...
04.03.2026 13:23
Abstract of article:
“ABSTRACT
Governing Artificial Intelligence (AI) is difficult, in part, because AI systems never stand still in any one place. They are usually made by private companies, hidden within proprietary infrastructures, spanning jurisdictions, behaving in ways that are difficult to predict, and talked about in messy discourses of hype and panic. I suggest here that all this dynamism and uncertainty could be tackled by understanding AI and its governance as multi-scalar phenomena. Drawing on DiCaglio's idea of a 'scalar view,' defining AI as a scalar media technology, and tracing journalism's encounters with Generative AI as scalar collisions, across practices, organizations, data, audiences, and engineering, I argue that AI governance is 'scale work', and that multi-scalar governance offers new ways to understand Generative AI and its stakes.”
New paper!
In @icsjournal.bsky.social I argue that governing #AI means “scale work”: the labour of stabilizing AI *across* relationships that are usually tackled in isolation.
I use journalismโs GenAI encounters as a case study, connecting siloed AI collisions
www.tandfonline.com/eprint/T7WWF...
05.12.2025 16:25
Join us online this Thursday!
18.11.2025 09:31
Very glad to take part in the “Humanities in Times of Geopolitical Turmoil” seminar series organised by @utrechtuniversity.bsky.social and @fabianlferrari.bsky.social.
Follow the thing AI: 20/11/2025, 15:30–16:30
@oii.ox.ac.uk
cdh.uu.nl/event/cdh-on...
15.10.2025 08:26
Happy to be part of this amazing speaker series organized by Utrecht University, especially @fabianlferrari.bsky.social. I'll talk about Latin American Critical AI studies. I'll be in such great company with @nsrnicek.bsky.social and @anavaldi.bsky.social
cdh.uu.nl/event/cdh-on...
18.06.2025 13:56
What are the lessons of social media governance for generative AI governance?
Check out the third article of our @icsjournal.bsky.social special issue by @pmnapoli.bsky.social and Suher Adi.
www.tandfonline.com/doi/abs/10.1...
11.06.2025 08:27
China shuts down AI tools during nationwide college exams
New age problems require new age solutions.
09.06.2025 14:40
If OpenAI shifts its policies, why and how do other platforms follow suit?
The second article of our special issue, written by Chris Chao Su and Ngai Keung Chan, is now online!
www.tandfonline.com/doi/full/10....
06.06.2025 12:44
Probably the most surprising thing about this confrontation is that it took more than 180 days to happen
05.06.2025 16:53
Who decides what counts as theft when AI copies your style?
Check out the first paper of our special issue on generative AI governance co-edited with @joannekuai.bsky.social!
27.05.2025 09:54
DOGE is going global. It needs to be stopped.
The extreme right is organizing for a new austerity campaign modeled on Elon Musk's destructive efforts
Elon Musk's DOGE is tearing through the US government with disastrous consequences.
But beyond its borders, the extreme right is gearing up to push its own DOGE-inspired austerity campaigns in countries around the world.
19.03.2025 15:37
It's great to see this piece published in Platforms & Society:
journals.sagepub.com/doi/10.1177/...
It nicely brings critical platform scholarship into conversation with the lit. on state capitalism and techno-colonialism, through a rich case study set in post-pandemic Greece.
28.02.2025 09:10
A narrow regulatory focus on misinformation distracts from addressing structural problems in the AI industry.
My chapter in the FEPS Progressive Yearbook 2024.
bit.ly/ai-infrastru...
02.02.2024 07:49
Together with Joanne Kuai, I'm editing a special issue on generative AI governance in Information, Communication & Society.
Deadline for abstracts: 15 February 2024
Details: bit.ly/generative-a...
20.12.2023 09:09
Our new paper in New Media & Society: "Observe, inspect, modify: Three conditions for generative AI governance"
journals.sagepub.com/doi/10.1177/...
30.11.2023 15:35
In the meantime, over the past year Freedom House found that at least 16 countries used generative AI to create content intended to mislead the public. The earliest tools were available only in English, limiting their usage around the world. At the same time, Freedom House notes that investigators in this realm have the same problem the Slovakian fact-checkers did: tools for assessing the authenticity of content posted online are limited and often inaccurate. They believe the true number of countries experimenting with synthetic media is likely higher than 16.
At least 16 countries have already experimented with using generative AI to mislead their citizens: www.platformer.news/p/how-author...
06.10.2023 00:18