Yet more coverage on our @science.org paper on AI swarms:
www.theguardian.com/technology/2...
@fil.bsky.social
Researcher on social media misinformation and manipulation, director of the Observatory on Social Media (OSoMe.iu.edu, pronounced “awesome”) at Indiana University
Latest working paper 🧪 w/ @shalmoli-ghosh.bsky.social and @matthewdeverna.com shows that AI porn and NSFW deepfakes targeting women are being commoditized
A Marketplace for AI-Generated Adult Content and Deepfakes
Preprint: doi.org/10.48550/arX...
More coverage of our recent @science.org paper warning about AI swarms:
www.inc.com/chloe-aiello...
Don't forget Mirta's awesome talk tomorrow!!
27.01.2026 19:41 — 👍 0 🔁 0 💬 0 📌 0
Coverage of our recent paper in Science
www.wired.com/story/ai-pow...
If you don't have access, here is a preprint: doi.org/10.31219/osf...
26.01.2026 01:33 — 👍 8 🔁 1 💬 0 📌 0
Our latest paper in @science.org warns about malicious AI swarms, agents capable of adaptive influence campaigns at scale. We already observed some in the wild (picture). AI is a real threat to democracy.
#SciencePolicyForum #ScienceResearch 🧪
Paper: doi.org/10.1126/scie...
We’re excited to welcome Mirta Galesic as our next OSoMe Awesome Speaker!
🗓 Wednesday, Jan 28, 2026
⏰ 12:00–1:00 PM ET
🎙 Dynamics of belief networks
Register: iu.zoom.us/meeting/regi...
Check your sources, we have been saying...
cyberscoop.com/the-quiet-wa...
Just had a meeting with @fil.bsky.social and the Observatory on Social Media at Indiana University (@osome.iu.edu) to discuss the potential for using @furryli.st and the AT Protocol at large to study how to design healthier social media at scale. Really excited to see what comes of it!
13.01.2026 23:01 — 👍 13 🔁 2 💬 0 📌 0
What are the dynamic effects of fact-checking on the behavior of those who circulate misinformation and on the spread of false news? In this paper, we provide causal evidence on these questions, building on a unique partnership with the Agence France Presse (AFP), the world's largest fact-checking organization and a partner of Facebook's Third-Party Fact-Checking Program. Over an 18-month period (December 2021-June 2023), we collected information on the stories proposed by fact-checkers during the daily editorial meetings, some of which were ultimately fact-checked while others, despite being ex ante "similar", were left aside. Using two complementary Difference-in-Differences approaches, one at the story level and the other at the post level (within fact-checked stories), we show that fact-checking reduces the circulation of misinformation on Facebook by approximately 8%, an effect driven entirely by stories rated as "False." Furthermore, we provide evidence of behavioral responses: the publication of a fact-check more than doubles the deletion of posts in the fact-checked stories, and users whose posts appear in fact-checked stories become less likely to share misinformation in the future. While our results clearly confirm the effectiveness of fact-checking, we provide policy recommendations to further strengthen its impact.
When FB introduced its fact-checking program, it claimed (w/out evidence, despite our asking) that it reduced exposure to debunked content by 80%. When Meta killed fact-checking in the US, Zuck claimed (without evidence) that it didn't work. Both lies. The truth? ~8%:
dx.doi.org/10.2139/ssrn...
Handbook of Computational Social Science cover and authors
ICYMI -- Delighted that the Handbook of Computational Social Science is finally out. Amazing cast of coauthors, with special thanks to @tahayasseri.bsky.social for leading the effort. Happy Holidays!
www.elgaronline.com/edcollbook/b...
More on this: www.nytimes.com/2025/12/08/t...
15.12.2025 02:13 — 👍 2 🔁 0 💬 0 📌 0
In the time since I first posted this thread, this TikTok spam network has grown in size and added new types of repetitive content to its lineup.
Additionally, many of the accounts have pivoted to hawking dubious dietary supplements.
OpenAI's Sora 2 ultrarealistic (but fake) AI videos used by Russian disinformation operations, who could have predicted it?!?
www.nbcnews.com/tech/social-...
2024-25 has been an academic year of challenges and opportunities. We have been working on exciting problems, such as the exploitation of AI for manipulation at scale and tools to promote a healthier information environment. Read all about it in our latest annual report:
osome.iu.edu/research/blo...
Pretty much all the worst-case scenarios we have been predicting through our research in the last 10-15 years are coming true. Sad but excellent year-in-review by @craigsilverman.bsky.social and @mantzarlis.com
10.12.2025 07:21 — 👍 11 🔁 4 💬 0 📌 0
Increasingly Aligned Russian and Chinese Disinformation Threatens U.S. Citizens
www.americansecurityproject.org/increasingly...
The European Commission announced its first non-compliance decision under the Digital Services Act, fining X €120 million for deceptive practices and lack of transparency. 1/
ec.europa.eu/commission/p...
Don't let anyone tell you that the Commission's DSA enforcement against X is about speech or censorship.
That would, indeed, be interesting. But this is just the EU enforcing some normal, boring laws that would get bipartisan support in the U.S. (I bet similar bills *have* had that support.) 1/
What censorship of science looks like
www.reuters.com/world/us/tru...
We apologize for the interruption of the OSoMe Awesome speaker today due to an IU blackout. We will restart as soon as power comes back.
03.12.2025 17:44 — 👍 1 🔁 0 💬 0 📌 0
A great collaboration with @erfansam.bsky.social, @matthewdeverna.com, @luceriluc.bsky.social, @fil.bsky.social, @frapierri.bsky.social and @silviagiordano.bsky.social
01.12.2025 12:14 — 👍 3 🔁 1 💬 0 📌 0
It represents the first detailed study of how a social platform transitions from being invitation-only to open to the public, with a focus on user activities and the evolution of the platform.
01.12.2025 12:14 — 👍 4 🔁 1 💬 1 📌 0
I am pleased to announce that our work “A longitudinal analysis of misinformation, polarization, and toxicity on Bluesky after its public launch” has been accepted at #OSNEM.
The paper is an extension of our previous work presented at #ASONAM.
Don't miss it: www.sciencedirect.com/science/arti...
Even though our website is down, registration for our next OSoMe Awesome Speaker is open!
📅 Dec 3 @ 12pm ET
🎤 James Evans (UChicago)
📖 Information Laundering: How Misinformation Gets Cleaned and Dirty Across Digital and Policy Ecosystems
Register directly through Zoom: iu.zoom.us/meeting/regi...
X's new feature exposes some fake political accounts with foreign origins
apnews.com/article/x-lo...
Check out the paper for all the details: arxiv.org/abs/2511.18749
Thanks to my collaborators @yang3kc.bsky.social @harryyan.bsky.social and @fil.bsky.social .
Exploring GPT citation patterns...
Cited sources were mostly fact-checking outlets, mainstream news, and government sites.
They have high reliability scores (NewsGuard) and tend to align with the political left.
Reasoning didn’t help much.
Web search improved GPT models, but Gemini saw no benefit, likely because it failed to return sources for most queries.
GPT models often return citations, and many of them point to the PolitiFact article containing the fact check.
Again, curated info helps a lot.