Fil Menczer

@fil.bsky.social

Researcher on social media misinformation and manipulation, director of the Observatory on Social Media (OSoMe.iu.edu, pronounced “awesome”) at Indiana University

7,193 Followers  |  529 Following  |  357 Posts  |  Joined: 08.05.2023

Latest posts by fil.bsky.social on Bluesky

Experts warn of threat to democracy from ‘AI bot swarms’ infesting social media
Misinformation technology could be deployed at scale to disrupt 2028 US presidential election, AI researchers say

Yet more coverage on our @science.org paper on AI swarms:

www.theguardian.com/technology/2...

29.01.2026 23:03 — 👍 8    🔁 5    💬 1    📌 0

Latest working paper 🧪 w/ @shalmoli-ghosh.bsky.social and @matthewdeverna.com shows that AI porn and NSFW deepfakes targeting women are being commoditized

A Marketplace for AI-Generated Adult Content and Deepfakes

Preprint: doi.org/10.48550/arX...

28.01.2026 23:21 — 👍 8    🔁 5    💬 0    📌 0
Experts Are Warning That ‘AI Swarms’ Could Spark Disruptions to Democracy
Artificial intelligence is a powerful business tool, but also packs the potential for serious civic harm.

More coverage of our recent @science.org paper warning about AI swarms:
www.inc.com/chloe-aiello...

27.01.2026 19:52 — 👍 1    🔁 0    💬 0    📌 1

Don't forget Mirta's awesome talk tomorrow!!

27.01.2026 19:41 — 👍 0    🔁 0    💬 0    📌 0
AI-Powered Disinformation Swarms Are Coming for Democracy
Advances in artificial intelligence are creating a perfect storm for those seeking to spread disinformation at unprecedented speed and scale. And it’s virtually impossible to detect.

Coverage of our recent paper in Science

www.wired.com/story/ai-pow...

26.01.2026 20:00 — 👍 3    🔁 2    💬 0    📌 0

If you don't have access, here is a preprint: doi.org/10.31219/osf...

26.01.2026 01:33 — 👍 8    🔁 1    💬 0    📌 0

Our latest paper in @science.org warns about malicious AI swarms, agents capable of adaptive influence campaigns at scale. We already observed some in the wild (picture). AI is a real threat to democracy.
#SciencePolicyForum #ScienceResearch 🧪
Paper: doi.org/10.1126/scie...

26.01.2026 01:08 — 👍 98    🔁 54    💬 2    📌 5

We’re excited to welcome Mirta Galesic as our next OSoMe Awesome Speaker!

🗓 Wednesday, Jan 28, 2026
⏰ 12:00–1:00 PM ET
🎙 Dynamics of belief networks

Register: iu.zoom.us/meeting/regi...

15.01.2026 18:53 — 👍 3    🔁 2    💬 0    📌 2
The quiet way AI normalizes foreign influence
Americans are learning to “trust the citations” in AI-generated answers—but AI doesn’t reward credibility, it rewards access.

Check your sources, we have been saying...
cyberscoop.com/the-quiet-wa...

19.01.2026 14:59 — 👍 4    🔁 1    💬 0    📌 1

Just had a meeting with @fil.bsky.social and the Observatory on Social Media at Indiana University (@osome.iu.edu) to discuss the potential for using @furryli.st and the AT Protocol at large to study how to design healthier social media at scale. Really excited to see what comes of it!

13.01.2026 23:01 — 👍 13    🔁 2    💬 0    📌 0
What are the dynamic effects of fact-checking on the behavior of those who circulate misinformation and on the spread of false news? In this paper, we provide causal evidence on these questions, building on a unique partnership with the Agence France Presse (AFP), the world's largest fact-checking organization and a partner of Facebook's Third-Party Fact-Checking Program. Over an 18-month period (December 2021-June 2023), we collected information on the stories proposed by fact-checkers during the daily editorial meetings, some of which were ultimately fact-checked while others, despite being ex ante "similar", were left aside. Using two complementary Difference-in-Differences approaches, one at the story level and the other at the post level (within fact-checked stories), we show that fact-checking reduces the circulation of misinformation on Facebook by approximately 8%, an effect driven entirely by stories rated as "False." Furthermore, we provide evidence of behavioral responses: the publication of a fact-check more than doubles the deletion of posts in the fact-checked stories, and users whose posts appear in fact-checked stories become less likely to share misinformation in the future. While our results clearly confirm the effectiveness of fact-checking, we provide policy recommendations to further strengthen its impact.
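The two-by-two logic behind a difference-in-differences estimate like the one in this abstract can be sketched in a few lines. This is a hedged illustration only: the numbers below are made up, and the "treated" (fact-checked) vs. "control" (left-aside) group labels are illustrative stand-ins for the paper's actual story-level design, not its data.

```python
# Minimal difference-in-differences (DiD) sketch with hypothetical numbers.
# Treated = stories that were fact-checked; control = ex ante similar
# stories proposed at the same editorial meetings but left aside.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD = change in the treated group minus change in the control group."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical average shares per story, before and after the fact-check:
shares = {
    "treated_pre": 100.0, "treated_post": 85.0,   # fact-checked stories
    "control_pre": 100.0, "control_post": 93.0,   # left-aside stories
}

effect = did_estimate(**shares)
print(effect)  # -8.0: treated circulation fell 8 more (per 100) than the control trend
```

Subtracting the control group's change nets out whatever would have happened to circulation anyway, which is why the design needs the left-aside stories to be "similar" ex ante.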

When FB introduced its fact-checking program, it claimed (w/out evidence, despite our asking) that it reduced exposure to debunked content by 80%. When Meta killed fact-checking in the US, Zuck claimed (without evidence) that it didn't work. Both lies. The truth? ~8%:
dx.doi.org/10.2139/ssrn...

08.01.2026 20:32 — 👍 9    🔁 2    💬 0    📌 0
Handbook of Computational Social Science cover and authors

ICYMI -- Delighted that the Handbook of Computational Social Science is finally out. Amazing cast of coauthors, with special thanks to @tahayasseri.bsky.social for leading the effort. Happy Holidays!

www.elgaronline.com/edcollbook/b...

26.12.2025 23:46 — 👍 15    🔁 7    💬 0    📌 0
A.I. Videos Have Flooded Social Media. No One Was Ready.

More on this: www.nytimes.com/2025/12/08/t...

15.12.2025 02:13 — 👍 2    🔁 0    💬 0    📌 0

In the time since I first posted this thread, this TikTok spam network has grown in size and added new types of repetitive content to its lineup.

Additionally, many of the accounts have pivoted to hawking dubious dietary supplements.

14.12.2025 19:37 — 👍 198    🔁 96    💬 2    📌 3
As war with Russia drags on, ultrarealistic AI videos attempt to portray Ukrainian soldiers in peril
A series of AI-generated deepfakes and videos, many made with OpenAI's Sora, appears to show Ukrainian soldiers apologizing to the Russian people and blaming their government for the war.

OpenAI's Sora 2 ultrarealistic (but fake) AI videos used by Russian disinformation operations, who could have predicted it?!?
www.nbcnews.com/tech/social-...

15.12.2025 01:56 — 👍 10    🔁 7    💬 1    📌 1
OSoMe Annual Report 2024-2025
Our annual report is now available. The stories, achievements, and insights in this report are a testament to the dedication of our team and the support of...

2024-25 has been an academic year of challenges and opportunities. We have been working on exciting problems, such as the exploitation of AI for manipulation at scale and tools to promote a healthier information environment. Read all about it in our latest annual report:
osome.iu.edu/research/blo...

13.12.2025 21:25 — 👍 8    🔁 1    💬 0    📌 0

Pretty much all the worst-case scenarios we have been predicting through our research in the last 10-15 years are coming true. Sad but excellent year-in-review by @craigsilverman.bsky.social and @mantzarlis.com

10.12.2025 07:21 — 👍 11    🔁 4    💬 0    📌 0
Increasingly Aligned Russian and Chinese Disinformation Threatens U.S. Citizens
www.americansecurityproject.org/increasingly...

07.12.2025 23:28 — 👍 6    🔁 2    💬 0    📌 0
Commission fines X €120 million under the Digital Services Act
Today, the Commission has issued a fine of €120 million to X for breaching its transparency obligations under the Digital Services Act (DSA).

The European Commission announced its first non-compliance decision under the Digital Services Act, fining X €120 million for deceptive practices and lack of transparency. 1/

ec.europa.eu/commission/p...

05.12.2025 12:15 — 👍 15    🔁 5    💬 1    📌 1

Don't let anyone tell you that the Commission's DSA enforcement against X is about speech or censorship.

That would, indeed, be interesting. But this is just the EU enforcing some normal, boring laws that would get bipartisan support in the U.S. (I bet similar bills *have* had that support.) 1/

05.12.2025 14:58 — 👍 353    🔁 147    💬 6    📌 21
Exclusive: Trump administration orders enhanced vetting for applicants of H-1B visa An internal State Department memo said that anyone involved in "censorship" of free speech should be considered for rejection.

What censorship of science looks like
www.reuters.com/world/us/tru...

04.12.2025 23:23 — 👍 3    🔁 1    💬 0    📌 0

We apologize for the interruption of today's OSoMe Awesome Speaker talk due to an IU blackout. We will restart as soon as power comes back.

03.12.2025 17:44 — 👍 1    🔁 0    💬 0    📌 0

A great collaboration with @erfansam.bsky.social, @matthewdeverna.com, @luceriluc.bsky.social, @fil.bsky.social, @frapierri.bsky.social and @silviagiordano.bsky.social

01.12.2025 12:14 — 👍 3    🔁 1    💬 0    📌 0

It represents the first detailed study of how a social platform transitions from being invitation-only to open to the public, with a focus on user activities and the evolution of the platform.

01.12.2025 12:14 — 👍 4    🔁 1    💬 1    📌 0
A longitudinal analysis of misinformation, polarization and toxicity on Bluesky after its public launch
Bluesky is a decentralized, Twitter-like social media platform that has rapidly gained popularity. Following an invite-only phase, it officially opene…

I am pleased to announce that our work “A longitudinal analysis of misinformation, polarization, and toxicity on Bluesky after its public launch” has been accepted at #OSNEM.
The paper is an extension of our previous work presented at #ASONAM.
Don't miss it: www.sciencedirect.com/science/arti...

01.12.2025 12:14 — 👍 18    🔁 13    💬 1    📌 2

Even though our website is down, registration for our next OSoMe Awesome Speaker is open!

📅 Dec 3 @ 12pm ET
🎤 James Evans (UChicago)
📖 Information Laundering: How Misinformation Gets Cleaned and Dirty Across Digital and Policy Ecosystems

Register directly through Zoom: iu.zoom.us/meeting/regi...

01.12.2025 17:10 — 👍 3    🔁 4    💬 1    📌 0
X's new feature raises questions about the foreign origins of some popular US political accounts
Over the weekend, Elon Musk’s X unveiled a feature that lets users see where an account is based. Online sleuths and experts quickly found that many popular accounts, often posting in support of the U...

X's new feature exposes some fake political accounts with foreign origins

apnews.com/article/x-lo...

01.12.2025 00:56 — 👍 8    🔁 0    💬 0    📌 0
Large Language Models Require Curated Context for Reliable Political Fact-Checking -- Even with Reasoning and Web Search
Large language models (LLMs) have raised hopes for automated end-to-end fact-checking, but prior studies report mixed results. As mainstream chatbots increasingly ship with reasoning capabilities and ...

Check out the paper for all the details: arxiv.org/abs/2511.18749

Thanks to my collaborators @yang3kc.bsky.social @harryyan.bsky.social and @fil.bsky.social .

29.11.2025 22:06 — 👍 11    🔁 2    💬 0    📌 0

Exploring GPT citation patterns...

Cited sources were mostly fact-checking outlets, mainstream news, and government sites.

They have high reliability scores (NewsGuard) and tend to align with the political left.

29.11.2025 22:06 — 👍 8    🔁 1    💬 1    📌 0

Reasoning didn’t help much.

Web search improved GPT models, but Gemini saw no benefit—likely because it failed to return sources for most queries.

GPT models often return citations, and many of them point to the PolitiFact article containing the fact-check.

Again, curated info helps a lot.

29.11.2025 22:06 — 👍 12    🔁 1    💬 1    📌 0