
Emilio Ferrara

@emilioferrara.bsky.social

Prof of Computer Science at USC | AI, social media, society, networks, data, and HUMANS Lab | http://www.emilio.ferrara.name

2,683 Followers  |  397 Following  |  156 Posts  |  Joined: 02.10.2023

Latest posts by emilioferrara.bsky.social on Bluesky


At OASIS Lab, #UCLA, we are accepting applications for a PhD student who is passionate about using computational modeling, big data, and HCI to advance digital safety, responsible AI, and the study of online information ecosystems.

tinyurl.com/phdopenningu...

#PhD #AIforGood #OnlineSafety

24.11.2025 20:40 | 👍 6  🔁 2  💬 2  📌 1

Already done from LinkedIn :)

25.11.2025 04:47 | 👍 1  🔁 0  💬 0  📌 0

🚨 New preprint 🚨
Can AI agents coordinate influence campaigns without human guidance? And how does coordination arise among AI agents? In our latest research, we simulate LLM-powered AI agents acting like users on an online platform, some benign, some running an influence operation.

03.11.2025 19:46 | 👍 18  🔁 7  💬 1  📌 2
Comparison diagram showing traditional vs discourse network user representations. Left: isolated platform-specific clusters with no cross-platform connections. Right: unified network where users are connected across platforms through shared narrative engagement, revealing previously invisible cross-platform communities.


New paper + interactive dashboard on the 2024 election information ecosystem.
Building on discourse networks work with @hanshanley.bsky.social @luceriluc.bsky.social @emilioferrara.bsky.social. This lets us visualize the online landscape as a unified system, rather than isolating each platform.

20.10.2025 18:32 | 👍 6  🔁 4  💬 1  📌 0
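The discourse-network idea above can be sketched in a few lines: link users, possibly on different platforms, whenever they engaged with the same narrative. This is a toy illustration with hypothetical user and narrative names, not the paper's actual pipeline.

```python
from collections import defaultdict
from itertools import combinations

def discourse_edges(engagement):
    """Connect users (possibly across platforms) who engaged with the
    same narrative; a toy version of a cross-platform discourse network."""
    by_narrative = defaultdict(set)
    for user, narrative in engagement:
        by_narrative[narrative].add(user)
    edges = set()
    for users in by_narrative.values():
        # Every pair of co-engaging users gets an (undirected) edge
        for u, v in combinations(sorted(users), 2):
            edges.add((u, v))
    return edges

# Hypothetical engagement log: "platform:user" engaged with a narrative
engagement = [
    ("x:alice", "narrative_1"),
    ("reddit:bob", "narrative_1"),
    ("tiktok:carol", "narrative_2"),
    ("x:alice", "narrative_2"),
]
# alice-bob are linked via narrative_1, alice-carol via narrative_2,
# even though they sit on different platforms
```

The payoff, as the post says, is that cross-platform communities become visible that platform-by-platform analysis would miss.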

We created a framework for auditing and characterizing the undesirable effects of alignment safeguards in LLMs, which can result in censorship or information suppression. And we tested DeepSeek on potentially sensitive topics!

16.10.2025 22:44 | 👍 4  🔁 0  💬 0  📌 0

Thrilled to share that our latest paper, "Information Suppression in Large Language Models", is now published in Information Sciences!

To read more, see: www.sciencedirect.com/science/arti...

great work w/ @siyizhou.bsky.social

16.10.2025 22:44 | 👍 4  🔁 0  💬 1  📌 0

Peiran Qiu, Siyi Zhou, Emilio Ferrara: Information Suppression in Large Language Models: Auditing, Quantifying, and Characterizing Censorship in DeepSeek https://arxiv.org/abs/2506.12349 https://arxiv.org/pdf/2506.12349 https://arxiv.org/html/2506.12349

17.06.2025 09:05 | 👍 2  🔁 5  💬 0  📌 0

Amin Banayeeanzade, Ala N. Tak, Fatemeh Bahrani, Anahita Bolourani, Leonardo Blas, Emilio Ferrara, Jonathan Gratch, Sai Praneeth Karimireddy
Psychological Steering in LLMs: An Evaluation of Effectiveness and Trustworthiness
https://arxiv.org/abs/2510.04484

07.10.2025 09:43 | 👍 2  🔁 1  💬 0  📌 0

Big news from #ICWSM2025!

"The Susceptibility Paradox in Online Social Influenceโ€ by @luceriluc.bsky.social, @jinyiye.bsky.social, Julie Jiang & @emilioferrara.bsky.social was named a Top 5 Paper & won Best Paper Honorable Mention!

๐Ÿ‘ Congrats to all!

26.06.2025 21:39 | 👍 8  🔁 3  💬 0  📌 0
Box and whisker plot showing the top 20 recommended accounts across "neutral" sock puppet accounts. Elon Musk is most recommended.


Box and whisker plot showing the top 20 recommended accounts across "left-leaning" sock puppet accounts. Elon Musk is most recommended.


Box and whisker plot showing the top 20 recommended accounts across "right-leaning" sock puppet accounts. Elon Musk is most recommended by far.


So satisfying to have some evidence that Elon Musk is wildly promoting himself on X.

Researchers made 120 sock puppet accounts to see whose content is getting pushed on users. #FAccT2025

@jinyiye.bsky.social @luceriluc.bsky.social @emilioferrara.bsky.social

doi.org/10.1145/3715...

25.06.2025 08:31 | 👍 27  🔁 6  💬 1  📌 0
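The core of a sock-puppet recommendation audit like the one described above can be sketched simply: each puppet logs which accounts the platform recommends to it, and you rank accounts by how many puppets saw them. Everything below (puppet IDs, account handles, the `top_recommended` helper) is hypothetical illustration, not the paper's actual pipeline.

```python
from collections import Counter

# Hypothetical logs: which accounts each sock puppet was recommended
recommendations = {
    "puppet_01": ["elonmusk", "POTUS", "nytimes"],
    "puppet_02": ["elonmusk", "espn", "POTUS"],
    "puppet_03": ["elonmusk", "nytimes", "cnn"],
}

def top_recommended(recs, k=3):
    """Rank accounts by the number of distinct puppets they were shown to."""
    counts = Counter()
    for shown in recs.values():
        counts.update(set(shown))  # count each account once per puppet
    return counts.most_common(k)

print(top_recommended(recommendations))
# "elonmusk" is shown to all three puppets, so it ranks first
```

In the real study the puppets also varied by political leaning (neutral, left, right), which is what makes the per-condition box plots possible.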
Proceedings of the ICWSM Workshops

A Multimodal TikTok Dataset of #Ecuador's 2024 Political Crisis and Organized Crime Discourse
by Gabriela Pinto and @emilioferrara.bsky.social (USC)

workshop-proceedings.icwsm.org/abstract.php...

@icwsm.bsky.social #dataforvulnerable25

23.06.2025 10:12 | 👍 3  🔁 3  💬 0  📌 0
Information Suppression in Large Language Models: Auditing, Quantifying, and Characterizing Censorship in DeepSeek This study examines information suppression mechanisms in DeepSeek, an open-source large language model (LLM) developed in China. We propose an auditing framework and use it to analyze the model's res...

We uncovered overt and covert information-suppression dynamics, as well as even subtler ways DeepSeek's answers are internally moderated, selectively presented, and at times even framed with ideological alignment to state-sponsored propaganda narratives.

arxiv.org/abs/2506.12349

22.06.2025 23:52 | 👍 0  🔁 0  💬 0  📌 0

🤖 Thrilled to share our latest work ☄️

Have you ever wondered what LLMs know but aren't saying?

We built an auditing framework to study information suppression in LLMs, and used it to quantify and characterize censorship in DeepSeek.

Read more:

arxiv.org/abs/2506.12349

22.06.2025 23:52 | 👍 7  🔁 2  💬 1  📌 0
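The overt half of such an audit, measuring how often a model refuses outright, can be sketched as a simple loop: send prompts, classify answers against known refusal boilerplate, and report a refusal rate. The pattern list, the `ask` callable, and the `fake_model` stand-in below are all hypothetical, not the paper's actual framework.

```python
import re

# Boilerplate phrases that signal an overt refusal (illustrative list)
REFUSAL_PATTERNS = [
    r"beyond my current scope",
    r"let'?s talk about something else",
]

def is_refusal(answer: str) -> bool:
    """Flag answers matching known refusal boilerplate (overt suppression)."""
    return any(re.search(p, answer, re.IGNORECASE) for p in REFUSAL_PATTERNS)

def refusal_rate(prompts, ask):
    """Share of prompts that trigger a refusal; `ask` is any prompt->answer callable."""
    answers = [ask(p) for p in prompts]
    return sum(is_refusal(a) for a in answers) / len(answers)

# Toy stand-in for an LLM endpoint, just to show the audit loop
def fake_model(prompt):
    if "sensitive" in prompt:
        return "Sorry, that's beyond my current scope. Let's talk about something else."
    return "Here is a detailed answer."

print(refusal_rate(["a sensitive topic", "a neutral topic"], fake_model))  # 0.5
```

Covert suppression (answers that are composed and then deleted, or subtly reframed) needs more than pattern matching, which is what makes the full framework interesting.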

I'll be at #ICWSM 2025 next week to present our paper about Bluesky Starter Packs.

For the occasion, I've created a Starter Pack with all of this year's organizers, speakers, and authors I could find on Bluesky!
Link: go.bsky.app/GDkQ3y7

Let me know if I missed anyone!

21.06.2025 11:31 | 👍 30  🔁 13  💬 6  📌 2

Thx! Very useful!

22.06.2025 23:47 | 👍 0  🔁 0  💬 0  📌 0
Bridging the Narrative Divide: Cross-Platform Discourse Networks in Fragmented Ecosystems Political discourse has grown increasingly fragmented across different social platforms, making it challenging to trace how narratives spread and evolve within such a fragmented information ecosystem....

Paper here:
arxiv.org/abs/2505.21729

22.06.2025 18:45 | 👍 6  🔁 2  💬 0  📌 0

🚨 What happens when the crowd becomes the fact-checker?
New: "Community Moderation and the New Epistemology of Fact Checking on Social Media"

with I. Augenstein, M. Bakker, T. Chakraborty, D. Corney, E. Ferrara, I. Gurevych, S. Hale, E. Hovy, H. Ji, I. Larraz, F. Menczer, P. Nakov, D. Sahnan, G. Warren, G. Zagni

01.06.2025 07:48 | 👍 16  🔁 8  💬 1  📌 0

One of my favorite recent projects!

Link to the paper:

arxiv.org/abs/2505.10867

19.05.2025 20:40 | 👍 8  🔁 0  💬 0  📌 0

What does coordinated inauthentic behavior look like on TikTok?

We introduce a new framework for detecting coordination in video-first platforms, uncovering influence campaigns using synthetic voices, split-screen tactics, and cross-account duplication.
📄 https://arxiv.org/abs/2505.10867

19.05.2025 15:42 | 👍 21  🔁 9  💬 2  📌 2
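One of the coordination signals the post names, cross-account duplication, can be sketched very simply: group accounts that posted (near-)identical captions or transcripts. The account names, posts, and the crude lowercase-and-strip normalization below are a hypothetical toy, not the paper's detection framework.

```python
from collections import defaultdict

def duplication_clusters(posts):
    """Group accounts that posted identical text after crude normalization;
    a toy proxy for the cross-account duplication signal."""
    by_text = defaultdict(set)
    for account, text in posts:
        by_text[text.strip().lower()].add(account)
    # Only texts shared by 2+ distinct accounts suggest coordination
    return [accounts for accounts in by_text.values() if len(accounts) >= 2]

# Hypothetical (account, caption) pairs
posts = [
    ("acct_a", "Vote now! Link in bio"),
    ("acct_b", "vote now! link in bio"),
    ("acct_c", "My cat video"),
]
print(duplication_clusters(posts))  # one cluster: acct_a and acct_b
```

A real video-first pipeline would compare audio fingerprints and frame hashes rather than raw text, which is what makes detection on TikTok harder than on text platforms.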

"Limited effectiveness of LLM-based data augmentation for COVID-19 misinformation stance detection" by @euncheolchoi.bsky.social @emilioferrara.bsky.social et al, presented by the awesome Chur at The Web Conference 2025

arxiv.org/abs/2503.02328

01.05.2025 05:06 | 👍 6  🔁 2  💬 0  📌 0

wait until they hear about matplotlib...

08.04.2025 15:58 | 👍 1  🔁 0  💬 0  📌 0
08.04.2025 15:55 | 👍 12  🔁 3  💬 1  📌 0

🚀 First study on multimodal AI-generated content (AIGC) on social media! TL;DR: AI-generated images are 10× more prevalent than AI-generated text! Just 3% of text spreaders and 10% of image spreaders drive 80% of AIGC diffusion, with premium & bot accounts playing a key role 🤖📢

21.02.2025 06:24 | 👍 6  🔁 1  💬 1  📌 2
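The "3% of spreaders drive 80% of diffusion" style of statistic is a concentration measure: sort spreaders by volume and find the smallest top fraction whose combined share reaches the target. The toy numbers below are illustrative, not the paper's data.

```python
def spreader_share_for_coverage(volumes, target=0.80):
    """Smallest fraction of spreaders whose combined share of diffusion
    (e.g. reposts) reaches `target`. Toy sketch, hypothetical data."""
    ordered = sorted(volumes, reverse=True)  # heaviest spreaders first
    total = sum(ordered)
    running = 0.0
    for i, v in enumerate(ordered, start=1):
        running += v
        if running / total >= target:
            return i / len(ordered)
    return 1.0

# 10 spreaders; one heavy hitter dominates diffusion
volumes = [80, 4, 3, 3, 2, 2, 2, 2, 1, 1]
print(spreader_share_for_coverage(volumes))  # 0.1 -> top 10% cover 80%
```

The same computation over the study's repost data is what yields the 3% (text) and 10% (image) figures quoted in the post.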
Prevalence, Sharing Patterns, and Spreaders of Multimodal AI-Generated Content on X during the 2024 U.S. Presidential Election While concerns about the risks of AI-generated content (AIGC) to the integrity of social media discussions have been raised, little is known about its scale and the actors responsible for its dissemin...

🤩 Cool collaboration w/ @jinyiye.bsky.social @emilioferrara.bsky.social @luceriluc.bsky.social
🔍 Read more: arxiv.org/abs/2502.11248
📊 Resources available: github.com/angelayejiny...

21.02.2025 06:27 | 👍 3  🔁 1  💬 0  📌 0

lol, insisting is indeed one possible strategy; the screenshots may not be that clear, but I asked exactly the same thing four times in a row and once I got an unredacted answer!

31.01.2025 01:15 | 👍 3  🔁 0  💬 0  📌 0

Once the response is fully composed, however, the entire answer is deleted and replaced by the famous error message: "Sorry, that's beyond my current scope. Let's talk about something else."

It's up to us, as researchers, to decide what kinds of model alignment we find acceptable.

30.01.2025 18:09 | 👍 5  🔁 0  💬 1  📌 0

How does DeepSeek censorship work?

Here is a practical example: I asked it to discuss my work (I have studied online censorship by various countries).

DeepSeek at first starts to compose an accurate answer, even mentioning China's online censorship efforts.

30.01.2025 18:09 | 👍 6  🔁 0  💬 1  📌 0
Safe Spaces or Toxic Places? Content Moderation and Social Dynamics of Online Eating Disorder Communities Social media platforms have become critical spaces for discussing mental health concerns, including eating disorders. While these platforms can provide valuable support networks, they may also amplify...

Good day to boost this paper on the contrasting effects of lax content moderation (like on X, and what's coming at Meta) and how it drives toxic content; by a team including @luceriluc.bsky.social and @emilioferrara.bsky.social arxiv.org/abs/2412.157...

08.01.2025 00:07 | 👍 5  🔁 3  💬 1  📌 0

And to some higher-order approximation, these are all economic/$ policy problems :)

05.01.2025 23:59 | 👍 2  🔁 0  💬 1  📌 0

i was annoyed at having many chrome tabs with PDF papers having uninformative titles, so i created a small chrome extension to fix it.

i've been using it for a while now; works well.

today i put it on github. enjoy.

github.com/yoavg/pdf-ta...

05.01.2025 22:22 | 👍 98  🔁 22  💬 5  📌 1
