
Tanise Ceron

@taniseceron.bsky.social

Postdoc @milanlp.bsky.social

123 Followers  |  142 Following  |  23 Posts  |  Joined: 09.02.2025

Latest posts by taniseceron.bsky.social on Bluesky

Great collaboration with Dmitry Nikolaev, @dominsta.bsky.social and @deboranozza.bsky.social ☺️

29.09.2025 14:54 | 👍 2    🔁 0    💬 0    📌 0

- Finally, and for me, most interestingly, our analysis suggests that political biases are already encoded during the pre-training stage.

Taking this evidence together, we highlight the important implications these results have for data processing in the development of fairer LLMs.

29.09.2025 14:54 | 👍 1    🔁 0    💬 1    📌 0

- There's a strong correlation (Pearson r=0.90) between the predominant stances in the training data and the models' behavior when probed for political bias on eight policy issues (e.g., environmental protection, migration, etc.).

29.09.2025 14:54 | 👍 2    🔁 0    💬 1    📌 0
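Not the paper's code, just a toy sketch for intuition: this is how one could compute such a Pearson correlation across the eight policy issues, assuming hypothetical per-issue stance shares in the training data and per-issue model bias scores (all names and numbers below are invented, not the paper's).

```python
from scipy.stats import pearsonr

# Hypothetical share of left-leaning stances among politically engaged
# training documents, per policy issue (invented values for illustration).
data_stance_share = {
    "environment": 0.78, "migration": 0.55, "welfare": 0.70, "economy": 0.52,
    "civil_rights": 0.74, "education": 0.66, "security": 0.48, "foreign_policy": 0.60,
}

# Hypothetical model bias score per issue, e.g. the fraction of left-leaning
# answers when the model is probed on that issue (again, invented values).
model_bias_score = {
    "environment": 0.82, "migration": 0.58, "welfare": 0.73, "economy": 0.50,
    "civil_rights": 0.77, "education": 0.64, "security": 0.45, "foreign_policy": 0.63,
}

issues = sorted(data_stance_share)
x = [data_stance_share[i] for i in issues]
y = [model_bias_score[i] for i in issues]

r, p = pearsonr(x, y)  # correlation between data stances and model behavior
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```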

- Source domains of pre-training documents differ significantly: right-leaning content contains twice as many blog posts, and left-leaning content three times as many news outlets.

29.09.2025 14:54 | 👍 1    🔁 0    💬 1    📌 0

- The framing of political topics varies considerably: right-leaning labeled documents prioritize stability, sovereignty, and cautious reform via technology or deregulation, while left-leaning documents emphasize urgent, science-led mobilization for systemic transformation and equity.

29.09.2025 14:54 | 👍 1    🔁 0    💬 1    📌 0

- Left-leaning documents consistently outnumber right-leaning ones by a factor of 3 to 12 across training datasets.
- Pre-training corpora contain about 4 times more politically engaged content than post-training data.

29.09.2025 14:54 | 👍 4    🔁 0    💬 1    📌 0
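For intuition only (not the paper's pipeline): a minimal sketch of the bookkeeping behind counts like these, assuming a hypothetical classify_leaning(doc) function that labels each document "left", "right", or "neutral".

```python
from collections import Counter

def corpus_stats(docs, classify_leaning):
    """Left-to-right ratio and share of politically engaged documents."""
    counts = Counter(classify_leaning(d) for d in docs)
    left, right = counts["left"], counts["right"]
    engaged = left + right  # documents that take a political stance
    return {
        "left_to_right_ratio": left / right if right else float("inf"),
        "engaged_share": engaged / len(docs) if docs else 0.0,
    }

# Usage sketch: compare a pre-training and a post-training corpus.
# pre = corpus_stats(pretraining_docs, classify_leaning)
# post = corpus_stats(posttraining_docs, classify_leaning)
# print(pre["left_to_right_ratio"], pre["engaged_share"] / post["engaged_share"])
```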

We have the answers to these questions here: arxiv.org/pdf/2509.22367

We analyze the political content of the training data from OLMO2, the largest fully open-source model.
🕵️‍♀️ We run an analysis on all the datasets (2 pre- and 2 post-training) used to train the models. Here are our findings:

29.09.2025 14:54 | 👍 5    🔁 0    💬 1    📌 0
Post image

📣 New Preprint!
Have you ever wondered what the political content in LLMs' training data is? What are the political opinions expressed? What is the proportion of left- vs right-leaning documents in the pre- and post-training data? Do they correlate with the political biases reflected in models?

29.09.2025 14:54 | 👍 43    🔁 14    💬 2    📌 0

Tanise Ceron, Dmitry Nikolaev, Dominik Stammbach, Debora Nozza: What Is The Political Content in LLMs' Pre- and Post-Training Data? https://arxiv.org/abs/2509.22367 https://arxiv.org/pdf/2509.22367 https://arxiv.org/html/2509.22367

29.09.2025 06:31 | 👍 1    🔁 3    💬 0    📌 0

Thanks to SoftwareCampus for supporting Multiview, to the organizers of INRA, and to Sourabh Dattawad and @agnesedaff.bsky.social for the great collaboration!

26.09.2025 16:20 | 👍 1    🔁 0    💬 0    📌 0

Our evaluation with normative metrics shows that this approach diversifies not only the frames in the user's history, but also sentiment and news categories. These findings demonstrate that framing acts as a control lever for enhancing normative diversity.

26.09.2025 16:20 | 👍 0    🔁 0    💬 1    📌 0

In this paper, we propose introducing media frames as a device for diversifying perspectives in news recommenders. Our results show an improvement of up to 50% in exposure to previously unclicked frames.

26.09.2025 16:20 | 👍 0    🔁 0    💬 1    📌 0
Post image

Today Sourabh Dattawad presented our work "Leveraging Media Frames to Improve Normative Diversity in News Recommendations" at INRA (International Workshop on News Recommendation and Analytics) co-located with RecSys 2025 in Prague.
arxiv.org/pdf/2509.02266

26.09.2025 16:20 | 👍 2    🔁 1    💬 1    📌 0
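A rough, hypothetical sketch of one way to measure "exposure to previously unclicked frames": the share of recommended articles whose media frame does not appear in the user's click history (the frame labels and function name are made up for illustration; this is not necessarily the metric used in the paper).

```python
def unclicked_frame_exposure(recommended_frames, history_frames):
    """Fraction of recommendations whose frame the user has not clicked before."""
    seen = set(history_frames)
    if not recommended_frames:
        return 0.0
    novel = sum(1 for frame in recommended_frames if frame not in seen)
    return novel / len(recommended_frames)

history = ["economic", "economic", "legality"]               # frames the user clicked
recs = ["morality", "economic", "public_opinion", "health"]  # recommended frames
print(unclicked_frame_exposure(recs, history))  # 0.75: three of four frames are new
```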
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses. 
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking, i.e., incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.


🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825

12.09.2025 10:33 | 👍 259    🔁 94    💬 5    📌 19
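A toy illustration (not the authors' code) of the comparison behind "LLM hacking": run the same regression once with ground-truth annotations and once with error-prone LLM annotations, then check whether the significance conclusion flips. The data, the 15% error rate, and the linear-probability model are all invented for this sketch.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                        # predictor in the hypothesis test
y_true = rng.binomial(1, 0.5, size=n)         # ground-truth labels: no true effect of x
errors = rng.random(n) < 0.15                 # hypothetical 15% LLM annotation errors
y_llm = np.where(errors, 1 - y_true, y_true)  # LLM labels = ground truth plus errors

def coef_p_value(y, x):
    X = sm.add_constant(x)
    return sm.OLS(y, X).fit().pvalues[1]      # p-value on the coefficient of x

p_gt, p_llm = coef_p_value(y_true, x), coef_p_value(y_llm, x)
flipped = (p_gt < 0.05) != (p_llm < 0.05)     # "LLM hacking" when conclusions disagree
print(f"ground truth p={p_gt:.3f}, LLM p={p_llm:.3f}, conclusion flipped: {flipped}")
```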
Post image Post image Post image

Last week we held our 1st MilaNLP retreat by beautiful Lago Maggiore! ⛰️🌊
We shared research ideas, stories (academic & beyond), and amazing food. It was a great time to connect outside of the usual lab working days, and most importantly, strengthen our bonds as a team. #ResearchLife #NLProc

07.07.2025 14:08 | 👍 22    🔁 5    💬 0    📌 1

πŸ” Stiamo studiando come l'AI viene usata in Italia e per farlo abbiamo costruito un sondaggio!

πŸ‘‰ bit.ly/sondaggio_ai...

(Γ¨ anonimo, richiede ~10 minuti, e se partecipi o lo fai girare ci aiuti un saccoπŸ™)

Ci interessa anche raggiungere persone che non si occupano e non sono esperte di AI!

03.06.2025 10:24 | 👍 16    🔁 18    💬 1    📌 0

Reminder for the importance of evaluating political biases robustly. :)

15.05.2025 15:42 | 👍 1    🔁 0    💬 0    📌 0

We (w/ @diyiyang.bsky.social, @zhuhao.me, & Bodhisattwa Prasad Majumder) are excited to present our #NAACL25 tutorial on Social Intelligence in the Age of LLMs!
It will highlight long-standing and emerging challenges of AI interacting w humans, society & the world.
⏰ May 3, 2:00pm-5:30pm Room Pecos

03.05.2025 13:58 | 👍 14    🔁 6    💬 0    📌 0

Join us in an hour at 17:00 (CEST) for @taniseceron.bsky.social's talk on "Evaluating Political Bias: Insights into Robustness and Multilinguality". Access to Zoom at join.slack.com/t/tadapolisc... or send me a ✉️

30.04.2025 14:10 | 👍 1    🔁 1    💬 0    📌 0

Sure, it's here: github.com/tceron/eval_...
The code mapping is in the readme file. :)

23.04.2025 07:07 | 👍 1    🔁 0    💬 1    📌 0

πŸ₯ It's the second half of our 🌱 speaker series (tada.cool) this term, and we couldn't be more excited! Next week (Wednesday, April 30 at 5pm CET), we have the pleasure of welcoming @taniseceron.bsky.social to share insights on "Facilitating Information Access Through Language Models". More details ⬇️

21.04.2025 10:38 | 👍 7    🔁 5    💬 0    📌 0
Preview
Beyond Prompt Brittleness: Evaluating the Reliability and Consistency of Political Worldviews in LLMs. Abstract: Due to the widespread use of large language models (LLMs), we need to understand whether they embed a specific "worldview" and what these views reflect. Recent studies report that, prompted ...

liberal society... For example, there's no clear stance on the issue of migration.
More on: direct.mit.edu/tacl/article...

I would expect Llama 3.1, released last year, to have similar political views to what we found in Llama-2.

22.04.2025 12:38 | 👍 6    🔁 0    💬 1    📌 0

change the political worldviews of models. In our study, we find that the previous version (Llama-2) consistently reflects more left-leaning views. However, it does depend on the policy issue, as we found clear stances of the models only towards social state welfare, environmental protection and [2/3]

22.04.2025 12:37 | 👍 4    🔁 0    💬 1    📌 0

I agree 100% that we need to understand what they're measuring, and specifically, how they're aligning the models to hold certain types of political worldviews. However, I find your results rather puzzling because Llama 3.1 was released well before they started announcing their strategy to [1/3]

22.04.2025 12:37 | 👍 7    🔁 0    💬 1    📌 0

All the very best for this new chapter @florplaza.bsky.social! 😃
We already miss you here! ❀️

25.03.2025 09:05 | 👍 1    🔁 0    💬 0    📌 0

Wanna keep up with our @milanlp.bsky.social lab? Here is a starter pack of current and former members:
bsky.app/starter-pack...

05.03.2025 10:47 | 👍 13    🔁 7    💬 0    📌 0

Happy to be presenting at #TaDa and looking forward to watching the great talks coming up. :)

05.03.2025 12:27 | 👍 3    🔁 2    💬 0    📌 0

Hmm, I agree that this could be a good solution. Though I wonder if this is feasible based on the pace that advancements take place in this area.

17.02.2025 19:42 | 👍 2    🔁 0    💬 0    📌 0

That could def encourage people to polish more, but I think we need more well-defined categories, e.g., "paper with the best related work section", given that this section is often underestimated nowadays and it's an important step for building on people's previous work. Ofc, this is just one among many! :)

17.02.2025 19:39 | 👍 1    🔁 0    💬 1    📌 0
