
Tanise Ceron

@taniseceron.bsky.social

Postdoc @milanlp.bsky.social | Interested in language models and how they shape the information environment

133 Followers  |  158 Following  |  33 Posts  |  Joined: 09.02.2025

Latest posts by taniseceron.bsky.social on Bluesky

I will be at @euripsconf.bsky.social this week to present our paper as non-archival at the PAIG workshop (Beyond Regulation: Private Governance & Oversight Mechanisms for AI). Very much looking forward to the discussions!

If you are at #EurIPS and want to chat about LLMs' training data, reach out!

02.12.2025 21:47 — 👍 8    🔁 4    💬 0    📌 0

We could fool ourselves by saying it's because there's no panettone at other times of the year :P

27.11.2025 19:32 — 👍 1    🔁 0    💬 0    📌 0

We break out of the routine every now and then at the lab. :)

27.11.2025 16:08 — 👍 4    🔁 0    💬 1    📌 0
Open Source Generative AI Index: openness leaderboard. Evidence-based assessment of Generative AI openness: a comprehensive index comparing LLMs, text-to-image models, audio, and other Generative AI models.

Partial answer to my question:
osai-index.eu/the-index?ty...

24.11.2025 12:51 — 👍 0    🔁 0    💬 0    📌 0

In this paper, we investigate how well media frames generalize across different media landscapes. The 15 MFC (Media Frames Corpus) frames remain broadly applicable, but require revisions to the annotation guidelines to adapt them to the local context.

More on aclanthology.org/2025.starsem...

24.11.2025 10:36 — 👍 3    🔁 1    💬 0    📌 0

@agnesedaff.bsky.social presented our work on "Generalizability of Media Frames: Corpus creation and analysis across countries" at *SEM co-located with EMNLP 2025 in China.

24.11.2025 10:36 — 👍 7    🔁 0    💬 1    📌 0

@mmitchell.bsky.social

18.11.2025 06:36 — 👍 0    🔁 0    💬 0    📌 0

Does anyone know a good resource that systematically documents information about the training data of different LLMs (e.g., dataset names, language proportions, etc., whenever available)?

18.11.2025 06:27 — 👍 2    🔁 0    💬 2    📌 0

Proud to present our #EMNLP2025 papers!
Catch our team across Main, Findings, Workshops & Demos 👇

31.10.2025 14:04 — 👍 11    🔁 4    💬 12    📌 2

Great, thanks a lot!

19.10.2025 09:59 — 👍 0    🔁 0    💬 0    📌 0

As I wasn't at the conference, I'd love to be able to watch the recording. Is it available online anywhere? :)

16.10.2025 09:01 — 👍 0    🔁 0    💬 1    📌 0

Great collaboration with Dmitry Nikolaev, @dominsta.bsky.social and @deboranozza.bsky.social ☺️

29.09.2025 14:54 — 👍 2    🔁 0    💬 0    📌 0

- Finally, and for me, most interestingly, our analysis suggests that political biases are already encoded during the pre-training stage.

Taking this evidence together, we highlight the important implications these results have for data processing in the development of fairer LLMs.

29.09.2025 14:54 — 👍 1    🔁 0    💬 1    📌 0

- There's a strong correlation (Pearson r = 0.90) between the predominant stances in the training data and the models' behavior when probed for political bias on eight policy issues (e.g., environmental protection, migration, etc.).

29.09.2025 14:54 — 👍 2    🔁 0    💬 1    📌 0
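For intuition, here is a minimal sketch of how such a stance-vs-bias correlation can be computed, assuming per-issue stance shares measured in the training data and per-issue bias scores obtained by probing the model. All numbers and issue names below are invented for illustration; the paper's actual annotation and probing setup is in the linked preprint.

```python
# Minimal sketch: correlating training-data stance shares with probed
# model bias across policy issues. All values are hypothetical; the
# paper's real measurements come from its own annotation and probing.
from scipy.stats import pearsonr

issues = ["environment", "migration", "welfare", "economy",
          "security", "education", "health", "foreign policy"]

# Hypothetical share of left-leaning documents per issue in the corpus.
stance_share_left = [0.78, 0.71, 0.80, 0.62, 0.55, 0.74, 0.76, 0.60]

# Hypothetical "left-leaning" score per issue from probing the model.
probed_bias_left = [0.82, 0.66, 0.77, 0.58, 0.52, 0.70, 0.79, 0.57]

r, p = pearsonr(stance_share_left, probed_bias_left)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```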

- Source domains of pre-training documents differ significantly, with right-leaning content containing twice as many blog posts and left-leaning content three times as many documents from news outlets.

29.09.2025 14:54 — 👍 1    🔁 0    💬 1    📌 0

- The framing of political topics varies considerably: right-leaning labeled documents prioritize stability, sovereignty, and cautious reform via technology or deregulation, while left-leaning documents emphasize urgent, science-led mobilization for systemic transformation and equity.

29.09.2025 14:54 — 👍 1    🔁 0    💬 1    📌 0

- Left-leaning documents consistently outnumber right-leaning ones by a factor of 3 to 12 across training datasets.
- Pre-training corpora contain about 4 times more politically engaged content than post-training data.

29.09.2025 14:54 — 👍 4    🔁 0    💬 1    📌 0

We have the answers to these questions here: arxiv.org/pdf/2509.22367

We analyze the political content of the training data from OLMo 2, the largest fully open-source model.
🕵️‍♀️ We run an analysis of all the datasets (2 pre-training and 2 post-training) used to train the models. Here are our findings:

29.09.2025 14:54 — 👍 5    🔁 0    💬 1    📌 0

📣 New Preprint!
Have you ever wondered what the political content in LLMs' training data is? What are the political opinions expressed? What is the proportion of left- vs right-leaning documents in the pre- and post-training data? Do they correlate with the political biases reflected in models?

29.09.2025 14:54 — 👍 47    🔁 14    💬 2    📌 1

Tanise Ceron, Dmitry Nikolaev, Dominik Stammbach, Debora Nozza: What Is The Political Content in LLMs' Pre- and Post-Training Data? https://arxiv.org/abs/2509.22367 https://arxiv.org/pdf/2509.22367 https://arxiv.org/html/2509.22367

29.09.2025 06:31 — 👍 1    🔁 3    💬 0    📌 0

Thanks to SoftwareCampus for supporting Multiview, to the organizers of INRA, and to Sourabh Dattawad and @agnesedaff.bsky.social for the great collaboration!

26.09.2025 16:20 — 👍 1    🔁 0    💬 0    📌 0

Our evaluation with normative metrics shows that this approach diversifies not only the frames in a user's history, but also sentiment and news categories. These findings demonstrate that framing acts as a control lever for enhancing normative diversity.

26.09.2025 16:20 — 👍 0    🔁 0    💬 1    📌 0

In this paper, we propose media frames as a device for diversifying perspectives in news recommenders. Our results show an improvement of up to 50% in exposure to previously unclicked frames.

26.09.2025 16:20 — 👍 0    🔁 0    💬 1    📌 0
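For intuition, here is a minimal sketch of one way frame-aware diversification can work: re-ranking candidate articles so that frames absent from the user's click history get a score boost. The scoring rule, the 0.5 boost, and the data shapes are my own illustrative assumptions, not the implementation from the linked preprint.

```python
# Illustrative frame-aware re-ranking: boost articles whose media frame
# is absent from the user's click history. The boost value and data
# shapes are assumptions for this sketch, not the paper's method.
from typing import List, Set, Tuple

def rerank_by_frame_diversity(
    candidates: List[Tuple[str, str, float]],  # (article_id, frame, relevance)
    clicked_frames: Set[str],
    boost: float = 0.5,
) -> List[Tuple[str, str, float]]:
    rescored = []
    for article_id, frame, relevance in candidates:
        # Reward exposure to frames the user has not clicked yet.
        score = relevance + (boost if frame not in clicked_frames else 0.0)
        rescored.append((article_id, frame, score))
    return sorted(rescored, key=lambda x: x[2], reverse=True)

# Example: "economic" is already in the history, so the morality-framed
# article overtakes it despite lower base relevance.
ranking = rerank_by_frame_diversity(
    [("a1", "economic", 0.9), ("a2", "morality", 0.6), ("a3", "economic", 0.7)],
    clicked_frames={"economic"},
)
print(ranking)
```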
Post image

Today Sourabh Dattawad presented our work "Leveraging Media Frames to Improve Normative Diversity in News Recommendations" at INRA (International Workshop on News Recommendation and Analytics) co-located with RecSys 2025 in Prague.
arxiv.org/pdf/2509.02266

26.09.2025 16:20 — 👍 2    🔁 1    💬 1    📌 0
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses.
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking, i.e., incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.


🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825

12.09.2025 10:33 — 👍 269    🔁 96    💬 6    📌 21
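To make the mechanism concrete, here is a toy simulation (not from the paper) of how two annotator configurations with different error patterns can flip a significance test on data with no true effect. All error rates, base rates, and sample sizes are invented for illustration.

```python
# Toy simulation of "LLM hacking": with no true group effect, annotation
# errors that correlate with the covariate can fake significance.
# All rates and sizes are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)                    # binary covariate
true_label = (rng.random(n) < 0.3).astype(int)   # outcome, NO group effect

def annotate(labels, groups, flip_g0, flip_g1):
    """Simulated annotator whose label-flip rate depends on the group."""
    flip_rate = np.where(groups == 0, flip_g0, flip_g1)
    flips = rng.random(len(labels)) < flip_rate
    return np.where(flips, 1 - labels, labels)

# Config A: errors independent of the covariate -> p should stay large.
# Config B: group-correlated errors -> the test can dip below 0.05.
for name, ann in [("config A", annotate(true_label, group, 0.10, 0.10)),
                  ("config B", annotate(true_label, group, 0.02, 0.30))]:
    t, p = ttest_ind(ann[group == 0], ann[group == 1])
    print(f"{name}: p = {p:.4f}")
```

The data never changes here; only the annotator configuration does, which is the failure mode the thread describes.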

Last week we held our 1st MilaNLP retreat by beautiful Lago Maggiore! ⛰️🌊
We shared research ideas, stories (academic & beyond), and amazing food. It was a great time to connect outside of the usual lab working days, and most importantly, strengthen our bonds as a team. #ResearchLife #NLProc

07.07.2025 14:08 — 👍 22    🔁 5    💬 0    📌 1
Qualtrics Survey | Qualtrics Experience Management

πŸ” Stiamo studiando come l'AI viene usata in Italia e per farlo abbiamo costruito un sondaggio!

πŸ‘‰ bit.ly/sondaggio_ai...

(Γ¨ anonimo, richiede ~10 minuti, e se partecipi o lo fai girare ci aiuti un saccoπŸ™)

Ci interessa anche raggiungere persone che non si occupano e non sono esperte di AI!

03.06.2025 10:24 — 👍 16    🔁 18    💬 1    📌 0

A reminder of the importance of evaluating political biases robustly. :)

15.05.2025 15:42 — 👍 1    🔁 0    💬 0    📌 0

We (w/ @diyiyang.bsky.social, @zhuhao.me, & Bodhisattwa Prasad Majumder) are excited to present our #NAACL25 tutorial on Social Intelligence in the Age of LLMs!
It will highlight long-standing and emerging challenges of AI interacting w humans, society & the world.
⏰ May 3, 2:00pm-5:30pm Room Pecos

03.05.2025 13:58 — 👍 14    🔁 6    💬 0    📌 0
