
Stefano

@sted19.bsky.social

PhD Student @SapienzaNLP · Applied Scientist Intern @Amazon Madrid

11 Followers  |  10 Following  |  11 Posts  |  Joined: 21.02.2025

Posts by Stefano (@sted19.bsky.social)

Estimating Machine Translation Difficulty Machine translation quality has steadily improved over the years, achieving near-perfect translations in recent benchmarks. These high-quality outputs make it difficult to distinguish between state-of...

📄 Paper: arxiv.org/abs/2508.10175

🤗 Models: huggingface.co/collections/...

💻 Code: github.com/zouharvi/tra...

16.09.2025 08:46 — 👍 2    🔁 0    💬 0    📌 0

A huge thanks to my fantastic co-authors: Lorenzo Proietti, @zouharvi.bsky.social, Roberto Navigli, and @kocmitom.bsky.social. 👏

#AI #NLProc #Evaluation

16.09.2025 08:46 — 👍 2    🔁 0    💬 1    📌 0

🤖 We release our best models, sentinel-src-24 and sentinel-src-25! Use them to build more robust evaluations, filter data, or explore applications in other areas such as curriculum learning.

16.09.2025 08:46 — 👍 2    🔁 0    💬 1    📌 0
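One of the uses suggested above — filtering data or building harder benchmarks from difficulty scores — boils down to ranking texts by estimated difficulty and keeping the top of the ranking. A minimal sketch (the helper name and toy scores are illustrative, not the released tooling):

```python
def hardest_subset(texts, difficulty_scores, k):
    """Keep the k texts with the highest estimated difficulty."""
    ranked = sorted(zip(difficulty_scores, texts), reverse=True)
    return [text for _, text in ranked[:k]]

# Toy example with made-up difficulty scores:
texts = ["short sentence", "a long, syntactically gnarly sentence", "hello"]
scores = [0.4, 0.9, 0.1]
print(hardest_subset(texts, scores, 2))
```

In practice the scores would come from a trained estimator such as the released sentinel-src models rather than be hand-assigned.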

🔍 Our most surprising finding? LLM-based methods struggle with this task, performing worse than even simple heuristics like sentence length. In contrast, our specialized, trained models are the clear winners.

16.09.2025 08:46 — 👍 3    🔁 0    💬 1    📌 0
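The sentence-length heuristic mentioned above is trivial to reproduce; a sketch, assuming token count as the length measure (the exact variant benchmarked in the paper may differ):

```python
def length_difficulty(text: str) -> int:
    """Crude difficulty proxy: longer sentences are assumed harder."""
    return len(text.split())

sentences = [
    "The cat sat.",
    "Despite the committee's reservations, the amendment ultimately passed.",
]
# Rank from hardest to easiest under this proxy.
print(sorted(sentences, key=length_difficulty, reverse=True))
```

That such a baseline beats LLM-based estimators is exactly what makes the finding surprising.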

In our paper, we:
1️⃣ Define the task and introduce Difficulty Estimation Correlation to evaluate difficulty estimators.
2️⃣ Benchmark a wide range of methods, establishing the first SOTA.
3️⃣ Demonstrate their effectiveness in building more challenging test sets automatically.

16.09.2025 08:46 — 👍 2    🔁 0    💬 1    📌 0
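Evaluating a difficulty estimator amounts to checking how well its predicted scores track the errors MT systems actually make. The paper's Difficulty Estimation Correlation is defined there; as an illustrative stand-in only, a Spearman rank correlation between predicted difficulty and observed per-sentence error can be computed like this:

```python
def rankdata(xs):
    """1-based ranks; ties receive the average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation (inputs must not be constant)."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var_a = sum((x - ma) ** 2 for x in ra)
    var_b = sum((y - mb) ** 2 for y in rb)
    return cov / (var_a * var_b) ** 0.5

# Toy check: predicted difficulty vs. observed per-sentence error
predicted = [0.1, 0.4, 0.8]     # estimator's difficulty scores
observed = [0.05, 0.30, 0.60]   # e.g. 1 - avg quality across systems
print(spearman(predicted, observed))  # perfectly monotone -> 1.0
```

The metric in the paper may aggregate over systems and language pairs differently; this sketch only conveys the general shape of the evaluation.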

💡 Our solution: increase benchmark difficulty!

What if we could predict in advance which texts are hard to translate? We introduce Translation Difficulty Estimation as a novel task to automatically identify challenging texts for MT systems.

16.09.2025 08:46 — 👍 2    🔁 0    💬 1    📌 0

Our new #EMNLP2025 paper is out: "Estimating Machine Translation Difficulty"! 🚀

Are today's #MachineTranslation systems flawless? When SOTA models all achieve near-perfect scores on standard benchmarks, we hit an evaluation ceiling. How can we measure their true capabilities and drive future progress?

16.09.2025 08:46 — 👍 8    🔁 2    💬 1    📌 2
