WMT 2025
Hey, hey! We've released the blind test set for this year's WMT General MT and multilingual instruction tasks. Submit your systems to this special 20th-anniversary edition of the conference and see how you compare with others!
The deadline is next week, on 3 July.
www2.statmt.org/wmt25/
26.06.2025 18:09
Tired of messy, non-replicable multilingual LLM evaluation? So were we.
In our new paper, we experimentally illustrate common evaluation issues and show how structured evaluation design, transparent reporting, and meta-evaluation can help us build stronger models.
17.04.2025 13:12
Summer internship at Cohere!
Are you excited about multilingual evaluation, human judgment, or meta-eval? Come help us explore what a rigorous eval really looks like while questioning the status quo in LLM evaluation.
I'm looking for an intern (EU timezone preferred). Interested? Ping me!
28.03.2025 16:44
It's here! Our new model's technical report is out. I'm especially proud of the work we did on its multilingual capabilities - this was a massive, collective effort!
27.03.2025 16:42
Multilingual Instruction Shared Task
Big news from WMT! We are expanding beyond MT and launching a new multilingual instruction shared task. Our goal is to foster truly multilingual LLM evaluation and best practices in automatic and human evaluation. Join us and build the winning multilingual system!
www2.statmt.org/wmt25/multil...
11.03.2025 18:26
AI is evolving fast, and Aya Vision is proof of that. This open-weights model is designed to make LLMs more powerful across languages and modalities, especially vision! Can't wait to see the real-world applications, perhaps at WMT this year.
04.03.2025 14:40
WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects
As large language models (LLMs) become more and more capable in languages other than English, it is important to collect benchmark datasets in order to evaluate their multilingual performance, includin...
Huge shoutout to colleagues at Google & Unbabel for extending our WMT24 test set to 55 languages in four domains - this is a game changer!
I really hope it puts the final nail in the coffin of FLORES and WMT14. The field is evolving; legacy test sets can't show your progress. (A loading sketch follows below.)
arxiv.org/abs/2502.124...
01.03.2025 20:30
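For anyone who wants to try it, here is a minimal loading sketch in Python. It assumes the dataset is mirrored on the Hugging Face Hub as "google/wmt24pp" with per-pair configs like "en-de_DE" and "source"/"target" fields; all three identifiers are assumptions, so check the paper for the official release.

from datasets import load_dataset

# Assumed Hub id, config name, and field names; the official release may differ.
ds = load_dataset("google/wmt24pp", "en-de_DE", split="train")

# Peek at a few source/target pairs.
for example in ds.select(range(3)):
    print(example["source"], "->", example["target"])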
Shared Task: General Machine Translation
* Revamped constrained track: no restrictions on training data except licensing; all open models under 20B parameters are allowed.
* More challenging sources, long-context translation, prompt preambles, and much more.
All details are available at www2.statmt.org/wmt25/transl...
20.02.2025 21:31
* New human-evaluated language pairs: EN→Arabic, EN→Estonian, EN→Korean, EN→Serbian, Czech→German, Bhojpuri→EN, Maasai→EN
* New multilingual subtask: can you build a system that translates 30 languages?
* New modalities: additional context from video and images (text-to-text remains the core).
20.02.2025 21:31
Guess what? The jubilee 20th iteration of WMT General MT is here, and we want you to participate - the barrier to entry for making an impact is low!
This isn't just any repeat. We've kept what worked, removed what was outdated, and introduced many exciting new twists! Among the key changes are:
20.02.2025 21:31
Yeah, I haven't written a paper, since it's just a different prompt. It's published in the GitHub repository of GEMBA; a paraphrased sketch of the prompt follows below.
09.02.2025 10:14
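For context: GEMBA scores a translation by prompting an LLM directly. Below is a paraphrased sketch of a GEMBA-DA-style template in Python; the canonical wording lives in the GEMBA GitHub repository and may differ.

# Paraphrased GEMBA-DA-style scoring prompt; see the GEMBA repository
# for the canonical templates.
TEMPLATE = (
    'Score the following translation from {src_lang} to {tgt_lang} on a '
    'continuous scale from 0 to 100, where a score of zero means '
    '"no meaning preserved" and a score of one hundred means '
    '"perfect meaning and grammar".\n\n'
    '{src_lang} source: "{source}"\n'
    '{tgt_lang} translation: "{translation}"\n'
    'Score:'
)

prompt = TEMPLATE.format(
    src_lang="English",
    tgt_lang="German",
    source="The cat sat on the mat.",
    translation="Die Katze sass auf der Matte.",
)
# Send `prompt` to an LLM and parse the numeric completion as the score.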
That one is extremely large, but we haven't used it in the automatic ranking either. Unfortunately, I'm not aware of any API service for metrics.
08.02.2025 11:44
A huge thank you to all organizers, partners, and participants for making this year's WMT General MT Shared Task a success! Stay tuned for WMT25 - many exciting changes are coming!
20.11.2024 10:16
Highlights from the top systems:
* IOL-Research: led the constrained/open tracks, winning 10/11 pairs in its category.
* Unbabel-Tower70B: best participant, winning 8/11 pairs.
* Claude-3.5-Sonnet: best overall, with 9/11 wins.
* Shoutout to Dubformer (speech) & CUNI-MH (a strong constrained system).
20.11.2024 10:16
* We introduced a new robust and efficient human evaluation protocol: Error Span Annotation (ESA) - see the sketch after this post.
* Test sets are now finally document-level!
* We've added three new language pairs, including English-Spanish, where translations are near-perfect.
For more details, read our findings paper.
20.11.2024 10:16
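To make the ESA protocol concrete: annotators highlight error spans in the translation, label each span as minor or major, and then give the whole segment a 0-100 score. A hypothetical record in Python (field names are illustrative, not the official schema):

from dataclasses import dataclass, field

@dataclass
class ErrorSpan:
    start: int     # character offset into the translation
    end: int
    severity: str  # "minor" or "major"

@dataclass
class ESAJudgment:
    segment_id: str
    spans: list[ErrorSpan] = field(default_factory=list)
    score: int = 0  # final 0-100 segment score, given after marking spans

# Hypothetical judgment: one major error span, a moderate overall score.
judgment = ESAJudgment(
    segment_id="doc3_seg7",
    spans=[ErrorSpan(start=12, end=19, severity="major")],
    score=62,
)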
Exciting times at this year's WMT24 General MT Shared Task:
* Participant numbers increased by over 50%!
* Decoder-only architectures are leading the way.
* We've introduced a new speech audio modality domain.
* Online systems are losing ground to LLMs.
20.11.2024 10:16