We're thrilled to announce that some of our research will be presented at @emnlpmeeting.bsky.social next week! 🥳
If you're attending the conference, don't miss the chance to explore our work and connect with our team.
How well do LLMs handle multilinguality? 🤖
🔬 We brought the rigor of Machine Translation evaluation to multilingual LLM benchmarking and organized the WMT25 Multilingual Instruction Shared Task, spanning 30 languages and 5 subtasks.
Most multilingual instruction data starts out in English, and translation can't capture cultural nuance or linguistic richness.
What if we optimized prompts instead of completions?
That's the focus of our most recent work on prompt space optimization for multilingual synthetic data 🗣️
The next generation of open LLMs should be inclusive, compliant, and multilingual by design. That's why we (@icepfl.bsky.social @ethz.ch @cscsch.bsky.social) built Apertus.
Let's do the venue justice. Very excited for today's multilingual workshops at #COLM2025.
Looking forward to tomorrow's #COLM2025 workshop on multilingual data quality! 🤩
Ready for our poster today at #COLM2025!
🎭 This paper has had an interesting journey; come find out and discuss with us! @swetaagrawal.bsky.social @kocmitom.bsky.social
Side note: being a parent in research does have its perks: poster transportation solved ✅
We're not your average lab. We're a hybrid research environment dedicated to revolutionizing the ML space.
And we're hiring a Senior Research Scientist to co-create with us.
If you believe in research as a shared, global effort, this is your chance.
💡 A collaborative, diverse team is key, in real life as in the LLM world 💪🦾
Check out our latest work that builds on this insight.
Breaking into AI research is harder than ever, and early-career researchers face fewer chances to get started.
Entry points matter.
We started the Scholars Program 3 years ago to give new researchers a real shot. Excited to open applications for year 4! ✨
While effective for chess ♟️, Elo ratings struggle with LLM evaluation due to volatility and transitivity issues.
New post in collaboration with AI Singapore explores why Elo falls short for AI leaderboards and how we can do better.
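For intuition on the volatility point, here is a minimal Python sketch (illustrative only: the model names, K-factor, and outcomes are assumptions, not data from the post). Because Elo updates are sequential, replaying the very same battles in a different order yields different final ratings:

```python
# Standard sequential Elo: each pairwise "battle" updates both models' ratings.
def elo_update(r_a, r_b, score_a, k=32):
    """score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

def run_battles(battles, k=32):
    """Replay a sequence of (model_a, model_b, score_a) battles from scratch."""
    ratings = {}
    for a, b, score_a in battles:
        r_a = ratings.setdefault(a, 1000.0)
        r_b = ratings.setdefault(b, 1000.0)
        ratings[a], ratings[b] = elo_update(r_a, r_b, score_a, k)
    return ratings

# Hypothetical battles with cyclic preferences (A beats B, B beats C, C beats A).
battles = [("A", "B", 1.0), ("B", "C", 1.0), ("C", "A", 1.0), ("A", "B", 0.0)]

# Same evidence, different order -> different final ratings.
print(run_battles(battles))
print(run_battles(list(reversed(battles))))
```

The cyclic outcomes also illustrate the transitivity problem: no single scalar rating can faithfully represent A > B > C > A.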
COLM 2025 is now accepting applications for:
Financial Assistance Application -- docs.google.com/forms/d/e/1F...
Volunteer Application -- docs.google.com/forms/d/e/1F...
Childcare Financial Assistance Application -- docs.google.com/forms/d/e/1F...
All due by July 31
🍋 Squeezing the most out of a few samples: check out our LLMonade recipe for few-sample test-time scaling in multitask environments.
It turns out that standard methods miss out on gains for non-English languages. We propose more robust alternatives.
Very proud of this work, which our scholar Ammar led!
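For context, a minimal sketch of one standard few-sample test-time scaling baseline, best-of-N selection with an external scorer. This is not the LLMonade recipe itself; `sample_fn` and `score_fn` are hypothetical stand-ins for a model's sampler and a reward/quality model.

```python
from typing import Callable

def best_of_n(prompt: str,
              sample_fn: Callable[[str], str],
              score_fn: Callable[[str, str], float],
              n: int = 4) -> str:
    """Draw n candidate completions and keep the one the scorer rates highest."""
    candidates = [sample_fn(prompt) for _ in range(n)]
    return max(candidates, key=lambda completion: score_fn(prompt, completion))
```

If the scorer is trained mostly on English data, non-English candidates can be mis-ranked, which is one way such a default can leave gains on the table outside English.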
🚨 LLM safety research needs to be at least as multilingual as our models.
What's the current state, and how do we progress from here?
This work, led by @yongzx.bsky.social, has answers!
No LLM safety without multilingual safety: what is missing to close the language gap? And where does this gap actually originate?
Answers below.
Multilingual 🤝 reasoning 🤝 test-time scaling 🔥🔥🔥
New preprint!
@yongzx.bsky.social has all the details.
1/ Science is only as strong as the benchmarks it relies on.
So how fair, and how scientifically rigorous, is today's most widely used evaluation benchmark?
We took a deep dive into Chatbot Arena to find out. 🧵
Thank you @rapha.dev! Hope we can establish going a little deeper with evals, rather than just focusing on breadth (massive multilinguality).
MT eyes on multilingual LLM benchmarks: here's a bunch of simple techniques that we could adopt easily and that, taken together, give a much richer understanding of where we are with multilingual LLMs.
💬 Bonus question: how can we spur research on evaluation of evaluations?
Tired of messy, non-replicable multilingual LLM evaluation? So were we.
In our new paper, we experimentally illustrate common evaluation issues and show how structured evaluation design, transparent reporting, and meta-evaluation can help us build stronger models.
🎯 To keep advancing mLLMs, we need to advance our evaluation methods.
We need meta-evaluation research to think beyond one-size-fits-all automatic evaluation, to develop richer assessments in human evaluation, and to iterate so these keep pace with advances in capabilities.
[Image: Checklist for multilingual LLM evaluation]
Yes, none of these principles are novel, nor are the techniques particularly sophisticated.
Despite their effectiveness, none of them are standard practice.
We've compiled a checklist to help incorporate them into model evaluations.
[Image: Table comparing model scores under different prompt templates]
(5) Advancing reproducibility through transparency 💪
Current mLLM evaluations are nearly impossible to reproduce due to the opacity of evaluation configurations (incl. task formulation, as in the example above). We argue for open evaluation releases that include model outputs and their scores.
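As an illustration of what such a release could contain (hypothetical field names, not the paper's actual schema): one self-contained record per prompt/model pair, with the full configuration stored alongside the raw output and its score so others can re-run, re-score, or re-aggregate.

```python
import json

# One illustrative evaluation record for an open release.
record = {
    "benchmark": "example-multilingual-eval",            # hypothetical name
    "language": "de",
    "prompt_id": "de-0001",
    "prompt_template": "Beantworte die folgende Frage:\n{question}",
    "decoding": {"temperature": 0.3, "top_p": 0.9, "max_tokens": 512},
    "model": "model_a",
    "output": "<raw completion stored verbatim>",
    "judge": {"name": "model_j", "prompt_template_id": "pairwise-v1"},
    "score": 0.5,
}
print(json.dumps(record, ensure_ascii=False, indent=2))
```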
[Image: Diagram breaking down win-rate comparisons across buckets of prompt length]
(4) Conducting richer analyses 🔬
Aggregate benchmark metrics provide no insight into what differentiates the outputs of two models, yet finding that out is often the first step in human evaluation. For example, we can group evaluation prompts by length or category.
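A minimal sketch of that kind of breakdown (the records below are hypothetical, not the paper's data): bucket prompts by length and report a win rate per bucket instead of a single aggregate number.

```python
from collections import defaultdict

# Hypothetical pairwise judgements: (prompt, 1 if model A was preferred else 0).
records = [
    ("Translate this sentence into German.", 1),
    ("Write a detailed, well-structured essay about renewable energy.", 0),
    ("Summarize the paragraph above in one sentence.", 1),
]

def length_bucket(prompt, edges=(5, 20, 100)):
    """Assign a prompt to a word-count bucket."""
    n_words = len(prompt.split())
    for edge in edges:
        if n_words <= edge:
            return f"<= {edge} words"
    return f"> {edges[-1]} words"

buckets = defaultdict(list)
for prompt, a_preferred in records:
    buckets[length_bucket(prompt)].append(a_preferred)

for bucket, outcomes in sorted(buckets.items()):
    print(f"{bucket}: win rate {sum(outcomes) / len(outcomes):.2f} (n={len(outcomes)})")
```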
[Image: Table displaying model-ranking changes depending on language resourcedness and task focus]
(3) Aggregating responsibly
How we aggregate results across tasks and languages shapes how model comparisons are interpreted. Uniform weighting is not necessarily fair, given differences in training distribution (e.g., language or task support).
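A minimal sketch of why the weighting matters (the per-language win rates and weights below are made up for illustration): a uniform macro-average and a coverage-weighted average can disagree about which model is ahead.

```python
# Hypothetical win rates of model A over model B, per language.
per_language = {"en": 0.58, "de": 0.55, "hi": 0.45, "sw": 0.40}

# Assumed share of each language in model A's training data.
coverage = {"en": 0.70, "de": 0.20, "hi": 0.07, "sw": 0.03}

uniform = sum(per_language.values()) / len(per_language)
weighted = sum(per_language[lang] * coverage[lang]
               for lang in per_language) / sum(coverage.values())

print(f"uniform macro-average:  {uniform:.3f}")   # 0.495 -> model B slightly ahead
print(f"coverage-weighted mean: {weighted:.3f}")  # 0.560 -> model A ahead
```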
[Image: Diagram showing the significance of win-rate differences in relation to sample size]
(2) Measuring significance, power, and effect size
Generative evaluations for mLLMs rarely consider the significance of results, the statistical power of the test setup, or effect sizes. We illustrate how these help report model differences more meaningfully.
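A minimal sketch of the kind of check this enables (hypothetical counts; not the paper's exact protocol): an exact binomial test of a head-to-head win rate against the 50% null, plus a simple effect size.

```python
from scipy import stats

# Hypothetical pairwise judgements of model A vs. model B on one benchmark.
wins, ties, losses = 112, 40, 88
n = wins + losses                      # one common choice: drop ties for the test

result = stats.binomtest(wins, n, p=0.5, alternative="two-sided")
win_rate = wins / n
effect_size = 2 * abs(win_rate - 0.5)  # simple effect size: deviation from parity

print(f"win rate = {win_rate:.3f}, p = {result.pvalue:.4f}, "
      f"effect size = {effect_size:.3f}")
# A power analysis runs the same logic in reverse: how many prompts are needed
# to reliably detect a win rate of, say, 0.56 against the 50% null?
```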
[Image: Diagram relating prompt translation quality to changes in win-rate differences across languages]
(1) Treating synthetic data with care
Translations are a common way to expand evaluation sets to new languages. We demonstrate that prompt translation can shift win rates, with magnitudes that depend on translation quality and on the generative models being compared.
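A minimal sketch of one way to surface that effect (entirely hypothetical numbers; the quality scores stand in for, e.g., an MT quality-estimation metric): stratify pairwise outcomes on translated prompts by translation quality and compare win rates across strata.

```python
import statistics

# (translation quality in [0, 1], 1 if model A was preferred on the translated prompt)
records = [(0.95, 1), (0.92, 1), (0.90, 0), (0.88, 1),
           (0.55, 0), (0.50, 0), (0.45, 1), (0.40, 0)]

high = [win for quality, win in records if quality >= 0.8]
low = [win for quality, win in records if quality < 0.8]

print("win rate on high-quality translations:", statistics.mean(high))  # 0.75
print("win rate on low-quality translations: ", statistics.mean(low))   # 0.25
```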
💡 … it turns out that by adopting practices from MT evaluation we can improve the expressiveness of generative multilingual LLM (mLLM) evaluations. Examples in the thread below.
[Image: Screenshot of the paper header with title, author list, and affiliations]
New preprint with Eleftheria Briakou @swetaagrawal.bsky.social @mziizm.bsky.social @kocmitom.bsky.social!
arxiv.org/abs/2504.11829
It reflects experiences from my personal research journey: coming from MT into multilingual LLM research, I missed reliable evaluations and evaluation research…
We are excited to introduce Kaleidoscope, the largest culturally authentic exam benchmark.
Most VLM benchmarks are English-centric or rely on translations, missing linguistic and cultural nuance. Kaleidoscope expands in-language multilingual and multimodal VLM evaluation.