Armel Randy Zebaze

@armelrandy.bsky.social

PhD Student @InriaParisNLP

17 Followers  |  36 Following  |  11 Posts  |  Joined: 16.02.2025

Latest posts by armelrandy.bsky.social on Bluesky


🎉 Happy to share that 2 of our papers were accepted to #EMNLP2025 Findings! 🚀
[1] Compositional Translation: A Novel LLM-based Approach for Low-resource Machine Translation
[2] TopXGen: Topic-Diverse Parallel Data Generation for Low-Resource Machine Translation

Thank you to my amazing co-authors! 🙌

21.08.2025 16:26 · 👍 1  🔁 0  💬 0  📌 0

We are thrilled to announce our next seminar by Syrielle Montariol @smontariol.bsky.social (EPFL) entitled "Multimodal perception and reasoning" on Friday 21st February at 11am CET. Connection link to be shared on the day. Details here: t.co/pPbWfkALM4!

18.02.2025 14:06 · 👍 10  🔁 2  💬 1  📌 0

TL;DR
Everything is in the title.

The paper is available on arXiv:
arxiv.org/pdf/2408.00397

The code and outputs are available on GitHub:
github.com/ArmelRandy/I...

Thanks to my co-authors @bensagot.bsky.social and @rachelbawden.bsky.social, and to @inriaparisnlp.bsky.social.

10/10

17.02.2025 17:54 · 👍 1  🔁 0  💬 0  📌 0

Finally, we demonstrate that similarity-based example selection (in a high-quality sample pool) helps few-shot MT with LLMs ranging from 2 to 70 billion parameters. As the number of in-context examples grows, the gap over random selection remains significant.

9/10

17.02.2025 17:54 · 👍 0  🔁 0  💬 1  📌 0

Using the FLORES-200 dev set (997 human-written pairs) as our initial selection pool, we study the impact of reducing or expanding it with bitexts from the NLLB dataset. For Swahili, similarity search (notably with SONAR) proves more robust to pool composition than random selection.

8/10

17.02.2025 17:54 · 👍 1  🔁 0  💬 1  📌 0

SONAR also outperforms example selection based on string-matching metrics (BLEU, BM25, R(rerank)-BM25) and on cosine similarity with RoBERTa's sentence representations.
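For contrast, a string-matching baseline can be sketched with the rank_bm25 package; the whitespace tokenization and the toy pool below are simplifications, not the paper's setup.

```python
# Sketch of a string-matching baseline: rank pool pairs with BM25 over
# the source side instead of embedding similarity. Toy data only.
from rank_bm25 import BM25Okapi

pool = [
    ("Good morning.", "Habari za asubuhi."),
    ("Where is the market?", "Soko liko wapi?"),
    ("The weather is nice today.", "Hali ya hewa ni nzuri leo."),
]
bm25 = BM25Okapi([src.lower().split() for src, _ in pool])

def bm25_select(query: str, k: int = 2):
    """Rank pool pairs by BM25 overlap between the query and pool sources."""
    scores = bm25.get_scores(query.lower().split())
    ranked = sorted(range(len(pool)), key=lambda i: -scores[i])
    return [pool[i] for i in ranked[:k]]

print(bm25_select("Where can I find the market this morning?"))
```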

7/10

17.02.2025 17:54 · 👍 0  🔁 0  💬 1  📌 0

Experiments with five sentence-embedding models on four FLORES-200 languages show that similarity-based selection outperforms random selection for LRLs but offers only marginal gains for HRLs (French). In both settings, the sentence embeddings perform similarly, with SONAR slightly ahead.

6/10

17.02.2025 17:54 · 👍 0  🔁 0  💬 1  📌 0

We tackle these issues by assigning a zero score to problematic generations, making the metrics language-aware. Specifically, we evaluate with Language-aware COMET, based on COMET-22. It preserves COMET's accuracy while improving the assessment of problematic outputs.
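The zero-scoring gate can be sketched as follows; langdetect stands in for whichever language identifier is actually used, the repetition heuristic is illustrative, and base_score would come from COMET-22.

```python
# Toy sketch of the "language-aware" zero-scoring idea: gate a base
# segment score (e.g. from COMET-22) with simple validity checks.
# langdetect is a stand-in language identifier; the repetition test
# is an illustrative heuristic, not the paper's exact rule.
from langdetect import detect

def is_degenerate(text: str) -> bool:
    """Flag empty outputs or outputs dominated by a single repeated token."""
    tokens = text.split()
    if not tokens:
        return True
    top_count = max(tokens.count(t) for t in set(tokens))
    return len(tokens) > 3 and top_count / len(tokens) > 0.5

def language_aware_score(hypothesis: str, target_lang: str, base_score: float) -> float:
    """Return base_score, or 0.0 for empty, repetitive, or wrong-language outputs."""
    if is_degenerate(hypothesis):
        return 0.0
    try:
        detected = detect(hypothesis)  # ISO 639-1 code, e.g. "sw" for Swahili
    except Exception:  # langdetect raises when it finds no usable features
        return 0.0
    return base_score if detected == target_lang else 0.0

# A degenerate hypothesis is scored 0.0 regardless of the neural metric.
print(language_aware_score("the the the the the", "sw", base_score=0.71))  # 0.0
```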

5/10

17.02.2025 17:54 · 👍 1  🔁 0  💬 1  📌 0

Translating into low-resource languages presents two main challenges:
• Outputs may be in the wrong language (e.g., repeating the prompt).
• They may be empty or contain meaningless repetitions.
Current neural metrics are not robust to these issues.

4/10

17.02.2025 17:54 · 👍 0  🔁 0  💬 1  📌 0

We examine three aspects:
• Evaluating LLM-based MT into LRLs.
• Assessing whether similarity-based example selection improves MT, especially with the small selection pools typical of LRLs, and at scale.
• Testing the strategy's robustness to heterogeneity in the selection pool.

3/10

17.02.2025 17:54 · 👍 0  🔁 0  💬 1  📌 0

We explore in-context example selection for MT, focusing on LRLs (Swahili, Wolof, etc.). Given a sentence and a selection pool, we choose the k closest pairs based on a sentence embedding or a string-matching metric, placing the most similar pair closest to the sentence to be translated.
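In code, the recipe looks roughly like this. A minimal sketch, assuming a generic off-the-shelf encoder (sentence-transformers here as a stand-in for SONAR); the toy English-Swahili pool and the prompt template are illustrative, not the paper's exact setup.

```python
# Minimal sketch of similarity-based in-context example selection for MT.
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in encoder; the paper compares several, including SONAR.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Toy selection pool of (source, reference translation) pairs.
pool = [
    ("Good morning.", "Habari za asubuhi."),
    ("Where is the market?", "Soko liko wapi?"),
    ("The weather is nice today.", "Hali ya hewa ni nzuri leo."),
]

def select_examples(query: str, k: int = 2):
    """Pick the k pool pairs whose sources are closest to the query."""
    sources = [src for src, _ in pool]
    emb = embedder.encode([query] + sources, normalize_embeddings=True)
    sims = emb[1:] @ emb[0]          # cosine similarity to the query
    top = np.argsort(-sims)[:k]      # indices of the k most similar pairs
    return [pool[i] for i in top[::-1]]  # most similar placed last

def build_prompt(query: str, k: int = 2) -> str:
    """Few-shot MT prompt: k selected pairs, most similar next to the query."""
    blocks = [f"English: {s}\nSwahili: {t}" for s, t in select_examples(query, k)]
    blocks.append(f"English: {query}\nSwahili:")
    return "\n\n".join(blocks)

print(build_prompt("Where can I buy food this morning?"))
```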

2/10

17.02.2025 17:54 · 👍 0  🔁 0  💬 1  📌 0

I am happy to announce that our paper "In-context Example Selection via Similarity Search Improves Low-resource Machine Translation" was accepted to #NAACL2025 Findings 🤩🔥.

What is this about?

TAGS: Machine Translation (MT), High/Low-resource languages (H/LRLs).
🧵

1/10

17.02.2025 17:54 · 👍 5  🔁 1  💬 1  📌 0
