LAGoM NLP

@lagom-nlp.bsky.social

We are the Leuven AI Group of Multilingual NLP (LAGoM NLP), a research lab in the Department of Computer Science at KU Leuven, led by @mdlhx

521 Followers  |  174 Following  |  53 Posts  |  Joined: 05.12.2023

Latest posts by lagom-nlp.bsky.social on Bluesky

Authors: @wpoelman.bsky.social, Thomas Bauwens and @mdlhx.bsky.social

03.11.2025 12:06 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Confounding Factors in Relating Model Performance to Morphology Wessel Poelman, Thomas Bauwens, Miryam de Lhoneux. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. 2025.

We are presenting this paper at #EMNLP2025 in the "Multilinguality and Language Diversity" oral session this Wednesday (November 5th) from 11:00-12:30 (UTC+8). Paper: aclanthology.org/2025.emnlp-m... Code: github.com/LAGoM-NLP/Co...

03.11.2025 11:53 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Our proposed tokenizer metrics are a step in that direction

03.11.2025 11:53 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We disentangle more such factors in an attempt to outline what the "ideal" experiment would look like and how to work backwards to a feasible setup. This way, we lay out the requirements to reliably answer whether, and how, morphology relates to language modeling.

03.11.2025 11:53 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Finally, we look at experimental factors that have confounded the setups and conclusions of prior research. Coarse language grouping is just one of several such confounders.

03.11.2025 11:53 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

What's more: using entropy allows for finer-grained ordering of languages than the coarse groupings of "agglutinative" and "fusional".

03.11.2025 11:53 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We compute the normalized entropy over each token's distribution of neighbors, and indeed find that agglutinative languages tend to have higher entropy than fusional languages on average.

03.11.2025 11:53 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
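
For anyone who wants to play with the idea, here is a minimal sketch of that entropy computation, assuming a corpus already segmented into subword tokens. This is not our released code (see the repo linked above): the function names and toy corpus are illustrative, and normalizing by the log of the number of distinct neighbors is just one possible normalization choice.

```python
# Sketch only: normalized entropy of each token's right-neighbor distribution.
import math
from collections import Counter, defaultdict

def neighbor_entropies(token_sequences):
    """Return {token: normalized entropy of its right-neighbor distribution}."""
    neighbors = defaultdict(Counter)
    for seq in token_sequences:
        for cur, nxt in zip(seq, seq[1:]):
            neighbors[cur][nxt] += 1

    entropies = {}
    for token, counts in neighbors.items():
        total = sum(counts.values())
        probs = [c / total for c in counts.values()]
        h = -sum(p * math.log2(p) for p in probs)
        if len(counts) > 1:
            # Normalize by the maximum entropy for this many distinct neighbors.
            entropies[token] = h / math.log2(len(counts))
        else:
            entropies[token] = 0.0
    return entropies

# Toy "corpus" of tokenized sequences.
corpus = [["un", "break", "able"], ["un", "do"], ["break", "fast"]]
print(neighbor_entropies(corpus))  # e.g. {"un": 1.0, "break": 1.0}
```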

To measure this token ambiguity, we revisit the idea of accessor variety (AV) from Harris (1955) and Feng et al. (2004) by counting which tokens neighbor each other in a corpus and how many times.

03.11.2025 11:53 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
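
A rough illustration of that counting step, assuming an already-tokenized corpus; the helper names are made up for the example, and this is not our released implementation.

```python
# Sketch only: count which tokens neighbor each other, and derive an
# accessor-variety-style statistic (number of distinct observed neighbors).
from collections import Counter, defaultdict

def neighbor_counts(token_sequences):
    """Return (left, right): for each token, a Counter of its neighbors."""
    left, right = defaultdict(Counter), defaultdict(Counter)
    for seq in token_sequences:
        for a, b in zip(seq, seq[1:]):
            right[a][b] += 1  # b was seen directly after a
            left[b][a] += 1   # a was seen directly before b
    return left, right

def accessor_variety(neighbor_counter):
    """Classic AV: how many distinct neighbors a token was observed with."""
    return len(neighbor_counter)

corpus = [["un", "break", "able"], ["un", "do"], ["break", "fast"]]
left, right = neighbor_counts(corpus)
print(accessor_variety(right["un"]))  # 2 distinct tokens follow "un" here
```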

it is harder to predict the next token. We then hypothesize that this contextual ambiguity is higher in morphologically complex languages.

03.11.2025 11:53 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

In our new #EMNLP2025 paper, we argue that such statistics should relate directly to what a language model actually does: reliably predicting the next token produced by its tokenizer. If the most recent token has more contextual ambiguity,

03.11.2025 11:53 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

When is a language hard to model? Previous research has suggested that morphological complexity both does and does not play a role, but these studies relate the performance of language models to corpus statistics of words or subword tokens in isolation.

03.11.2025 11:53 β€” πŸ‘ 6    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0

Ok, added the ones that were missing from yours to ours

12.08.2025 10:54 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

βœ…

12.08.2025 10:48 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

You're included in the NLP labs starter pack, see go.bsky.app/LKGekew

11.08.2025 09:46 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Supervised and Unsupervised Probing of Shortcut Learning: Case Study on the Emergence and Evolution of Syntactic Heuristics in BERT Elke Vandermeerschen, Miryam De Lhoneux. Findings of the Association for Computational Linguistics: ACL 2025. 2025.

* (Findings) Supervised and Unsupervised Probing of Shortcut Learning: Case Study on the Emergence and Evolution of Syntactic Heuristics in BERT by Elke Vandermeerschen and @mdlhx.bsky.social, presented by Elke. URL: aclanthology.org/2025.finding...

28.07.2025 11:52 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
GRaMPa: Subword Regularisation by Skewing Uniform Segmentation Distributions with an Efficient Path-counting Markov Model Thomas Bauwens, David Kaczér, Miryam De Lhoneux. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025.

Our group has two papers at #acl2025:
* (Main) GRaMPa: Subword Regularisation by Skewing Uniform Segmentation Distributions with an Efficient Path-counting Markov Model by Thomas Bauwens, David Kaczér and @mdlhx.bsky.social, presented by Thomas. URL: aclanthology.org/2025.acl-lon...

28.07.2025 11:52 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 2
CLIN35 - Call for Abstracts We invite submissions for CLIN35, the 35th edition of the Computational Linguistics in the Netherlands (CLIN) conference, which will take place in Leuven on September 12th, 2025. Abstracts describing ...

The submission deadline for #CLIN35 has been extended by one week! New deadline: June 20th. 🔊 Spread the word! More info: clin35.ccl.kuleuven.be/call-for-abs...

06.06.2025 09:38 β€” πŸ‘ 1    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

Reminder, a few more days to apply!

03.06.2025 09:35 β€” πŸ‘ 2    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
CLIN35 Computational Linguistics in The Netherlands (CLIN) is a yearly conference on computational linguistics. Each year the conference is organized by a different institution in the Dutch-speaking region. ...

📅 Don't forget! The deadline for submitting your abstract to the #CLIN conference in Leuven is coming: 13th of June! Submitting is easy: name, title of your work, 500-word abstract, done! #nlp #nlproc #compling #llm #ai #dutch clin35.ccl.kuleuven.be

02.06.2025 07:08 β€” πŸ‘ 1    πŸ” 2    πŸ’¬ 0    πŸ“Œ 2

We are hiring in #nlproc!!

16.05.2025 08:24 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

βœ…

18.02.2025 08:14 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I'm looking for a postdoc, ideally to start ASAP!

The work would be in the EU-funded TrustLLM project, focusing on modularisation and language adaptation of LLMs, tokenization, and evaluation benchmarks for multilingual LLMs. The position would be full-time for 2 years with no teaching obligation.

13.12.2024 10:18 β€” πŸ‘ 18    πŸ” 8    πŸ’¬ 1    πŸ“Œ 0

We look at the role of English in this evaluation: it can be, and often is, used as an interface to boost task performance, or it can be used as a natural language to evaluate language understanding. We recommend moving away from task performance as the main goal and focusing on language understanding.

12.12.2024 15:28 β€” πŸ‘ 7    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
The Roles of English in Evaluating Multilingual Language Models Multilingual natural language processing is getting increased attention, with numerous models, benchmarks, and methods being released for many languages. English is often used in multilingual evaluati...

@wpoelman.bsky.social and @mdlhx.bsky.social's 🔥 hot takes on multilingual LLM evaluation, to appear at @nodalida.bsky.social, are up on arXiv: arxiv.org/abs/2412.08392

12.12.2024 15:28 β€” πŸ‘ 14    πŸ” 1    πŸ’¬ 2    πŸ“Œ 1

🚨 New Account Alert! This is the official account of the *MilaNLP group*. We had to recreate it because it was not indexed.

If you were following us before, please follow us again. If not, now's the perfect time to start!

06.12.2024 14:08 β€” πŸ‘ 19    πŸ” 7    πŸ’¬ 1    πŸ“Œ 0
MilaNLP Lab (@milanlp.bsky.social) The Milan Natural Language Processing Group #NLProc #ML #AI https://milanlproc.github.io/

milanlp.bsky.social is having the same issue, maybe take a look at this github issue here: github.com/bluesky-soci...

02.12.2024 09:39 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
NLP grad students

There are too many starter packs.
👇 Here's a list, mostly for NLP, ML, and related areas.

01.12.2024 03:05 β€” πŸ‘ 40    πŸ” 11    πŸ’¬ 3    πŸ“Œ 2

Moreover, we advocate for a shift in perspective from seeking a general definition of data quality towards a more language- and task-specific one. Ultimately, we aim for this study to serve as a guide to using Wikipedia for pretraining in a multilingual setting.

29.11.2024 14:02 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We evaluate the downstream impact of quality filtering on Wikipedia by training tiny monolingual models for each Wikipedia and find that data quality pruning is an effective means of resource-efficient training without hurting performance, especially for low-resource languages (LRLs).

29.11.2024 14:02 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We subject non-English Wikipedias to common quality filtering techniques like script filtering, MinHash and heuristic filtering, which reveal widespread issues such as a high percentage of one-line articles and duplicate articles.

29.11.2024 14:02 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
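
To make those filtering steps concrete, here is a small sketch of the kinds of checks mentioned above (illustrative only, not our actual pipeline): a crude script filter, a one-line-article heuristic, and exact-duplicate removal via hashing. MinHash-based near-duplicate detection is omitted, and all names and thresholds are made up for the example.

```python
# Sketch only: simple quality filters over a collection of Wikipedia articles.
import hashlib
import unicodedata

def mostly_in_script(text, script_prefix="LATIN", threshold=0.5):
    """True if enough alphabetic characters have Unicode names starting with
    the expected script prefix (e.g. 'LATIN', 'CYRILLIC')."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return False
    in_script = sum(unicodedata.name(c, "").startswith(script_prefix) for c in letters)
    return in_script / len(letters) >= threshold

def filter_articles(articles, script_prefix="LATIN"):
    """Yield articles that pass the heuristic and deduplication filters."""
    seen = set()
    for text in articles:
        if len(text.strip().splitlines()) <= 1:        # one-line article
            continue
        if not mostly_in_script(text, script_prefix):  # unexpected script
            continue
        digest = hashlib.md5(text.strip().encode("utf-8")).hexdigest()
        if digest in seen:                             # exact duplicate
            continue
        seen.add(digest)
        yield text

articles = [
    "Short stub.",
    "A real article.\nWith a second line.",
    "A real article.\nWith a second line.",  # duplicate
]
print(list(filter_articles(articles)))  # keeps one copy of the two-line article
```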
