
Nicolas Yax

@nicolasyax.bsky.social

PhD student working on the cognition of LLMs | HRL team - ENS Ulm | FLOWERS - Inria Bordeaux

66 Followers  |  81 Following  |  30 Posts  |  Joined: 14.11.2024

Latest posts by nicolasyax.bsky.social on Bluesky

Post image

New (revised) preprint with @thecharleywu.bsky.social
We rethink how to assess machine consciousness: not by code or circuitry, but by behavioral inference, as in cognitive science.
Extraordinary claims still need extraordinary evidence.
👉 osf.io/preprints/ps...
#AI #Consciousness #LLM

08.10.2025 09:02 · 👍 16 · 🔁 4 · 💬 0 · 📌 1
Preview
Relative Value Encoding in Large Language Models: A Multi-Task, Multi-Model Investigation Abstract. In-context learning enables large language models (LLMs) to perform a variety of tasks, including solving reinforcement learning (RL) problems. Given their potential use as (autonomous) decis...

🧠 New paper in Open Mind!

We show that LLM-based reinforcement learning agents encode relative reward values like humans, even when this is suboptimal, and display a positivity bias.

Work led by William Hayes w/ @nicolasyax.bsky.social

doi.org/10.1162/opmi...

#AI #LLM #RL

26.05.2025 18:15 · 👍 10 · 🔁 3 · 💬 1 · 📌 0
Preview
Generating Computational Cognitive Models using Large Language Models Computational cognitive models, which formalize theories of cognition, enable researchers to quantify cognitive processes and arbitrate between competing theories by fitting models to behavioral data....

Preprint update, co-led with @akjagadish.bsky.social, together with @marvinmathony.bsky.social, Tobias Ludwig, and @ericschulz.bsky.social!

26.05.2025 10:08 · 👍 16 · 🔁 7 · 💬 0 · 📌 0
Post image

Curious about LLM interpretability and understanding? We borrowed concepts from genetics to map language models, predict their capabilities, and even uncover surprising insights about their training!

Come see my poster at #ICLR2025: 3pm, Hall 2B, poster #505!

26.04.2025 02:03 · 👍 5 · 🔁 0 · 💬 0 · 📌 0
Preview
Charting and Navigating Hugging Face's Model Atlas As there are now millions of publicly available neural networks, searching and analyzing large model repositories becomes increasingly important. Navigating so many models requires an atlas, but as mo...

If you are interested in this line of research on mapping LLMs, you might also want to check out the amazing work of Eliahu Horwitz (arxiv.org/abs/2503.10633) and Momose Oyama (arxiv.org/abs/2502.16173). 10/10

24.04.2025 13:15 · 👍 1 · 🔁 0 · 💬 0 · 📌 0

In short, PhyloLM is a cheap and versatile algorithm that generates useful representations of LLMs, with creative applications in practice. 9/10
Paper: arxiv.org/abs/2404.04671
Colab: colab.research.google.com/drive/1agNE5...
Code: github.com/Nicolas-Yax/...
ICLR: Saturday 3pm, Poster 505

24.04.2025 13:15 · 👍 2 · 🔁 0 · 💬 1 · 📌 0
Preview
PhyloLM - a Hugging Face Space by nyax This app allows you to explore and compare language models through various visualizations, including similarity matrices, 2D scatter plots, and tree diagrams. You can search for models by name, adj...

A collaborative PhyloLM Hugging Face space is available to try the algorithm and visualize the maps: huggingface.co/spaces/nyax/... The Model Submit button has been temporarily suspended for technical reasons, but it should be back very soon! 8/10

24.04.2025 13:15 · 👍 2 · 🔁 0 · 💬 2 · 📌 0
Post image

Using code-related contexts, we obtain a fairly different map. For example, Qwen and GPT-3.5 have a very different way of coding compared to the other models, which was not visible on the reasoning map. 7/10

24.04.2025 13:15 · 👍 2 · 🔁 0 · 💬 1 · 📌 0
Post image

The choice of contexts matters, as it probes different capabilities of the LLMs. Here, on general reasoning contexts, we can plot a map of models using UMAP. The larger the edge, the closer two models are to each other; models in the same cluster are even closer! 6/10
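For readers who want to try this, here is a minimal sketch of how such a map can be drawn from a precomputed PhyloLM-style distance matrix. It assumes the umap-learn package; the model names and exact settings used in the paper may differ.

```python
# Sketch: 2D map of models from a precomputed pairwise distance matrix D.
import numpy as np
import matplotlib.pyplot as plt
import umap  # pip install umap-learn

def plot_model_map(D, names):
    # metric="precomputed" tells UMAP that D already holds pairwise distances
    emb = umap.UMAP(metric="precomputed", random_state=0).fit_transform(D)
    plt.scatter(emb[:, 0], emb[:, 1])
    for (x, y), name in zip(emb, names):
        plt.annotate(name, (x, y), fontsize=8)
    plt.show()
```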

24.04.2025 13:15 · 👍 2 · 🔁 0 · 💬 1 · 📌 0
Post image

It can also measure quantization efficiency, by looking at the behavioral distance between an LLM and its quantized versions. In the Qwen 1.5 release, GPTQ seems to perform best. This kind of metric could provide additional insight into quantization efficiency. 5/10
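As a toy illustration of this use case (model names and distance values below are made up, not results from the paper), one can simply rank quantized checkpoints by their behavioral distance to the full-precision base model:

```python
# Sketch: rank quantized variants of a base model by behavioural distance.
# Names and numbers are placeholders for illustration only.
import numpy as np

base = "qwen1.5-7b"
variants = ["qwen1.5-7b-gptq", "qwen1.5-7b-awq", "qwen1.5-7b-q4"]
# One row of a PhyloLM-style distance matrix: distance from the base model
# to each quantized variant (placeholder values).
dist_to_base = np.array([0.02, 0.04, 0.07])

for name, d in sorted(zip(variants, dist_to_base), key=lambda t: t[1]):
    print(f"{name}: behavioural distance to {base} = {d:.3f}")  # smaller = more faithful
```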

24.04.2025 13:15 · 👍 2 · 🔁 0 · 💬 1 · 📌 0
Post image

Beyond plotting trees, the PhyloLM similarity matrix is very versatile. For example, running a logistic regression on the distance matrix makes it possible to predict the performance of new models, even from unseen families, with good accuracy. Here is what we got on ARC. 4/10
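A rough sketch of that idea: the features are a model's distances to a set of reference models, and the target is its benchmark accuracy. The paper fits a logistic regression; as a hedged stand-in, this sketch fits a linear model on logit-transformed accuracies, on placeholder data.

```python
# Sketch: predict a new model's benchmark accuracy from its distances to
# reference models. Data, shapes and the exact regression are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_train = rng.random((30, 10))           # 30 known models x 10 reference distances
acc_train = rng.uniform(0.2, 0.9, 30)    # their ARC accuracies (placeholder values)

logit = lambda p: np.log(p / (1 - p))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

reg = LinearRegression().fit(X_train, logit(acc_train))
x_new = rng.random((1, 10))              # distances of an unseen model
print(f"predicted accuracy: {sigmoid(reg.predict(x_new))[0]:.2f}")
```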

24.04.2025 13:15 · 👍 2 · 🔁 0 · 💬 1 · 📌 0
Post image

Not taking these requirements into account can still produce useful distance-visualization trees. However, it is important to remember that they do not represent evolutionary trees. Feel free to zoom in to see the model names. 3/10
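For illustration, a distance matrix like the one described below (post 1/10) can be turned into such a tree with off-the-shelf hierarchical clustering. This is a simple stand-in, not the tree-construction procedure used in the paper, and the names and distances are placeholders.

```python
# Sketch: draw a tree from a model-by-model distance matrix D.
# D is random here, for illustration only; model names are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
names = ["llama-7b", "llama-13b", "mistral-7b", "qwen-7b"]
D = rng.random((4, 4))
D = (D + D.T) / 2           # make it symmetric
np.fill_diagonal(D, 0)      # zero self-distance

Z = linkage(squareform(D), method="average")  # condensed distances -> tree
dendrogram(Z, labels=names)
plt.show()
```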

24.04.2025 13:15 · 👍 2 · 🔁 0 · 💬 1 · 📌 0
Post image

Phylogenetic algorithms often require that common ancestors do not appear among the objects studied, but they are clearly able to retrieve the evolution of a family. Here is an example from the rich open-access model ecosystem: @teknium.bsky.social @maximelabonne.bsky.social @mistralai.bsky.social 2/10

24.04.2025 13:15 · 👍 2 · 🔁 0 · 💬 1 · 📌 0
Post image

We build a distance matrix by comparing the outputs of LLMs on a hundred different contexts, then build maps and trees from this distance matrix. Because PhyloLM only samples very few tokens after very short contexts, the algorithm is particularly cheap to run. 1/10
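A minimal sketch of this step, under loose assumptions: the query_model helper and the token-overlap similarity below are illustrative stand-ins, not the exact similarity measure used in PhyloLM.

```python
# Sketch: behavioural distance matrix between LLMs from sampled completions.
# `query_model(model, context)` is an assumed helper returning a short sampled
# completion string; the overlap-based similarity is a stand-in for the
# paper's metric.
import numpy as np

def distance_matrix(models, contexts, query_model, n_samples=8):
    samples = {
        (m, c): [query_model(m, c) for _ in range(n_samples)]
        for m in models for c in contexts
    }
    sims = np.zeros((len(models), len(models)))
    for i, a in enumerate(models):
        for j, b in enumerate(models):
            overlaps = []
            for c in contexts:
                sa, sb = set(samples[(a, c)]), set(samples[(b, c)])
                overlaps.append(len(sa & sb) / max(len(sa | sb), 1))
            sims[i, j] = np.mean(overlaps)  # average overlap across contexts
    return 1.0 - sims                       # distance = 1 - similarity
```

The resulting matrix can then be fed to the UMAP and tree sketches above.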

24.04.2025 13:15 · 👍 3 · 🔁 0 · 💬 1 · 📌 0
Post image

🔥 Our paper PhyloLM got accepted at ICLR 2025! 🔥
In this work, with @stepalminteri.bsky.social and @pyoudeyer.bsky.social, we show how easy it can be to infer relationships between LLMs by constructing trees, and to predict their performance and behavior at very low cost. Here is a brief recap ⬇️

24.04.2025 13:15 · 👍 16 · 🔁 5 · 💬 3 · 📌 2
Preview
MAGELLAN: Metacognitive predictions of learning progress guide... Open-ended learning agents must efficiently prioritize goals in vast possibility spaces, focusing on those that maximize learning progress (LP). When such autotelic exploration is achieved by LLM...

🚀 Introducing 🧭 MAGELLAN, our new metacognitive framework for LLM agents! It predicts its own learning progress (LP) in vast natural language goal spaces, enabling efficient exploration of complex domains. 🌍✨ Learn more: 🔗 arxiv.org/abs/2502.07709 #OpenEndedLearning #LLM #RL

24.03.2025 15:09 · 👍 9 · 🔁 3 · 💬 1 · 📌 4

we are recruiting interns for a few projects with @pyoudeyer
in bordeaux
> studying llm-mediated cultural evolution with @nisioti_eleni
@Jeremy__Perez

> balancing exploration and exploitation with autotelic rl with @ClementRomac

details and links in 🧡
please share!

27.11.2024 17:43 · 👍 6 · 🔁 6 · 💬 1 · 📌 0
Video thumbnail

Putting some Flow Lenia here too

22.11.2024 09:51 · 👍 4 · 🔁 1 · 💬 1 · 📌 0
Video thumbnail

1/ ⚡️ Looking for a fast and simple Transformer baseline for your RL environment in JAX?
Sharing my implementation of transformerXL-PPO: github.com/Reytuag/tran...
The implementation is the first to reach the third floor and obtain advanced achievements in the challenging Craftax environment.

22.11.2024 10:15 · 👍 3 · 🔁 1 · 💬 1 · 📌 0
Preview
Generalization or Memorization: Data Contamination and Trustworthy Evaluation for Large Language Models Recent statements about the impressive capabilities of large language models (LLMs) are usually supported by evaluating on open-access benchmarks. Considering the vast size and wide-ranging sources of...

Related work on contamination in LLMs:
arxiv.org/abs/2402.15938 Dong et al. 2024
arxiv.org/abs/2310.15007 Meeus et al. 2023
arxiv.org/abs/2310.17623 Oren et al. 2024

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 0 · 📌 0
Preview
Assessing Contamination in Large Language Models: Introducing the LogProber method In machine learning, contamination refers to situations where testing data leak into the training set. The issue is particularly relevant for the evaluation of the performance of Large Language Models...

LogProber paper: www.arxiv.org/abs/2408.14352
Code: github.com/Nicolas-Yax/...
Colab: colab.research.google.com/drive/1GDbmE...

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 1 · 📌 0

It is part of a research agenda to open the LLM black box and provide tools for researchers to interact with models in a more transparent manner. The previous paper in this agenda was PhyloLM, which proposed methods to investigate the phylogeny of LLMs: arxiv.org/abs/2404.04671 15/15

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 1 · 📌 0

This method was first introduced in our paper "Studying and improving reasoning in humans and machines", which investigated the evolution of cognitive biases in language models: www.nature.com/articles/s44... 14/15

15.11.2024 13:47 · 👍 1 · 🔁 0 · 💬 1 · 📌 0

As such, LogProber can be a useful tool to check for contamination in language models at very low cost (a single forward pass), given some high-level assumptions about the training method (which are very often satisfied in practice). 13/15
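To give a flavour of the quantities involved (this is not the exact LogProber score; see the paper for the actual method), the per-token log-probabilities of a test item can be read off from a single forward pass with Hugging Face transformers; the model and question below are just examples.

```python
# Sketch: per-token log-probabilities of a benchmark question in one forward
# pass. LogProber's actual score is computed differently from these values;
# this only illustrates the one-forward-pass cost. Model/question are examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

question = "Which planet is known as the Red Planet?"
ids = tok(question, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits                         # single forward pass
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)   # position t predicts token t+1
token_lp = logprobs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
print(f"mean log-prob per token: {token_lp.mean().item():.2f}")
# Anomalously high values on a benchmark item hint the text was seen in training.
```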

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 1 · 📌 0

Lastly, the A scenario is more common in instruction fine-tuning settings. For open-access models, fine-tuning datasets are often shared, making it possible to check directly whether the item is in the training set, which is rarely the case for pretraining datasets. 12/15

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 1 · 📌 0
Post image

On the other hand, if the score is high, both QA and Q are possible: the model has seen the question, but maybe not the answer, during training. LogProber cannot tell which scenario occurred, but the item is suspicious. 11/15

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 1 · 📌 0
Post image

When testing an LLM on an item, only one of these scenarios is possible. This means that if a model pretrained with full language modelling returns a low contamination score with LogProber on a given item, then the item is safe, as only the STD scenario is possible. 10/15

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 1 · 📌 0
Post image

Most LLMs are pretrained with full language modelling, meaning they fit all tokens (both question and answer tokens), so the A type of training is not a likely scenario. 9/15
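In standard Hugging Face-style training code, the difference between full language modelling and A-type (answer-only) training is just which label positions are masked out of the loss with the -100 convention; here is a small sketch with toy token ids.

```python
# Sketch: full language modelling vs answer-only (A-type) loss masking.
# With the usual Hugging Face convention, label -100 is ignored by the loss.
import torch

input_ids = torch.tensor([[101, 102, 103, 104, 105, 106]])  # toy token ids
question_len = 4                                            # first 4 tokens = question

labels_full = input_ids.clone()          # full LM: fit question + answer tokens

labels_answer_only = input_ids.clone()   # A-type: fit only the answer tokens
labels_answer_only[:, :question_len] = -100

print(labels_full)
print(labels_answer_only)
```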

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 1 · 📌 0
Post image

Results indicate that LogProber accurately detects contamination in the QA and STD scenarios, with Q leading to false positives and A to false negatives. In practice, some of these scenarios are not possible depending on how the LLM was trained. 8/15

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 1 · 📌 0
Post image

QA is when both the question and the answer appear in the training data, A when both are present but the model only fits the answer tokens (as happens in some fine-tuning methods), and STD when neither the question nor the answer appears in the training data. 7/15

15.11.2024 13:47 · 👍 0 · 🔁 0 · 💬 1 · 📌 0
