David Schlangen

@davidschlangen.bsky.social

Prof of Computational Linguistics / NLP @ Uni Potsdam, Germany. Working on embodied / multimodal / conversational AI. In a way. Also affiliated w/ DFKI Berlin (German Research Center for AI).

379 Followers 1,375 Following 70 Posts Joined Aug 2023
4 days ago
A Playschool for LLMs

Call for Papers: LM Playschool (LMP 2026) – Co-located with EMNLP 2026!

Can #LLMs learn, adapt, and improve through situated, game-based interaction?

See lm-playschool.github.io

#GenAI #NLProc #HRI #ELLISforEurope #AI #ML

3 months ago

Any use, exploitation, or sharing of the leaked information is a violation of OpenReview's Terms of Use (openreview.net/legal/terms) and ACL's code of conduct (2026.eacl.org/code/) and may result in OpenReview account suspension, desk rejection and multi-year bans from *ACL conferences. (🧵 2/3)

3 months ago

📢 Statement from ACL and EACL 2026 Organizers

On Nov 27, OpenReview was notified of a software bug that allowed unauthorized access to authors, reviewers, and area chairs. We are grateful to the OpenReview team for fixing the issue quickly. (🧵 1/3)

7 months ago

Bonus post advertising this other thread through the medium of "memes", which I've been told is what you have to do on social media.

7 months ago

(That animation in the first post? That's Claude trying, and failing, to fully explore a maze in the MapWorld game.)

7 months ago
GitHub - clp-research/clembench: Collection of games to be run with the clemcore framework

We'd love for other people to use it to test the interaction / agentic abilities of their models, and/or to build new fun and challenging games / interactions!
github.com/clp-research...
github.com/clp-research...
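To give a flavour of what building a new game might boil down to, here is a small, self-contained sketch: a game pins down an initial prompt, a per-turn validity check, and a verifiable score. The `WordChainGame` class, its method names, and the scoring rule are all invented for illustration; they do not mirror clemcore's actual interface.

```python
# Hypothetical sketch of a new dialogue game -- NOT the clemcore API.

class WordChainGame:
    """Players alternate words; each must start with the previous word's last letter."""

    initial_prompt = "Let's play a word chain. I start: 'game'. Your turn!"

    def __init__(self, start: str = "game", max_turns: int = 5):
        self.words = [start]
        self.max_turns = max_turns

    def is_valid_turn(self, word: str) -> bool:
        # A move must chain on the previous word and not repeat one.
        return bool(word) and word[0] == self.words[-1][-1] and word not in self.words

    def play_turn(self, word: str) -> bool:
        if not self.is_valid_turn(word):
            return False
        self.words.append(word)
        return True

    def score(self) -> float:
        # Verifiable reward: fraction of the turn budget filled with valid moves.
        return (len(self.words) - 1) / self.max_turns

game = WordChainGame()
game.play_turn("elephant")  # 'game' ends in 'e' -> valid
game.play_turn("tiger")     # 'elephant' ends in 't' -> valid
print(game.score())         # -> 0.4
```

The point of the pattern: the win condition is checkable by code, so playing the game yields a reward signal without needing a judge model.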
7 months ago
A Third Paradigm for LLM Evaluation: Dialogue Game-Based Evaluation using clembench
There are currently two main paradigms for evaluating large language models (LLMs), reference-based evaluation and preference-based evaluation. The first, carried over from the evaluation of machine l...

Thanks to a recent short-term grant, we've been able to focus on code quality and ease of use for benchmarking and extensibility. (Exploring new games is a fun programming lab activity, which we've run several times by now!) Here's a writeup of the current state: arxiv.org/abs/2507.08491
7 months ago
Playpen: An Environment for Exploring Learning Through Conversational Interaction
Interaction between learner and feedback-giver has come into focus recently for post-training of Large Language Models (LLMs), through the use of reward models that judge the appropriateness of a mode...

clembench now spans abstract (e.g., wordle) and concrete tasks (simulated household); language and l+vision; and benchmarking, learning (playpen), and user simulation (clem:todd).
arxiv.org/abs/2504.08590
arxiv.org/abs/2505.05445
7 months ago
Video thumbnail

It's great to see the idea of using games / interactions to evaluate LLMs gain traction, with textarena.ai and now ARC-AGI-3 being the latest entrants.
This is something we've been exploring since early 2023 with clembench ( clembench.github.io ), which we've been continuously maintaining & extending.

7 months ago
LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks
There is an increasing trend towards evaluating NLP models with LLMs instead of human judgments, raising questions about the validity of these evaluations, as well as their reproducibility in the case...

📄 [ACL 2025 main] LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks (doi.org/10.48550/arX...)

9 months ago

Ha, yes, I'm quite pleased as well with how that turned out. It's nothing fancy, just a nice font, colouring (obviously), fbox, and rotate.

9 months ago
The list of authors from the paper.

This was the outcome of a collaboration that started last year at an ELLIS workshop, and that has brought together many labs (and many master's and PhD students, and PIs).

Much more remains to be explored in "learning in interaction" -- maybe by you?

🤖🧠 #NLP #AI #LLM

9 months ago
Playpen: An Environment for Exploring Learning Through Conversational Interaction
Interaction between learner and feedback-giver has come into focus recently for post-training of Large Language Models (LLMs), through the use of reward models that judge the appropriateness of a mode...

Oh yes, here's the link to the actual pre-print: arxiv.org/abs/2504.08590

9 months ago
GitHub - lm-playpen/playpen: All you need to get started with the LM Playpen Environment for Learning in Interaction.

We release the framework and the baseline training setups to foster research in the promising new direction of learning in (synthetic) interaction, which we believe will provide more effective ways of post-training agentic conversational LLMs. github.com/lm-playpen/p...

9 months ago
Table 3 from the paper linked in a post below.

We find that imitation learning through SFT improves performance on unseen game instances, but does not generalise to new games and negatively impacts other skills -- while interactive learning with GRPO shows balanced improvements without loss of skills.

9 months ago
Triangulating LLM Progress through Benchmarks, Games, and Cognitive Tests
We examine three evaluation paradigms: standard benchmarks (e.g., MMLU and BBH), interactive games (e.g., Signalling Games or Taboo), and cognitive tests (e.g., for working memory or theory of mind). ...

Together with the learning environment, we also define an experimental setup combining gameplay evaluation on unseen games and traditional NLP benchmarks such as MMLU, following (Momentè et al. 2025) arxiv.org/abs/2502.14359

9 months ago
Diagram showing an interaction triangle "interlocutor A -- world -- interlocutor B", except that this is mediated by GM (the "Game Master"), and that A is a learner wrapped around an LLM, and B also is a wrapper around a (non-learning) LLM.

Playpen is a training environment for post-training LLMs through learning in interaction, by self-play of "dialogue games": goal-oriented language-based activities that generate verifiable rewards.
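To make the "dialogue game" pattern concrete, here is a minimal, self-contained sketch of a Game Master loop that mediates between two players and returns a verifiable reward. Everything here (the `Describer`, `Guesser`, and `play_taboo` names, and the toy clue/guess logic) is invented for illustration; it is not the actual Playpen or clembench API.

```python
# A toy "dialogue game" in the spirit described above -- NOT the real API.

class Describer:
    """Stands in for the clue-giving player (a frozen LLM in the real setup)."""
    CLUES = {"apple": ["a fruit", "red or green", "keeps the doctor away"]}

    def clue(self, target: str, turn: int) -> str:
        return self.CLUES[target][turn]

class Guesser:
    """Stands in for the learner: maps the clues seen so far to a guess."""
    def guess(self, clues: list[str]) -> str:
        return "apple" if "doctor" in " ".join(clues) else "banana"

def play_taboo(target: str, max_turns: int = 3) -> float:
    """Game Master loop: relay clues, collect guesses, return a verifiable reward."""
    describer, guesser, clues = Describer(), Guesser(), []
    for turn in range(max_turns):
        clues.append(describer.clue(target, turn))
        if guesser.guess(clues) == target:
            return 1.0  # success; a shaped reward could favour fewer turns
    return 0.0  # failure is just as checkable -- no judge model needed

print(play_taboo("apple"))  # -> 1.0 (guessed on the third clue)
```

Because the Game Master checks the outcome against the game's goal, each episode yields a reward that can be verified by code, which is what makes such self-play usable for post-training.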

9 months ago
Title of the paper, with a colourful "playpen" logo

🚨 New pre-print! (Well, new & much improved version in any case.) 🚨
If you're interested in LLM post-training techniques and in how to make LLMs better "language users", read this thread, introducing the "LM Playpen".

9 months ago

The University of Potsdam invites applications for 5 postdoc positions, including in the Cognitive Sciences and in NLP (esp. cognitive).

These are fairly independent research positions that will allow the candidate to build their own profile. Deadline: June 2nd.

Details: tinyurl.com/pd-potsdam-2...

#NLProc #AI 🤖🧠

10 months ago
The World Is Wooing U.S. Researchers Shunned by Trump

There's indeed suddenly a bit of flexibility in a system that's not exactly known for that. If there's anyone (post-doc, tenure-track, or more senior) in the #NLP space currently in the US who'd like to explore possibilities in Potsdam, contact me.

🤖🧠

www.nytimes.com/2025/05/14/b...

10 months ago

"We ablated both algorithm and hyperparameter choices [...]"

When did "to ablate" take on the meaning "to systematically vary"? I've noticed this only recently, but it seems to be super common now.

11 months ago
Titlepage of the paper linked in the post.

Update 2: New pre-print! Outcome of an ELLIS workshop last year, & more than a year of discussions and work, across labs and countries: Meet the Playpen, an environment for exploring learning in dialogic interaction.

arxiv.org/abs/2504.08590

1/2

11 months ago
A Playschool for LLMs

[Sneak preview: If you're wondering where this is going, have a secret look at lm-playschool.github.io -- and stay tuned for more info!]

3/2

11 months ago
Table 1 from that paper.

Nice baseline results as well: learning via SFT from transcripts does a bit, but only "real"(-ish) learning in interaction (GRPO) generalises. (Basically, you want to see the whole row being green in this table.)

2/2

11 months ago

This is only a subset of the models on the leaderboard; visit the site to see all 32 models, as well as the results for the multimodal version of the benchmark.

11 months ago
Screenshot of leaderboard as linked in post.

Update 1: New models added to our dialogue game-based agentic LLM leaderboard. TL;DR: GPT-4.1 as good as 4o, but much cheaper. Llama4 indeed not very good (decisively worse than 3.2 70B!). OLMo decent, but there's still a secret sauce that only closed labs have.

clembench.github.io

11 months ago

Nicola Horst, Davide Mazzaccara, Antonia Schmidt, Michael Sullivan, Filippo Momentè, Luca Franceschetti, Philipp Sadler, Sherzod Hakimov, Alberto Testoni, ...
Playpen: An Environment for Exploring Learning Through Conversational Interaction
https://arxiv.org/abs/2504.08590

1 year ago

If the Greens knew how to negotiate, then on the day before announcing an agreement on the debt brake (Schuldenbremse), Söder and Dobrindt would be announcing that they'll stay out of federal politics for good (and that the CSU will never again provide the Transport Minister).

1 year ago

Press release by my Uni about our benchmark for LLMs as agents, which is now out in v2.0.
Check it out here: clembench.github.io
