
Andreas Opedal

@andreasopedal.bsky.social

PhD student at ETH Zurich & MPI-IS | NLP & ML | Language, Reasoning, and Cognition | https://opedal.github.io

435 Followers  |  205 Following  |  8 Posts  |  Joined: 17.11.2024

Latest posts by andreasopedal.bsky.social on Bluesky

Andreas Opedal, Yanick Zengaffinen, Haruki Shirakami, Clemente Pasti, Mrinmaya Sachan, Abulhair Saparov, Ryan Cotterell, Bernhard Schölkopf
Are Language Models Efficient Reasoners? A Perspective from Logic Programming
https://arxiv.org/abs/2510.25626

30.10.2025 05:25 — 👍 0    🔁 1    💬 0    📌 0

Francesco Ignazio Re, Andreas Opedal, Glib Manaiev, Mario Giulianelli, Ryan Cotterell
A Spatio-Temporal Point Process for Fine-Grained Modeling of Reading Behavior
https://arxiv.org/abs/2506.19999

26.06.2025 06:32 — 👍 1    🔁 3    💬 0    📌 0

See the paper for more details and experiments: arxiv.org/pdf/2410.13502
Or check out the codebase to generate your own problems: github.com/eth-lre/math...

14.03.2025 16:14 — 👍 0    🔁 0    💬 0    📌 0

All models are sensitive to a simple change in sentence ordering, where we take one sentence and move it to the beginning. We also find that the problem is easiest for LLMs if the sentence is moved from near the beginning or end, rather than from the middle!
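The perturbation described above can be sketched in a few lines (an illustrative sketch only, not the actual MathGAP implementation; the function name is hypothetical):

```python
def move_to_front(sentences, i):
    """Move the sentence at index i to the beginning, keeping the
    relative order of all remaining sentences."""
    perturbed = list(sentences)
    sentence = perturbed.pop(i)
    return [sentence] + perturbed

problem = [
    "Alice has 2 apples",
    "Bob has 3 more apples than Alice",
    "How many apples does Bob have?",
]
print(move_to_front(problem, 1))
# → ['Bob has 3 more apples than Alice', 'Alice has 2 apples',
#    'How many apples does Bob have?']
```

The point of this minimal perturbation is that it preserves the problem's logical content exactly; only the surface order changes.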

14.03.2025 16:14 — 👍 0    🔁 0    💬 1    📌 0

OpenAI’s o1 and DeepSeek-R1 are certainly impressive. However, when we permuted the ordering of the sentences, their performance dropped to 5% and 11%, respectively (with the token limit set to 25,000, as recommended by OpenAI).

14.03.2025 16:14 — 👍 0    🔁 0    💬 1    📌 0

Here are the results for what we call “nonlinear” problems. Solving them requires holding intermediate results in memory before they can be used in later deduction steps. The most complex problems are pretty hard for all models, but they are still able to solve some of them!

14.03.2025 16:14 — 👍 0    🔁 0    💬 1    📌 0

We apply MathGAP to perform a systematic analysis on whether LLMs can use simple examples in context to solve more complex ones at inference. Generalization to proof width turns out to be harder than to proof depth, but we see a steady decrease in performance as proofs get both deeper and wider 💡

14.03.2025 16:14 — 👍 0    🔁 0    💬 1    📌 0

With our proof system we can generate new MWPs that adhere to the structure of proof trees, as well as ground-truth CoT traces! From the proof trees we then characterize the complexity of reasoning in several ways, e.g., depth, width, shape, and ordering of nodes (i.e., sentences).
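As a toy illustration of the idea (the representation and function names are hypothetical, not the actual MathGAP codebase), a proof tree can be linearized in two ways: its leaves give the problem text, and a post-order traversal gives a ground-truth CoT trace:

```python
# Each node is (statement, children); leaves are premise sentences,
# internal nodes are statements deduced by an inference rule.
tree = (
    "Alice and Bob have 5 apples together",
    [("Alice has 2 apples", []), ("Bob has 3 apples", [])],
)

def premises(node):
    # The problem text comes from the leaves (the premises).
    statement, children = node
    if not children:
        return [statement]
    return [p for child in children for p in premises(child)]

def cot_trace(node):
    # Post-order traversal: each deduced statement appears after the
    # statements it was derived from, giving a valid reasoning trace.
    statement, children = node
    steps = [s for child in children for s in cot_trace(child)]
    steps.append(statement)
    return steps

problem = ". ".join(premises(tree)) + "."
solution = ". ".join(cot_trace(tree)) + "."
print(problem)   # → Alice has 2 apples. Bob has 3 apples.
print(solution)  # → ... Alice and Bob have 5 apples together.
```

Permuting which leaf sentences appear where in the problem text then changes the node ordering without touching the underlying proof, which is exactly the kind of controlled variation the framework exploits.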

14.03.2025 16:14 — 👍 0    🔁 0    💬 1    📌 0

Our work builds on a simple observation: Math word problems (MWPs) are deductive reasoning problems, so solving them can be thought of as applying inference rules. We can thus view solution/reasoning traces as proof trees, the structure of which tells us how hard/complex the problem is to solve.
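Under this view, complexity measures such as depth and width fall out of the tree directly. A minimal sketch, assuming a simple nested-tuple representation (not the paper's actual code):

```python
# A proof tree as nested tuples: (statement, children). Leaves are the
# premise sentences of the word problem; internal nodes are deductions.

def depth(node):
    # Length of the longest root-to-leaf path; deeper trees mean
    # longer chains of inference steps.
    _, children = node
    if not children:
        return 1
    return 1 + max(depth(child) for child in children)

def width(node):
    # Number of leaves, i.e. how many premises the proof draws on.
    _, children = node
    if not children:
        return 1
    return sum(width(child) for child in children)

# Two chained deductions over three premises:
tree = (
    "Alice, Bob, and Carol have 8 apples in total",
    [
        ("Alice and Bob have 5 apples together",
         [("Alice has 2 apples", []), ("Bob has 3 apples", [])]),
        ("Carol has 3 apples", []),
    ],
)
print(depth(tree), width(tree))  # → 3 3
```

Depth here counts how long the chain of deductions is, while width counts how many premises must be combined, matching the distinction drawn in the thread between deep and wide proofs.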

14.03.2025 16:14 — 👍 0    🔁 0    💬 1    📌 0

New #ICLR2025 paper 📣📣

We argue that to properly evaluate a model’s reasoning ability, it must be tested on problems that are harder than the ones it has already seen. Enter MathGAP, an evaluation framework for math word problems with arbitrarily complex proofs🧡

arxiv.org/abs/2410.13502

14.03.2025 16:14 — 👍 11    🔁 0    💬 1    📌 0

Andreas Opedal, Eleanor Chodroff, Ryan Cotterell, Ethan Gotlieb Wilcox
On the Role of Context in Reading Time Prediction
https://arxiv.org/abs/2409.08160

11.10.2024 03:01 — 👍 0    🔁 1    💬 0    📌 0

Mario Giulianelli, Andreas Opedal, Ryan Cotterell
Generalized Measures of Anticipation and Responsivity in Online Language Processing
https://arxiv.org/abs/2409.10728

16.10.2024 03:01 — 👍 0    🔁 1    💬 0    📌 0
