
Stratis Tsirtsis

@stratiss.bsky.social

Postdoc @ Hasso Plattner Institute working on machine learning. Previously @ Max Planck Institute, Meta, Stanford, NTUA. πŸ’» https://stsirtsis.github.io/

90 Followers  |  136 Following  |  17 Posts  |  Joined: 20.11.2024

Latest posts by stratiss.bsky.social on Bluesky


This is the result of fantastic teamwork with Eleni Straitouri, Ander Artola Velasco, and @autreche.bsky.social.

22.10.2025 11:45 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Our decision support system is underpinned by a single parameter (epsilon) that adaptively controls the level of human agency. The resulting cumulative reward varies smoothly with epsilon, which allows us to find the optimal level of human agency using a bandit algorithm.
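A minimal sketch of the idea described above, assuming a hypothetical smooth reward curve over epsilon and a standard UCB1 bandit on a discretized grid (the curve, grid, and noise level are all made up for illustration, not taken from the paper):

```python
import math
import random

def reward(eps, rng):
    # Hypothetical smooth reward over the agency level eps, peaking
    # at an interior value, plus observation noise.
    return 1.0 - 4.0 * (eps - 0.6) ** 2 + 0.1 * rng.gauss(0, 1)

def ucb1(arms, pulls, rng):
    """UCB1 over a discretized grid of epsilon values."""
    counts = [0] * len(arms)
    sums = [0.0] * len(arms)
    for t in range(pulls):
        if t < len(arms):
            i = t  # pull each arm once first
        else:
            i = max(range(len(arms)),
                    key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2 * math.log(t + 1) / counts[j]))
        r = reward(arms[i], rng)
        counts[i] += 1
        sums[i] += r
    # Return the epsilon with the best empirical mean reward.
    return arms[max(range(len(arms)), key=lambda j: sums[j] / counts[j])]

rng = random.Random(0)
grid = [i / 10 for i in range(11)]  # epsilon in {0.0, 0.1, ..., 1.0}
best = ucb1(grid, pulls=5000, rng=rng)
print(best)
```

Because the reward varies smoothly with epsilon, neighboring arms have similar means, and the bandit quickly concentrates its pulls near the optimum.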

22.10.2025 11:45 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

What if AI agents aren't here to replace us, but to facilitate our decisions? In a study with 1600 participants, we show that a human with action choices narrowed by an AI makes better sequential decisions than an AI or a human alone.
πŸ“œ arxiv.org/abs/2510.16097

22.10.2025 11:45 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

The Causality in Cognition Lab at Stanford University is recruiting PhD students this cycle!

We are a supportive team who happened to wear Bluesky-appropriate colors for the lab photo (this wasn't planned). πŸ’™

Lab info: cicl.stanford.edu
Application details: psychology.stanford.edu/admissions/p...

17.10.2025 17:43 β€” πŸ‘ 43    πŸ” 8    πŸ’¬ 0    πŸ“Œ 1

We (w/ Moritz Hardt, Olawale Salaudeen and @joavanschoren.bsky.social) are organizing the Workshop on the Science of Benchmarking & Evaluating AI @euripsconf.bsky.social 2025 in Copenhagen!

πŸ“’ Call for Posters: rb.gy/kyid4f
πŸ“… Deadline: Oct 10, 2025 (AoE)
πŸ”— More info: rebrand.ly/bg931sf

22.09.2025 13:45 β€” πŸ‘ 21    πŸ” 7    πŸ’¬ 1    πŸ“Œ 0

So excited and honored to receive an ERC Starting Grant for the project BrainAlign!! BrainAlign will bring LLMs closer to human understanding by directly aligning them with the human brain.

Stay tuned for our findings, and multiple postdoc and PhD openings in the coming years!

04.09.2025 16:20 β€” πŸ‘ 45    πŸ” 5    πŸ’¬ 4    πŸ“Œ 1

While I’m sad to leave the MPI for Software Systems and its people, it's time to move on. Starting October, I will be a postdoctoral researcher at @hpi.bsky.social, working with @swachter.bsky.social. Super excited about this next chapter!

20.08.2025 17:22 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

todo:
* thesis defense βœ…

Grateful to the committee and reviewers Marius Kloft, @arkrause.bsky.social, @rupakmajumdar.bsky.social, and @tobigerstenberg.bsky.social for their time and support. No words are enough to thank my advisor @autreche.bsky.social for everything I’ve learned from him so far πŸ™

20.08.2025 17:22 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Grateful to the tutorial chairs @urish.bsky.social and @sineadwilliamson.bsky.social for the opportunity, and big thanks to the PC chairs @smaglia.bsky.social and @csilviavr.bsky.social for being on the ground and making UAI 2025 such a unique experience!

01.08.2025 11:40 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Last week I had the pleasure of presenting a 2.5-hour tutorial on "Counterfactuals in Minds and Machines" at UAI 2025 in Rio πŸ‡§πŸ‡·, prepared together with @autreche.bsky.social and @tobigerstenberg.bsky.social. We've made all materials and references available here: learning.mpi-sws.org/counterfactu...

01.08.2025 11:40 β€” πŸ‘ 7    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

Heading to Rio de Janeiro πŸ‡§πŸ‡· for UAI 2025 (@auai.org) to present our tutorial with @tobigerstenberg.bsky.social and @autreche.bsky.social on "Counterfactuals in Minds and Machines" on Monday. Looking forward to this! If you are in Rio, let's meet!

18.07.2025 12:38 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 1

In Athens πŸ‡¬πŸ‡· for the Greeks in AI symposium. Super excited to present our work on "Counterfactual Token Generation in LLMs" (bit.ly/4nMibs2) and see all the amazing work Greek people all over the world are doing on AI! If you are in Athens, let's meet! Next, heading to πŸ‘‡

18.07.2025 12:38 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Did you check our amazing list of tutorials in Rio?
Spanning:

- hyperparameter optimization
- counterfactual reasoning
- bayesian nonparametrics for causality
- causal inference with deep generative models
- modern variational inference

πŸ‘‰ www.auai.org/uai2025/tuto...

04.06.2025 09:25 β€” πŸ‘ 14    πŸ” 5    πŸ’¬ 0    πŸ“Œ 0

Awesome work led by Ander, with Nastaran Okati and @autreche.bsky.social.

30.05.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The LLM API you use returns (and charges you for) 5 tokens. Did the LLM actually generate 5 tokens? Or is the provider overcharging you? πŸ€” In arxiv.org/abs/2505.21627, led by Ander Artola Velasco, we argue (game-theoretically) for a change from pay-per-token to pay-per-character.
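A toy sketch of why per-character billing is harder to game, using a made-up three-entry vocabulary and hypothetical prices (none of this is from the paper): the same string admits multiple valid tokenizations, so a pay-per-token bill depends on which one the provider reports, while a pay-per-character bill does not.

```python
# Toy vocabulary in which the same string tokenizes in more than one way.
vocab = {"play", "ing", "playing"}

def bill_per_token(tokenization, per_token=1.0):
    # Pay-per-token: cost depends on how many tokens the provider reports.
    assert all(t in vocab for t in tokenization)
    return per_token * len(tokenization)

def bill_per_char(tokenization, per_char=0.25):
    # Pay-per-character: cost depends only on the output string itself.
    return per_char * sum(len(t) for t in tokenization)

honest = ["playing"]       # 1 token
padded = ["play", "ing"]   # 2 tokens, same output string

assert "".join(honest) == "".join(padded) == "playing"
print(bill_per_token(honest), bill_per_token(padded))  # 1.0 vs 2.0
print(bill_per_char(honest), bill_per_char(padded))    # 1.75 vs 1.75
```

The character bill is invariant to the reported tokenization, removing the provider's incentive to inflate token counts.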

30.05.2025 11:24 β€” πŸ‘ 0    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Presenting this today at 17:00 in Hall 4 #6

28.04.2025 04:57 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This work is a collaborative effort with a fantastic team: Nina Corvelo Benz, Eleni Straitouri, Ivi Chatzi, Ander Artola Velasco, Suhas Thejaswi, and @autreche.bsky.social

25.04.2025 13:53 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

In Singapore for #ICLR2025! I'll be presenting our work on a causal methodology for evaluating LLMs (arxiv.org/abs/2502.01754) at the "Building Trust in LLMs" workshop on Monday. If you are working on causality, game theory and/or LLMs, let's grab a β˜•οΈ during the conference!

25.04.2025 13:53 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 1

LLMs rely on randomization to respond to a prompt: they may respond differently to the same prompt if asked multiple times. In β€œEvaluation of LLMs via Coupled Token Generation” (arxiv.org/abs/2502.01754), we argue that the eval of LLMs should control for this randomization 1/
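A toy illustration of the coupling idea, assuming sampling is done with the Gumbel-max trick so the same noise can be shared across two models (the vocabulary and logits below are made up): when randomness is coupled, near-identical models produce near-identical outputs, so remaining differences reflect the models rather than the sampler.

```python
import math
import random

def gumbel_max_sample(logits, gumbels):
    # Gumbel-max trick: argmax(logit + Gumbel noise) is an exact sample
    # from softmax(logits); fixing the noise couples samples across models.
    return max(range(len(logits)), key=lambda i: logits[i] + gumbels[i])

rng = random.Random(42)
vocab = ["yes", "no", "maybe"]
logits_a = [2.0, 0.5, 0.1]  # hypothetical model A next-token scores
logits_b = [2.0, 0.6, 0.1]  # hypothetical model B, nearly identical

agree = 0
for _ in range(1000):
    g = [-math.log(-math.log(rng.random())) for _ in vocab]
    # Reuse the same Gumbel noise g for both models.
    agree += gumbel_max_sample(logits_a, g) == gumbel_max_sample(logits_b, g)
print(agree)
```

With independent sampling instead, the two models would often disagree purely by chance, even though their distributions are almost the same.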

05.02.2025 08:33 β€” πŸ‘ 7    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

Let's talk causality and LLMs! Come find us at the posters in East Hall C. 11:30-12:00 & 14:30-15:00. #neurips2024

14.12.2024 15:23 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

What would an LLM have said, counterfactually? Here is a short video illustrating our method for counterfactual token generation. We will present this work at the CaLM workshop at #neurips2024. See you in Vancouver!
πŸ“œ arxiv.org/abs/2409.17027
πŸ’» made with manim in python
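One way to picture the counterfactual query, assuming a simple inverse-CDF noise-sharing scheme and a made-up two-step toy model (the paper's actual formulation may differ): record the sampling noise at generation time, intervene on an earlier token, and replay the same noise.

```python
def sample_with_noise(probs, u):
    # Inverse-CDF sampling: the uniform draw u plays the role of the
    # recorded sampling noise.
    c = 0.0
    for i, p in enumerate(probs):
        c += p
        if u <= c:
            return i
    return len(probs) - 1

# Toy next-token distributions conditioned on the previous token.
next_probs = {"A": [0.9, 0.1], "B": [0.2, 0.8]}
vocab = ["x", "y"]

u = 0.5  # noise recorded during the factual generation (fixed here)
factual = vocab[sample_with_noise(next_probs["A"], u)]
# Counterfactual: intervene on the prefix token, replay the SAME noise.
counterfactual = vocab[sample_with_noise(next_probs["B"], u)]
print(factual, counterfactual)  # prints: x y
```

Holding the noise fixed is what makes this a counterfactual ("what would this very generation have been") rather than a fresh interventional sample.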

27.11.2024 17:24 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 0    πŸ“Œ 1

Hey there πŸ¦‹
Let's start with an intro. I'm a final-year PhD student at the Max Planck Institute for Software Systems, working on machine learning, decision making, and social aspects of AI. Currently on the academic job market, looking for tenure-track positions πŸ‘‡
πŸ’» stsirtsis.github.io

20.11.2024 12:55 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
