I had the occasion to present this work at #EGC2025 during the Explain'AI Workshop.
Always very happy to receive any kind of feedback!
In this SOTA review, I present lines of work to democratise interpretability in MADRL — a necessary step as ever-larger pretrained agent systems emerge.
This also sets the direction of my PhD.
yp-edu.github.io/stories/my-phd
What role can interpretability play in Multi-Agent Deep Reinforcement Learning?
MADRL challenges, ranging from agent control and state analysis to team identification, might be answered by direct interpretability.
yp-edu.github.io/publications...