#XAI New Paper
SYMBXRL: a symbolic AI framework for explaining Deep Reinforcement Learning solutions, tailored to mobile networks. This work has been accepted for presentation at the IEEE International Conference on Computer Communications.
Paper: dspace.networks.imdea.org/bitstream/ha...
09.02.2025 23:27
- Assistant Professor in Humane AI and NLP at the University of Groningen (GroNLP).
- PhD student in Machine Learning @Warsaw University of Technology and @IDEAS NCBR.
- Computer Science PhD student at Northwestern, advised by Prof. Jessica Hullman. Interested in statistical decision theory and visualizing & modeling uncertainty.
- Math Associate Professor (on leave, Aix-Marseille, France). Teaching project (non-profit): https://highcolle.com/
- NLP Researcher at ADAPT Centre | PhD. Machine translation, speech, LLMs.
- Assistant Professor @ Princeton. Previously: EPFL 🇨🇭, UFMG 🇧🇷. Interests: computational social science, platforms, GenAI, moderation.
- PhD student in Computer Science @UCSD, studying interpretable AI and RL to improve people's decision-making.
- Assistant Prof at UCSD. I work on safety, interpretability, and fairness in machine learning. www.berkustun.com
- PhD student @ UC San Diego, researching reliable, interpretable, and human-aligned ML/AI.
- Institute for Explainable Machine Learning at @www.helmholtz-munich.de and Interpretable and Reliable Machine Learning group at the Technical University of Munich; part of @munichcenterml.bsky.social.
- I work on explainable AI at a German research facility.
- Researcher in Machine Learning & Data Mining, Professor of Computational Data Analytics @jkulinz.bsky.social, Austria.
- ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
- Assistant Professor @ Harvard SEAS specializing in human-computer and human-AI interaction. Also interested in visualization, digital humanities, and urban design.
- Machine Learning Researcher | PhD Candidate @ucsd_cse | @trustworthy_ml. chhaviyadav.org
- Assistant Professor @RutgersCS • Previously @MSFTResearch, @dukecompsci, @PinterestEng, @samsungresearch • Trustworthy AI • Interpretable ML • https://lesiasemenova.github.io/
- Author of Interpretable Machine Learning and other books. Newsletter: https://mindfulmodeler.substack.com/ • Website: https://christophmolnar.com/
- Seeking superhuman explanations. Senior researcher at Microsoft Research, PhD from UC Berkeley. https://csinva.io/
- Professor in Artificial Intelligence, The University of Queensland, Australia. Human-centred AI, decision support, human-agent interaction, explainable AI. https://uqtmiller.github.io
- Incoming Assistant Professor @ University of Cambridge. Responsible AI, human-AI collaboration, interactive evaluation. umangsbhatt.github.io