Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work!
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
26.06.2025 09:21 — 👍 20 🔁 9 💬 0 📌 3
Call for papers at the eXCV workshop at ICCV 2025.
Join us in taking stock of the state of the field of explainability in computer vision, at our Workshop on Explainable Computer Vision: Quo Vadis? at #ICCV2025!
@iccv.bsky.social
14.06.2025 15:47 — 👍 13 🔁 5 💬 1 📌 1
We are presenting 3 papers at #CVPR2025!
11.06.2025 20:56 — 👍 7 🔁 2 💬 1 📌 0
"Reasonable AI" got selected as a cluster of excellence www.tu-darmstadt.de/universitaet...
Overwhelmingly happy to be part of RAI & continue working with the smart minds at TU Darmstadt & hessian.AI, while also seeing my new home at Uni Bremen achieve a historic success in the excellence strategy!
23.05.2025 11:57 — 👍 8 🔁 3 💬 1 📌 0
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥
We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!
🌎 visinf.github.io/cups
04.04.2025 13:38 — 👍 22 🔁 6 💬 1 📌 2
Why has continual ML not had its breakthrough yet?
In our new collaborative paper w/ many amazing authors, we argue that “Continual Learning Should Move Beyond Incremental Classification”!
We highlight 5 examples to show where CL algos can fail & pinpoint 3 key challenges.
arxiv.org/abs/2502.11927
18.02.2025 13:33 — 👍 10 🔁 3 💬 0 📌 0
🏔️⛷️ Looking back on a fantastic week full of talks, research discussions, and skiing in the Austrian mountains!
31.01.2025 19:38 — 👍 31 🔁 11 💬 0 📌 0
Excited to share that our paper recommender platform www.scholar-inbox.com reached 20k users today! We hope to reach 100k by the end of the year. Lots of new features are currently in the works and will be rolled out soon.
15.01.2025 22:03 — 👍 190 🔁 26 💬 12 📌 8
YouTube video by Technische Universität Darmstadt
Understanding what AI models can and cannot do: RAI researcher Dr. Simone Schaub-Meyer in an interview
Understanding what AI models can do and what they cannot: interview with @simoneschaub.bsky.social, early-career researcher in the cluster project "RAI" (Reasonable Artificial Intelligence).
"RAI" is one of the projects with which TUDa is applying for a Cluster of Excellence.
www.youtube.com/watch?v=2VAm...
13.01.2025 12:18 — 👍 14 🔁 3 💬 0 📌 0
Hi Julian, I just joined Bluesky. I am working on XAI in computer vision; it would be great to be added to the list as well. Thanks!
08.01.2025 15:41 — 👍 1 🔁 0 💬 1 📌 0
Want to learn about how model design choices affect the attribution quality of vision models? Visit our #NeurIPS2024 poster on Friday afternoon (East Exhibition Hall A-C #2910)!
Paper: arxiv.org/abs/2407.11910
Code: github.com/visinf/idsds
13.12.2024 10:10 — 👍 21 🔁 7 💬 1 📌 1
Our work, "Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals" is accepted at TMLR! 🎉
visinf.github.io/primaps/
PriMaPs generate masks from self-supervised features, making it possible to boost unsupervised semantic segmentation via stochastic EM.
28.11.2024 17:41 — 👍 17 🔁 7 💬 1 📌 0
hessian.AI conducts cutting-edge AI research, provides computing infrastructure & services, supports start-up projects, and drives the transfer of results to business and society, thereby strengthening the AI ecosystem in Hesse & beyond. https://hessian.ai/legal-notice
PhD candidate - Centre for Cognitive Science at TU Darmstadt,
explanations for AI, sequential decision-making, problem solving
PhD student in computer vision at Imagine, ENPC
Senior Lecturer @QUT Centre for Robotics & ARC DECRA Fellow. Blending neuroscience and robotics for robot localisation & underwater perception.
PhD Student at the Max Planck Institute for Informatics @cvml.mpi-inf.mpg.de @maxplanck.de | Explainable AI, Computer Vision, Neuroexplicit Models
Web: sukrutrao.github.io
Post-Doctoral Researcher at @eml-munich.bsky.social, in
@www.helmholtz-munich.de and @tumuenchen.bsky.social.
Optimal Transport, Explainability, Robustness, Deep Representation Learning, Computer Vision.
https://qbouniot.github.io/
Principal Scientist at Naver Labs Europe, Lead of Spatial AI team. AI for Robotics, Computer Vision, Machine Learning. Austrian in France. https://chriswolfvision.github.io/www/
The German Conference on Pattern Recognition (GCPR) is the annual symposium of the German Association for Pattern Recognition (DAGM). It is the national venue for recent advances in image processing, pattern recognition, and computer vision.
The goal of our research is to protect digital cities from disasters. To this end, we develop resilient infrastructures that save lives.
Cybersecurity and privacy
https://www.sit.fraunhofer.de/de/impressum/
https://www.sit.fraunhofer.de/datenschutzerklaerung/
PhD student at the University of Tuebingen. Computer vision, video understanding, multimodal learning.
https://ninatu.github.io/
We are "squIRreL", the Interpretable Representation Learning Lab based at IDLab - University of Antwerp & imec.
Research Areas: #RepresentationLearning, Model #Interpretability, #explainability, #DeepLearning
#ML #AI #XAI #mechinterp
Research Manager @DFKI.bsky.social Darmstadt & Postdoc at @tuda-systems.bsky.social at @tuda.bsky.social
Teaches computers 💻 to read & understand 📚 to support us humans.
Private: @bhaettasch@kif.rocks. EN & DE. He
Tübingen Women in Machine Learning. We are a group of women at
University of Tübingen and MPI-IS trying to build a local community.
https://tuewiml.github.io/index.html
Postdoc @ RTG Neuroexplicit Models, Uni Saarland (previously @ UKP Lab, TU Darmstadt) | Interactive Learning, Model Efficiency, NLP, Neuroexplicit Models.
PhD Student | University of Mannheim | Robustness & Fairness | 3D Computer Vision
ML Researcher @ Aalto University 🇫🇮.
Previous: TU Graz 🇦🇹, originally from 🇩🇪.
Doing: Reliable ML | uncertainty stuff | Bayesian stats | probabilistic circuits
https://trappmartin.github.io/
Pahadi 🇮🇳 | Assistant Professor at TU Eindhoven | Causality, Neuro-symbolic AI, Probabilistic Circuits and pretty much all of Machine Learning ;)
Research Scientist DFKI | PhD Candidate TU Darmstadt | Co-Founder Occiglot
PhD student at AIML Lab TU Darmstadt
Interested in concept learning, neuro-symbolic AI and program synthesis