🦬🏔️ @cuboulder.info science prof
Vis / HCI - Designing data for the public.
📊❤️ PI @informationvisions.bsky.social
Previously: Bucknell CS prof, Tufts CS PhD
🔗 https://peck.phd/
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
Assistant Prof at UCSD. I work on safety, interpretability, and fairness in machine learning. www.berkustun.com
PhD Student @ UC San Diego
Researching reliable, interpretable, and human-aligned ML/AI
Institute for Explainable Machine Learning at @www.helmholtz-munich.de, Interpretable and Reliable Machine Learning group at the Technical University of Munich, and part of @munichcenterml.bsky.social
I work on explainable AI at a German research facility
Researcher in Machine Learning & Data Mining, Professor of Computational Data Analytics @jkulinz.bsky.social, Austria.
ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
Assistant Professor @ Harvard SEAS specializing in human-computer and human-AI interaction. Also interested in visualization, digital humanities, urban design.
Machine Learning Researcher | PhD Candidate @ucsd_cse | @trustworthy_ml
chhaviyadav.org
Postdoctoral Researcher at Microsoft Research • Incoming Faculty at Rutgers CS • Trustworthy AI • Interpretable ML • https://lesiasemenova.github.io/
Author of Interpretable Machine Learning and other books
Newsletter: https://mindfulmodeler.substack.com/
Website: https://christophmolnar.com/
Seeking superhuman explanations.
Senior researcher at Microsoft Research, PhD from UC Berkeley, https://csinva.io/
Professor in Artificial Intelligence, The University of Queensland, Australia
Human-Centred AI, Decision support, Human-agent interaction, Explainable AI
https://uqtmiller.github.io
Incoming Assistant Professor @ University of Cambridge.
Responsible AI. Human-AI Collaboration. Interactive Evaluation.
umangsbhatt.github.io
Senior Researcher @arc-mpib.bsky.social Max Planck @mpib-berlin.bsky.social, group leader #BOOSTING decisions: cognitive science, AI/collective intelligence, behavioral public policy, computational social science, misinformation; stefanherzog.org scienceofboosting.org
PhD Student @ LMU Munich
Munich Center for Machine Learning (MCML)
Research in Interpretable ML / Explainable AI
Machine Learning PhD at UPenn. Interested in the theory and practice of interpretable machine learning. ML Intern @ Apple.
Data Scientist @ Mass General, Beth Israel, Broad | Clinical Research | Automated Interpretable Machine Learning, Evolutionary Algorithms | UPenn MSE Bioengineering, Oberlin BA Computer Science
CS Prof at the University of Oregon, studying adversarial machine learning, data poisoning, interpretable AI, probabilistic and relational models, and more. Avid unicyclist and occasional singer-songwriter. He/him