Any recommendations for an RSS reader app on iOS? I prefer something that is self-hosted or open source rather than a paid app.
22.11.2024 22:51
This is an automated account, run by @toxic.sf.ca.us. It uses data provided by Caltrans to post timely information about chain controls on Interstate 80 through the Sierra Nevada mountains.
www.chainguy.com
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
PhD Student @ UC San Diego
Researching reliable, interpretable, and human-aligned ML/AI
Institute for Explainable Machine Learning at @www.helmholtz-munich.de, Interpretable and Reliable Machine Learning group at the Technical University of Munich, and part of @munichcenterml.bsky.social.
I work on explainable AI at a German research facility.
Researcher in Machine Learning & Data Mining, Professor of Computational Data Analytics @jkulinz.bsky.social, Austria.
ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
Assistant Professor @ Harvard SEAS specializing in human-computer and human-AI interaction. Also interested in visualization, digital humanities, urban design.
Machine Learning Researcher | PhD Candidate @ucsd_cse | @trustworthy_ml
chhaviyadav.org
Assistant Professor @RutgersCS • Previously @MSFTResearch, @dukecompsci, @PinterestEng, @samsungresearch • Trustworthy AI • Interpretable ML • https://lesiasemenova.github.io/
Seeking superhuman explanations.
Senior researcher at Microsoft Research, PhD from UC Berkeley, https://csinva.io/
Professor in Artificial Intelligence, The University of Queensland, Australia
Human-Centred AI, Decision support, Human-agent interaction, Explainable AI
https://uqtmiller.github.io
Incoming Assistant Professor @ University of Cambridge.
Responsible AI. Human-AI Collaboration. Interactive Evaluation.
umangsbhatt.github.io
Senior Researcher @arc-mpib.bsky.social MaxPlanck @mpib-berlin.bsky.social, group leader #BOOSTING decisions: cognitive science, AI/collective intelligence, behavioral public policy, comput. social science, misinfo; stefanherzog.org scienceofboosting.org
PhD Student @ LMU Munich
Munich Center for Machine Learning (MCML)
Research in Interpretable ML / Explainable AI
Machine Learning PhD at UPenn. Interested in the theory and practice of interpretable machine learning. ML Intern @ Apple.
Data Scientist @ Mass General, Beth Israel, Broad | Clinical Research | Automated Interpretable Machine Learning, Evolutionary Algorithms | UPenn MSE Bioengineering, Oberlin BA Computer Science
interpretable machine learning for atmospheric and astronomical data analysis, near-IR spectra, climate tech, stars & planets; bikes, Austin, diving off bridges into the ocean.
Assistant professor at University of Minnesota CS. Human-centered AI, interpretable ML, hybrid intelligence systems.
Professor of computer science at Harvard. I focus on human-AI interaction, #HCI, and accessible computing.