There are a few with good vibes and (somewhat) specialty coffee. Personally I like KLVN (near Bakery Square), Arriviste (Shadyside), Redhawk (Oakland). They're not super fancy, but way better than the well-known chains!
23.10.2025 18:55
Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps
The widespread adoption of machine learning systems has raised critical concerns about fairness and bias, making mitigating harmful biases essential for AI development. In this paper, we investigate t...
Tuesday 5:45 pm - 8:00 pm in Exhibit Hall poster no. 437
My colleague Łukasz Sztukiewicz will present our joint work (with @inverse-hessian.bsky.social) on the relationship between saliency maps and fairness as part of the Undergraduate and Master's Consortium.
Paper: arxiv.org/abs/2503.00234
03.08.2025 21:52
Monday 8:00 am - 12:00 pm in Room 700
Presenting our work on mitigating persistent client dropout in decentralized federated learning as part of the FedKDD workshop.
Project website: ignacystepka.com/projects/fed...
Paper: openreview.net/pdf/576de662...
03.08.2025 21:52
This week I'm presenting several works at #KDD2025 in Toronto 🇨🇦
Let's connect if you're interested in privacy/gradient inversion attacks in federated learning, counterfactual explanations, or fairness and XAI!
Here's where you can find me:
03.08.2025 21:52
Results: Across 6 datasets, BetaRCE consistently achieved target robustness levels while preserving explanation quality and maintaining a competitive robustness-cost trade-off. 6/7 🧵
12.05.2025 12:49
You control both the confidence level (α) and the robustness threshold (δ), giving statistical guarantees that your explanation will survive changes! For formal proofs on optimal SAM sampling methods and the full theoretical foundation, check out our paper! 5/7 🧵
12.05.2025 12:49
Under the hood: BetaRCE explores a "Space of Admissible Models" (SAM) - representing expected/foreseeable changes to your model. Using Bayesian statistics, we efficiently estimate the probability that explanations remain valid across these changes. 4/7 🧵
12.05.2025 12:48
Our solution: BetaRCE offers probabilistic guarantees of robustness to model change. It works with ANY model class, is post-hoc, and can enhance your current counterfactual methods. Plus, it lets you control the robustness-cost trade-off. 3/7 🧵
12.05.2025 12:48
This happens constantly in real-world AI systems. Current explanation methods don't address it well: they're limited to specific model classes, require extensive tuning, or lack guarantees about explanation robustness. 2/7 🧵
12.05.2025 12:48
New paper at #KDD2025 on robust counterfactual explanations!
Imagine an AI tells you "Increase your income by $200 to get a loan." You do it, but when you reapply, the model has been updated and rejects you anyway. We solve this by making CFEs robust to model changes! 1/7 🧵
12.05.2025 12:47
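The Bayesian step in the thread above can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the function name, the uniform Beta(1, 1) prior, and the 48-of-50 numbers are all hypothetical. The idea: count how often a counterfactual stays valid across models sampled from the SAM, form a Beta posterior over the unknown validity rate, and estimate the probability that this rate exceeds the robustness threshold δ, to be compared against the confidence level α.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def posterior_prob_robust(n_valid, n_models, delta, n_draws=100_000):
    """Estimate P(validity rate > delta) for a counterfactual that stayed
    valid on n_valid of n_models models sampled from the SAM.

    Uses a Beta(1, 1) (uniform) prior, so the posterior over the unknown
    validity rate p is Beta(1 + n_valid, 1 + n_models - n_valid).
    """
    draws = rng.beta(1 + n_valid, 1 + n_models - n_valid, size=n_draws)
    # Monte Carlo estimate of the posterior probability that p exceeds delta
    return float(np.mean(draws > delta))

# Hypothetical numbers: the counterfactual remained valid on 48 of 50
# sampled model variants; require validity rate above delta = 0.9.
p_robust = posterior_prob_robust(48, 50, delta=0.9)
# Accept the explanation if p_robust >= alpha (e.g. alpha = 0.95);
# otherwise search for a more robust counterfactual.
```

The Monte Carlo draw from the Beta posterior is used here for simplicity; a closed-form regularized incomplete beta function would give the same quantity exactly.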
Plangineer (AICP) when I'm at work, alter ego CityNerd when I'm not. New videos on cities and transportation every Wednesday. http://linktr.ee/CityNerd @nerd4cities
At CMU, the (Blue)sky's the limit. United by curiosity and driven by passion, we reach across disciplines, forge new ground and deploy our expertise to make real change that benefits humankind.
Assistant professor at Institut Polytechnique de Paris
https://bakirtzis.net
Post-doc @ VU Amsterdam, prev University of Edinburgh.
Neurosymbolic Machine Learning, Generative Models, commonsense reasoning
https://www.emilevankrieken.com/
Combination of machine learning engineer / data scientist and teacher (depending on what hat I'm wearing that day)
Focusing on ML/AI @Google
(obligatory 'opinions are my own')
github.com/MrGeislinger
Our in-depth reporting on innovation reveals and explains what's happening now to help you know what's coming next.
Find our journalists on Bluesky: https://bsky.app/starter-pack/technologyreview.com/3lar7fofuwl2n
Independent AI researcher, creator of datasette.io and llm.datasette.io, building open source tools for data journalism, writing about a lot of stuff at https://simonwillison.net/
Professor at NYU; Chief AI Scientist at Meta.
Researcher in AI, Machine Learning, Robotics, etc.
ACM Turing Award Laureate.
http://yann.lecun.com
Researching reasoning at OpenAI | Co-created Libratus/Pluribus superhuman poker AIs, CICERO Diplomacy AI, and OpenAI o-series
Professor, Santa Fe Institute. Research on AI, cognitive science, and complex systems.
Website: https://melaniemitchell.me
Substack: https://aiguide.substack.com/
Reverse engineering neural networks at Anthropic. Previously Distill, OpenAI, Google Brain. Personal account.
Chief Scientist at the UK AI Security Institute (AISI). Previously DeepMind, OpenAI, Google Brain, etc.
Co-founder and CEO, Mistral AI
Cofounder CEO, Perplexity.ai
Climate & AI Lead @HuggingFace, TED speaker, WiML board member, TIME AI 100 (She/her/Dr)
Research Scientist @DeepMind | Previously @OSFellows & @hrdag. RT != endorsements. Opinions Mine. Pronouns: he/him
Co-Founder of LinkedIn. Focused on using AI to find the cure for cancer, faster. Proud American.
Writing, Pod, ETC: Beacons.ai/reidhoffman
Llama Farmer
Ex CLO Hugging Face, Xoogler
I work on AI at OpenAI.
Former VP AI and Distinguished Scientist at Microsoft.
Cofounding Executive President and Chairman at probabl.ai, the @scikit-learn.org company. Tech exec, repeat entrepreneur, advisor, angel investor. MBA @INSEAD.