Learn more:
Paper: arxiv.org/abs/2501.18277
GitHub: github.com/kadarsh22/sebra
Project: kadarsh22.github.io/sebra_iclr25
#ICLR2025 #AI #BiasMitigation #MachineLearning
31.01.2025 16:08
Highlights:
- Unsupervised Multi-Bias Mitigation – discovers and mitigates multiple biases without bias annotations.
- Discovering Hidden Biases – reveals unknown biases and labeling errors.
- Scalable Performance – effective on large datasets like ImageNet-1K, moving beyond simple benchmarks.
31.01.2025 16:08
I'm thrilled to share that our paper, "Sebra: DeBiasing through Self-Guided Bias Ranking," has been accepted to ICLR 2025!
31.01.2025 16:08
(1) DeNetDM: Debiasing by Network Depth Modulation
Thu, Dec 12 | 11 a.m. – 2 p.m. PST | East Exhibit Hall A-C #4309
Authors: Silpa Vadakkeeveetil Sreelatha*, Adarsh Kappiyath*, Abhra Chaudhuri, Anjan Dutta (* equal contribution)
(2/5)
11.12.2024 05:38
Assistant Professor @Mila-Quebec.bsky.social
Co-Director @McGill-NLP.bsky.social
Researcher @ServiceNow.bsky.social
Alumni: @StanfordNLP.bsky.social, EdinburghNLP
Natural Language Processor #NLProc
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
Assistant Prof at UCSD. I work on safety, interpretability, and fairness in machine learning. www.berkustun.com
PhD Student @ UC San Diego
Researching reliable, interpretable, and human-aligned ML/AI
Institute for Explainable Machine Learning at @www.helmholtz-munich.de; Interpretable and Reliable Machine Learning group at the Technical University of Munich; part of @munichcenterml.bsky.social
I work on explainable AI at a German research facility
Researcher Machine Learning & Data Mining, Prof. Computational Data Analytics @jkulinz.bsky.social, Austria.
ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
Assistant Professor @ Harvard SEAS specializing in human-computer and human-AI interaction. Also interested in visualization, digital humanities, urban design.
Machine Learning Researcher | PhD Candidate @ucsd_cse | @trustworthy_ml
chhaviyadav.org
Postdoctoral Researcher at Microsoft Research • Incoming Faculty at Rutgers CS • Trustworthy AI • Interpretable ML • https://lesiasemenova.github.io/
Author of Interpretable Machine Learning and other books
Newsletter: https://mindfulmodeler.substack.com/
Website: https://christophmolnar.com/
Seeking superhuman explanations.
Senior researcher at Microsoft Research, PhD from UC Berkeley, https://csinva.io/
Professor in Artificial Intelligence, The University of Queensland, Australia
Human-Centred AI, Decision support, Human-agent interaction, Explainable AI
https://uqtmiller.github.io
Incoming Assistant Professor @ University of Cambridge.
Responsible AI. Human-AI Collaboration. Interactive Evaluation.
umangsbhatt.github.io
Senior Researcher @arc-mpib.bsky.social, Max Planck @mpib-berlin.bsky.social; group leader, #BOOSTING decisions: cognitive science, AI/collective intelligence, behavioral public policy, computational social science, misinformation. stefanherzog.org | scienceofboosting.org
PhD Student @ LMU Munich
Munich Center for Machine Learning (MCML)
Research in Interpretable ML / Explainable AI
Machine Learning PhD at UPenn. Interested in the theory and practice of interpretable machine learning. ML Intern@Apple.
Data Scientist @ Mass General, Beth Israel, Broad | Clinical Research | Automated Interpretable Machine Learning, Evolutionary Algorithms | UPenn MSE Bioengineering, Oberlin BA Computer Science
CS Prof at the University of Oregon, studying adversarial machine learning, data poisoning, interpretable AI, probabilistic and relational models, and more. Avid unicyclist and occasional singer-songwriter. He/him