Robust Mixture Learning when Outliers Overwhelm Small Groups
We study the problem of estimating the means of well-separated mixtures when an adversary may add arbitrary outliers. While strong guarantees are available when the outlier fraction is significantly smaller than the weight of the smallest mixture component, much less is known when the outliers can overwhelm small groups.
We obtain information-theoretically optimal list size and recovery error, and provide an empirical comparison with prior methods.
link: arxiv.org/abs/2407.15792
Joint with @raresbuhai.bsky.social, Stefan Tiegel, Alex Wolters, Gleb Novikov, @amartyasanyal.bsky.social, David Steurer, and Fanny Yang.
10.12.2024 20:31
Our method works in the presence of a large fraction of outliers if the mixture components are spherical Gaussians or, more generally, have bounded k-th sub-Gaussian moments.
We propose a reduction from the robust mixture learning problem to a well-studied list-decodable mean estimation problem.
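For intuition about the setting (not the paper's algorithm), here is a minimal, purely illustrative Python sketch: it draws a well-separated spherical Gaussian mixture in which one component is tiny, lets an "adversary" add a larger fraction of arbitrary points, and then runs a crude stand-in for a list-decodable mean estimator, namely over-clustering with k-means and returning all centers as a candidate list. All numbers, and the use of scikit-learn's KMeans as the list-decoding step, are assumptions made for illustration only.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: a well-separated spherical Gaussian mixture; the last component is a small group.
d = 20                                  # dimension
means = rng.normal(size=(3, d)) * 20.0  # well-separated true component means
weights = np.array([0.60, 0.30, 0.02])  # mixing weights; smallest cluster has weight 2%
n = 5000
counts = (weights * n).astype(int)
inliers = np.vstack([m + rng.normal(size=(c, d)) for m, c in zip(means, counts)])

# Adversarial outliers: more numerous than the smallest component (8% vs. 2%).
n_out = int(0.08 * n)
outliers = rng.normal(size=(n_out, d)) * 5.0 + 40.0  # arbitrary points, here a fake cluster
X = np.vstack([inliers, outliers])

# Crude stand-in for a list-decodable mean estimator: over-cluster and return ALL centers
# as the candidate list. (The paper's reduction plugs in a genuine list-decodable
# mean-estimation subroutine here, not k-means.)
list_size = 10
candidates = KMeans(n_clusters=list_size, n_init=10, random_state=0).fit(X).cluster_centers_

# Check: is every true mean close to some candidate in the returned list?
for i, m in enumerate(means):
    err = np.linalg.norm(candidates - m, axis=1).min()
    print(f"component {i} (weight {weights[i]:.2f}): nearest candidate at distance {err:.2f}")

The toy k-means step only mimics the interface of the reduction; the guarantees in the paper come from using a real list-decodable mean estimator, which still returns candidates near every true mean in this regime.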
10.12.2024 20:31
When the number of outliers is negligible compared to the size of the smallest component, existing algorithms recover all means with optimal error.
However, when the fraction of outliers exceeds the weight of the smallest component, prior methods suffer both in recovery error and in list size.
10.12.2024 20:31
Excited to present at #NeurIPS2024 our work on robust mixture learning!
How hard is mixture learning when (a lot of) outliers are present? We show that it's easier than it seems!
Join us at the poster session (Wed, 16:30 PT, West Ballroom A-D #5710).
10.12.2024 20:31
a typical day at Simons: read a paper in the morning, drink tea with all authors of the paper in the afternoon.
22.11.2024 00:58
PhD Student at ETH Zurich
Welcome to the ETH AI Center! We are ETH Zurich's (ethz.ch/en) central hub leading the way towards trustworthy, accessible and inclusive #artificialintelligence
ai.ethz.ch
Math Assoc. Prof. at Aix-Marseille (France)
Currently on Sabbatical at CRM-CNRS, Université de Montréal
https://sites.google.com/view/sebastien-darses/welcome
Teaching Project (non-profit): https://highcolle.com/
Assistant Prof at Penn CIS | Postdoc at Microsoft Research | PhD from UT Austin CS | Co-founder LeT-All
Head of AI @ NormalComputing. Tweets on Math, AI, Chess, Probability, ML, Algorithms and Randomness. Author of tensorcookbook.com
PhD student (EPFL, Switzerland), working on the theory of deep learning and statistical physics of computation.
https://yatindandi.github.io/
Researcher in ML and Privacy.
Distinguished Postdoc at Khoury College Northeastern.
PhD @UofT & @VectorInst. previously Research Intern @Google and @ServiceNowRSRCH
https://mhaghifam.github.io/mahdihaghifam/
PhD Student at MIT. Previously, EE undergrad at IIT Madras. Interested in online learning, auctions, and mechanism design.
sourav22899.github.io
PhD student in Computer Science at ETH Zurich. raresbuhai.com.
PhD student at the University of Pennsylvania. Currently an intern at MSR. Interested in reliable and replicable reinforcement learning and using it for knowledge discovery: https://marcelhussing.github.io/
All posts are my own.
cs phd @upenn advised by Michael Kearns, Aaron Roth, and Duncan Watts | previously @stanford | she/her
https://psamathe50.github.io/sikatasengupta/
wharton stats phd – ml theory, ml for science
prev: comp neuro, data, physics
working with Edgar Dobriban and Konrad Körding
also some sports (esp. philly! go birds)
PhD student at University of Alberta. Interested in reinforcement learning, imitation learning, machine learning theory, and robotics
https://chanb.github.io/
Penn CS PhD student and IBM PhD Fellow studying strategic algorithmic interaction. Calibration, commitment, collusion, collaboration. She/her. Nataliecollina.com
Machine Learning @ University of Edinburgh | AI4Science | optimization | numerics | networks | co-founder @ MiniML.ai | ftudisco.gitlab.io