Uncertainty estimation fails under distribution shift. Why? Partly because in statistics, even Bayesian statistics, we treat x as given. But intuitively, the data makes different models plausible. For reliable uncertainty, we need to account for this explicitly. Come chat with me about it tomorrow at my poster!
03.12.2025 00:59
Hello!
We will be presenting Estimating the Hallucination Rate of Generative AI at NeurIPS. Come if you'd like to chat about epistemic uncertainty for In-Context Learning, or uncertainty more generally. :)
Location: East Exhibit Hall A-C #2703
Time: Friday @ 4:30
Paper: arxiv.org/abs/2406.07457
The circuit hypothesis proposes that LLM capabilities emerge from small subnetworks within the model. But how can we actually test this?
Joint work with @velezbeltran.bsky.social, @maggiemakar.bsky.social, @anndvision.bsky.social, @bleilab.bsky.social, Adria (@far.ai), Achille, and Caro.
Fri 13 Dec, 11 a.m.–2 p.m. PST
East Exhibit Hall A-C #2204
Paper: neurips.cc/virtual/2024...
10.12.2024 22:09
In this paper, we tackle shifts caused by an unknown attribute with an approach opposite to bootstrapping: we use small samples to generate synthetic environments with different "kinds" of classes, and learn more robust data representations.
10.12.2024 22:08
But in the zero-shot setting, we face new classes at test time. To adapt, we need to know which "kind" of classes to emphasize, yet in reality the shift is often unknown.
10.12.2024 22:08
Class distribution shifts are often seen as the easiest kind to handle, and that's often true for supervised learning, thanks to reweighting and resampling.
10.12.2024 22:07
I'm on my way to #NeurIPS2024. On Friday I'll present my latest paper with Yuval Benjamini. The gist is in the comments; come chat with me to hear more!
10.12.2024 22:06
Samples y | x from Treeffuser vs. true densities, for several values of x under three different scenarios. Treeffuser captures arbitrarily complex conditional distributions that vary with x.
I am very excited to share our new NeurIPS 2024 paper and package, Treeffuser! We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.
paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...
(1/8)
Hi, would love to be added! Thanks!
04.12.2024 23:24
Hi! Would love to be added. Thanks!
04.12.2024 23:17
Hi! Would love to be added! Thanks!
04.12.2024 16:37
Hi! I'd love to be added. Thanks!
04.12.2024 16:26
Hi! Could you please add me to the starter pack? Thanks!
04.12.2024 16:15
Hi! Could you please add me to the starter pack? Thanks!
04.12.2024 16:14