
Christian A. Naesseth

@canaesseth.bsky.social

Assistant Professor of Machine Learning | Generative AI, Uncertainty Quantification, AI4Science | Amsterdam Machine Learning Lab, University of Amsterdam | https://naesseth.github.io

2,891 Followers  |  551 Following  |  115 Posts  |  Joined: 16.11.2024

Latest posts by canaesseth.bsky.social on Bluesky

Preview
Monitoring Risks in Test-Time Adaptation
Encountering shifted data at test time is a ubiquitous challenge when deploying predictive models. Test-time adaptation (TTA) methods address this issue by continuously adapting a deployed model using...

📜 Monitoring Risks in Test-Time Adaptation
(ICML PUT Workshop Oral!)

Time: Fri 18 Jul 10 a.m. PDT
Location: West Meeting Room 220-222
Presenter: @monaschir.bsky.social

arxiv.org/abs/2507.08721
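
For readers new to the topic, the sketch below is a minimal, hypothetical example of one common flavour of test-time adaptation (entropy minimization over normalization parameters, in the spirit of TENT). It only illustrates what "continuously adapting a deployed model" on unlabeled, shifted data means; it is not the monitoring method of the paper above.

```python
# Hypothetical TENT-style adaptation step: minimize prediction entropy on an
# unlabeled test batch by updating only normalization affine parameters.
import torch
import torch.nn as nn


def tta_step(model: nn.Module, x: torch.Tensor, lr: float = 1e-3) -> torch.Tensor:
    """One adaptation step on an unlabeled test batch x."""
    # Collect only the affine parameters of normalization layers.
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.LayerNorm)):
            params += [p for p in (m.weight, m.bias) if p is not None]
    optimizer = torch.optim.SGD(params, lr=lr)

    logits = model(x)                                   # forward pass on shifted data
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean()

    optimizer.zero_grad()
    entropy.backward()                                  # no labels are ever used
    optimizer.step()
    return logits.detach()                              # predictions before the update
```

Because updates like this never see labels, the deployed model's risk can drift silently, which is the kind of setting the paper's risk monitoring is aimed at.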

16.07.2025 09:28 — 👍 2    🔁 0    💬 0    📌 1
Preview
Controlled Generation with Equivariant Variational Flow Matching
We derive a controlled generation objective within the framework of Variational Flow Matching (VFM), which casts flow matching as a variational inference problem. We demonstrate that controlled genera...

📜 Controlled Generation with Equivariant Variational Flow Matching

Time: Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT
Location: East Exhibition Hall A-B #E-3309
Presenter: @eijkelboomfloor.bsky.social

arxiv.org/abs/2506.18340
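
For context, here is a minimal sketch of plain conditional flow matching with linear interpolation paths, the generic objective that VFM reinterprets variationally. It assumes flat (batch, dim) data and an illustrative velocity network v_theta(x, t); it is not the paper's equivariant or controlled objective.

```python
# Generic conditional flow matching loss with linear interpolation paths.
import torch
import torch.nn as nn


def cfm_loss(v_theta: nn.Module, x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor:
    """Regress the model velocity onto the conditional target x1 - x0."""
    t = torch.rand(x0.shape[0], 1, device=x0.device)   # t ~ U[0, 1], one per sample
    xt = (1 - t) * x0 + t * x1                          # point on the interpolation path
    target = x1 - x0                                    # conditional (straight-line) velocity
    return ((v_theta(xt, t) - target) ** 2).mean()
```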

16.07.2025 09:28 — 👍 2    🔁 0    💬 1    📌 0
https://arxiv.org/abs/2502.02472

📜 SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations

Time: Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT
Location: East Exhibition Hall A-B #E-2412
Presenter: @gbarto.bsky.social

arxiv.org/abs/2502.02472

16.07.2025 09:28 — 👍 1    🔁 0    💬 1    📌 0

At #ICML2025 this week?

Come check out our work on controlled generation, simulation-free latent SDEs, and risk monitoring in test-time adaptation, and chat with the awesome students that made it happen!

#SDE #Diffusion #FlowMatching #TTA #UncertaintyQuantification

16.07.2025 09:28 — 👍 5    🔁 1    💬 1    📌 0
https://us05web.zoom.us/j/7780256206?pwd=flsq8weBOvaZgAsr3ThNiHq9d1mXMS.1&omn=89044077993

Tomorrow, Tuesday (July 1st) from 4pm to 5pm (UK time).

"SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations" (arxiv.org/abs/2502.02472) 🚀

Join via Zoom 🔥

t.co/N1C3UFukxd

30.06.2025 10:40 — 👍 3    🔁 1    💬 0    📌 0

🚨🚀
Come hear @gbarto.bsky.social talk about SDE Matching tomorrow!

SDE Matching is a highly efficient and scalable training framework for Latent/Neural SDEs.

You no longer have to discretize or simulate your SDE models when fitting them to data.
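
To make "discretize or simulate" concrete, here is a rough Euler-Maruyama sketch of the solver loop that simulation-based training of a latent SDE dz = f(z, t) dt + g(z, t) dW has to unroll and differentiate through (or adjoint-solve); a simulation-free objective avoids this loop. The function names are placeholders, not the SDE Matching API.

```python
# Illustrative Euler-Maruyama solver for dz = f(z, t) dt + g(z, t) dW.
import torch


def euler_maruyama(f, g, z0: torch.Tensor, t0: float, t1: float, n_steps: int) -> torch.Tensor:
    """Simulate one sample path of dz = f(z, t) dt + g(z, t) dW from t0 to t1."""
    dt = (t1 - t0) / n_steps
    z, t = z0, t0
    for _ in range(n_steps):
        dW = torch.randn_like(z) * dt ** 0.5            # Brownian increment, std sqrt(dt)
        z = z + f(z, t) * dt + g(z, t) * dW
        t = t + dt
    return z
```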

#SDE #Diffusion #FlowMatching #ML

30.06.2025 10:40 — 👍 8    🔁 1    💬 1    📌 0
Preview
ALT: a cartoon dog is sitting at a table with a cup of coffee in front of a fire with the words "this is fine".

Overleaf down 😅 #Overleaf #NeurIPS

14.05.2025 07:24 — 👍 13    🔁 0    💬 0    📌 1

Thanks Oskar!

11.05.2025 13:56 — 👍 0    🔁 0    💬 0    📌 0

Wow, I am floored! With the UAI results in, my lab, together with collaborators, has achieved the #PerfectGame 🏆

100% acceptance rate across an entire #ML cycle! (5/5 #NeurIPS, #ICLR, 2/2 #AISTATS, 2/2 #ICML, 1/1 #UAI)

10 for 10. 🥳🍾🤩

#Science #AI #ElementalAI

07.05.2025 22:02 — 👍 10    🔁 0    💬 1    📌 0

Exciting news: AMLab is happy to have 7 papers accepted at #ICML2025! 🎉

See the thread below for the full list 📝 and meet us in Vancouver to discuss them further! 🇨🇦

🧵 1 / 8

06.05.2025 14:53 — 👍 14    🔁 4    💬 1    📌 0

Oh, rip, the camera-ready PDF on OpenReview is only "privately revealed". Sorry about that :(

proceedings.mlr.press/v258/chen25f...
proceedings.mlr.press/v258/timans2...

03.05.2025 22:19 — 👍 2    🔁 0    💬 0    📌 0
Preview
Max-Rank: Efficient Multiple Testing for Conformal Prediction
Multiple hypothesis testing (MHT) frequently arises in scientific inquiries, and concurrent testing of multiple hypotheses inflates the risk of Type-I errors or false positives, rendering MHT...
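
To make the issue concrete: with m hypotheses each tested at level alpha, the chance of at least one false positive under independent true nulls is 1 - (1 - alpha)^m, roughly 0.40 for m = 10 at alpha = 0.05. The classical Bonferroni correction sketched below controls this by testing each hypothesis at alpha / m; it is only the textbook baseline, not the Max-Rank procedure.

```python
# Bonferroni: test each of the m hypotheses at level alpha / m.
import numpy as np


def bonferroni_reject(p_values: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Reject H_i iff p_i <= alpha / m, controlling the family-wise error rate."""
    m = len(p_values)
    return p_values <= alpha / m


rng = np.random.default_rng(0)
p = rng.uniform(size=10)            # p-values simulated under the null
print(bonferroni_reject(p))         # with the correction, false positives stay rare
```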

openreview.net/forum?id=1Yi...

openreview.net/forum?id=29c...

Check out the papers and/or the posters tomorrow (Sunday)!

#Statistics #SMC #ConformalPrediction #Testing #ML #Bayes

03.05.2025 01:37 — 👍 2    🔁 0    💬 1    📌 0

#AISTATS2025 happening in Phuket, Thailand! I have two papers at the conference:

1. Max-Rank: Efficient Multiple Testing for Conformal Prediction

2. Variational Combinatorial Sequential Monte Carlo for Bayesian Phylogenetics in Hyperbolic Space

Both at poster session 2!

03.05.2025 01:37 — 👍 12    🔁 1    💬 1    📌 0

Thanks Pierre! Was great meeting in person as well :)

30.04.2025 04:31 — 👍 1    🔁 0    💬 1    📌 0

Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!

#ML #SDE #Diffusion #GenAI 🤖🧠

30.04.2025 00:02 — 👍 19    🔁 2    💬 1    📌 0

If you missed it and are attending #AABI at NTU today you can find me presenting it again at the afternoon poster session!

approximateinference.org

28.04.2025 23:54 — 👍 7    🔁 2    💬 0    📌 0
Preview
SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations
The Latent Stochastic Differential Equation (SDE) is a powerful tool for time series and sequence modeling. However, training Latent SDEs typically relies on adjoint sensitivity methods, which depend ...

Paper: arxiv.org/abs/2502.02472
FPI workshop: sites.google.com/view/fpiwork...
DeLTa workshop: delta-workshop.github.io

Joint work with my PhD student
@gbarto.bsky.social and our collaborator Dmitry Vetrov.

27.04.2025 23:27 — 👍 1    🔁 0    💬 0    📌 0

Come check out SDE Matching at the #ICLR2025 workshops, a new simulation-free framework for training fully general Latent/Neural SDEs (generalisation of diffusion and bridge models).

FPI: Morning poster session
DeLTa: Afternoon poster session

#SDE #Bayes #GenAI #Diffusion #Flow

27.04.2025 23:27 — 👍 13    🔁 1    💬 1    📌 1

The calm before the storm #ICLR2025 🔥🔥🔥

23.04.2025 04:47 — 👍 48    🔁 6    💬 0    📌 2
Preview
E-Valuating Classifier Two-Sample Tests
We introduce a powerful deep classifier two-sample test for high-dimensional data based on E-values, called E-C2ST. Our test combines ideas from existing work on split likelihood ratio tests and...
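
For background, the standard classifier two-sample test (C2ST) works as sketched below: train a classifier to tell the two samples apart and test whether its held-out accuracy exceeds chance. E-C2ST builds on this idea using E-values; the sketch is the vanilla baseline with a logistic-regression classifier, not the paper's test.

```python
# Vanilla C2ST: classify P vs Q, then test held-out accuracy against chance.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def c2st_p_value(X: np.ndarray, Y: np.ndarray) -> float:
    """One-sided p-value for H0: both samples come from the same distribution."""
    data = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    Xtr, Xte, ytr, yte = train_test_split(data, labels, test_size=0.5, random_state=0)
    acc = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)
    n = len(yte)
    z = (acc - 0.5) / np.sqrt(0.25 / n)   # under H0, accuracy is approx. N(0.5, 1/(4n))
    return float(norm.sf(z))
```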

Attending #ICLR2025 and #AABI2025. Presenting at the conference and workshops:

1. E-Valuating Classifier Two-Sample Tests, Friday, Hall 3 + Hall 2B #437
2. SDE Matching, Sunday-Tuesday, FPI/DeLTa/AABI

openreview.net/forum?id=dwF...
arxiv.org/abs/2502.02472

lmk if you want to chat!

21.04.2025 06:00 — 👍 14    🔁 0    💬 1    📌 0
Preview
Generative modelling in latent space
Latent representations for generative models.

New blog post: let's talk about latents!
sander.ai/2025/04/15/l...

15.04.2025 09:43 — 👍 74    🔁 18    💬 3    📌 5

I'm not sure I follow this comment. I understood your earlier comment about disliking mandatory cites as leaning towards allowing more author discretion, but this comment reads to me like an argument for less author discretion?

13.04.2025 16:11 — 👍 0    🔁 0    💬 1    📌 0

However, if you cite something that you think is actively bad/wrong, I think it is perfectly fine to argue that point in the related work/discussion section, or perhaps in an extended part of it in the supplementary/appendix.

13.04.2025 15:50 — 👍 1    🔁 0    💬 0    📌 0

Ah, I see. Perhaps I then misunderstood your comment about being opinionated about what is worth citing.

As I mentioned, my comment wasn't about this specific case, as it is, from my understanding, quite a bit more complex than what was available on OpenReview.

13.04.2025 15:50 — 👍 1    🔁 0    💬 2    📌 0

Just to be extra clear, my comment was not (and is not) a comment about this specific case.

My comment was about whether it is ok in general to not cite relevant work because an author dislikes it and therefore doesn't think it is worth citing.

Of course relevance is to some degree subjective.

13.04.2025 15:45 — 👍 0    🔁 0    💬 1    📌 0

Indeed, as I mentioned, it is not black and white and there is of course a cutoff. But not citing relevant papers because of personal taste is the wrong direction imo.

13.04.2025 15:03 — 👍 2    🔁 0    💬 2    📌 0

Of course there is a grayscale, but I don't think one's personal opinion about a work's worth should be given much weight when deciding whether a citation is warranted or not.

(note these are comments about citation norms in general and not this case in particular)

13.04.2025 13:13 — 👍 1    🔁 0    💬 1    📌 0

In general, I believe in stronger norms, as weaker ones would allow for even more abuse and gaming than whatever our current norms are. If the work is relevant, it should be cited. If the work is highly relevant, it should be cited and discussed. In the discussion you can ofc give your opinion about it.

13.04.2025 13:13 — 👍 2    🔁 0    💬 1    📌 0
Preview
Home

Working on probabilistic modeling, inference, and decision-making? Join us at #AABI 2025 if you're going to Singapore later this month!

Register here (free but spots are limited!): approximateinference.org

12.04.2025 21:05 — 👍 7    🔁 2    💬 1    📌 0

Make sure to get your tickets to AABI if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic modeling, inference, and decision-making!

Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org

#Bayes #MachineLearning #ICLR2025 #AABI2025

13.04.2025 07:43 — 👍 17    🔁 8    💬 0    📌 1
