
Jeremias Sulam

@jsulam.bsky.social

Assistant Prof. @ JHU 🇦🇷🇺🇸 Mathematics of Data & Biomedical Data Science jsulam.github.io

72 Followers  |  137 Following  |  15 Posts  |  Joined: 22.11.2024

Latest posts by jsulam.bsky.social on Bluesky

Many more details on the derivation, intuition, implementation, and proofs can be found in our paper arxiv.org/pdf/2507.08956, with the amazing Zhenghan Fang, Sam Buchanan, and Mateo Diaz @jhu.edu @hopkinsdsai.bsky.social

22.07.2025 19:25 — 👍 0    🔁 0    💬 0    📌 0

2) In theory, we show that prox-diff sampling requires only O(d/sqrt(eps)) steps to produce a distribution epsilon-away (in KL) from a target one (assuming an oracle prox), faster than the score version (O(d/eps), resp.). Technical assumptions differ across papers, though, so an exact comparison is hard (10/n)

22.07.2025 19:25 — 👍 1    🔁 0    💬 1    📌 0

1) In practice, ProxDM can produce samples from the data distribution much faster than comparable score-based methods like DDPM (Ho et al., 2020), and is even comparable to their ODE alternatives (which are much faster) (9/n)

22.07.2025 19:25 — 👍 0    🔁 0    💬 1    📌 0

In this way, we generalize the Proximal Matching Loss from (Fang et al., 2024) to learn time-specific proximal operators for the densities at each discrete time. The result is Proximal Diffusion Models (ProxDM): sampling by using proximal operators instead of the score. This has two main advantages: (8/n)

22.07.2025 19:25 — 👍 1    🔁 0    💬 1    📌 0
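The resulting sampler can be sketched as follows, as a minimal illustration assuming a hypothetical learned, time-indexed proximal operator `prox_t(v, t)` standing in for the prox of the negative log-density at time t; the exact update rule and step-size schedule in the paper may differ:

```python
import numpy as np

# Sketch of a backward (proximal) sampling loop. `prox_t(v, t)` is a
# hypothetical learned proximal operator; names and the precise update
# are illustrative, not the paper's exact scheme.
def prox_dm_sample(prox_t, dim, num_steps=50, rng=None):
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(dim)          # initialize from Gaussian noise
    h = 1.0 / num_steps                   # discretization step size
    for k in range(num_steps, 0, -1):     # integrate backwards in time
        t = k * h
        z = rng.standard_normal(dim)      # fresh Gaussian noise
        # implicit (backward) update, resolved by the proximal operator
        x = prox_t(x + np.sqrt(2.0 * h) * z, t)
    return x
```

Note the structural difference from a score-based sampler: each iteration solves (via the learned prox) a small implicit problem instead of taking an explicit gradient step along the score.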

So, in order to implement a proximal/backward version of diffusion models, we need a (cheap!) way of solving this optimization problem, i.e. computing the proximal of the log-densities at every single time step. If only there were a way… oh, in come Learned Proximal Networks (7/n)

22.07.2025 19:25 — 👍 0    🔁 0    💬 1    📌 0

What are proximal operators? You can think of them as generalizations of projection operators. For a given (proximable) functional \rho(x), its proximal is defined as the solution of a simple optimization problem: (6/n)

22.07.2025 19:25 — 👍 0    🔁 0    💬 1    📌 0
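Concretely, prox_rho(v) = argmin_x rho(x) + 0.5*||x - v||^2, and for some classic choices the minimizer has a closed form. A small sketch for the illustrative choice rho(x) = lam*||x||_1 (whose prox is soft-thresholding; for the indicator of a convex set, the prox reduces to the projection):

```python
import numpy as np

# prox_rho(v) = argmin_x  rho(x) + 0.5 * ||x - v||^2
# For rho(x) = lam * ||x||_1 the minimizer is soft-thresholding;
# for rho the indicator of a convex set, it is the projection onto it.
def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# entries with |v_i| <= lam are zeroed; the rest shrink toward 0 by lam
print(soft_threshold(np.array([2.0, -0.3, 0.5]), 1.0))
```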

Backward discretization of differential equations has long been studied (cf. gradient descent vs. the proximal point method). Let's go ahead and discretize the same SDE, but backwards! One problem: the update is defined implicitly... But it does admit a closed-form expression in terms of proximal operators! (5/n)

22.07.2025 19:25 — 👍 0    🔁 0    💬 1    📌 0
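To see what the backward (implicit) update buys, here is a toy comparison on the gradient flow of f(x) = x^2/2, where the proximal point step happens to have the closed form x/(1+h). This is purely illustrative of forward vs. backward discretization, not the paper's scheme:

```python
# Forward (explicit) vs backward (implicit) Euler on the gradient flow
# of f(x) = x**2 / 2, i.e. x' = -x. The backward update is exactly the
# proximal point step x_{k+1} = prox_{h*f}(x_k) = x_k / (1 + h), and it
# remains stable even at step sizes where the forward update diverges.
def forward_step(x, h):
    return x - h * x            # explicit gradient step

def backward_step(x, h):
    return x / (1.0 + h)        # implicit step, in closed form via the prox

x_f = x_b = 10.0
h = 3.0                          # deliberately too-large step size
for _ in range(5):
    x_f, x_b = forward_step(x_f, h), backward_step(x_b, h)
# the forward iterate oscillates and blows up; the backward one decays to 0
```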

Crucially, this step relies on being able to compute the score function. Luckily, minimum mean squared error (MMSE) denoisers can do just that (at least asymptotically). But couldn't there be a different discretization strategy for this SDE, you ask? Great question! Let's go *back*... (4/n)

22.07.2025 19:25 — 👍 0    🔁 0    💬 1    📌 0
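The denoiser-to-score connection is Tweedie's formula: for y = x + sigma*z with z ~ N(0, I), the MMSE denoiser D(y) = E[x | y] satisfies grad log p(y) = (D(y) - y) / sigma^2. A quick sanity check on a Gaussian prior, where both sides are available in closed form (the specific numbers are illustrative):

```python
import numpy as np

# Tweedie's formula: grad log p(y) = (D(y) - y) / sigma**2, where
# D(y) = E[x | y] is the MMSE denoiser for y = x + sigma * z.
# Verified on a Gaussian prior x ~ N(0, tau^2), so that
# y ~ N(0, tau^2 + sigma^2) and both sides have closed forms.
tau, sigma = 2.0, 0.5
y = np.linspace(-3.0, 3.0, 7)

mmse_denoiser = tau**2 / (tau**2 + sigma**2) * y        # E[x | y]
score_from_denoiser = (mmse_denoiser - y) / sigma**2     # Tweedie
score_exact = -y / (tau**2 + sigma**2)                   # grad log p(y)

assert np.allclose(score_from_denoiser, score_exact)
```

In practice the exact MMSE denoiser is unavailable, which is why a trained denoising network is used in its place (hence "at least asymptotically" above).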

While elegant in continuous time, the SDE must be discretized to be implemented in practice. In DMs, this has always been done through forward discretization (e.g. Euler–Maruyama), which combines a gradient step along the log-density of the data distribution at the discrete time t (the *score*) with Gaussian noise: (3/n)

22.07.2025 19:25 — 👍 0    🔁 0    💬 1    📌 0
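A single Euler–Maruyama step of the reverse-time SDE might look like the following sketch; `f`, `g`, and `score` are placeholders for the drift, diffusion coefficient, and (learned) score, and sign/variance conventions vary across formulations:

```python
import numpy as np

# One Euler-Maruyama step of the reverse-time SDE
#   dx = [f(x, t) - g(t)**2 * score(x, t)] dt + g(t) dw,
# integrated backwards in time with step size h > 0.
# `score` approximates grad log p_t; all names here are illustrative.
def em_reverse_step(x, t, h, f, g, score, rng):
    z = rng.standard_normal(x.shape)
    drift = f(x, t) - g(t)**2 * score(x, t)
    return x - h * drift + g(t) * np.sqrt(h) * z   # gradient step + noise
```

For instance, with f = 0, g = 1, and the exact score of a standard Gaussian, score(x, t) = -x, the deterministic part of the update contracts x toward the origin.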

First, a (very) brief overview of diffusion models (DMs). DMs work by simulating a process that converts samples from a simple distribution (random noise) into samples from a target distribution of interest. This process is modeled mathematically by a stochastic differential equation (SDE) (2/n)

22.07.2025 19:25 — 👍 0    🔁 0    💬 1    📌 0

Check this out 📢 Score-based diffusion models are powerful, but slow to sample. Could there be something better? Drop the scores, use proximals instead!

We present Proximal Diffusion Models, providing a faster alternative both in theory* and in practice. Here's how it works 🧵 (1/n)

22.07.2025 19:25 — 👍 1    🔁 0    💬 1    📌 0

Awesome to see our cover in @cp-patterns.bsky.social finally out! And kudos go to Zhenzhen Wang for her massive work on biomarker discovery for breast cancer
www.cell.com/patterns/ful...

14.03.2025 16:09 — 👍 2    🔁 1    💬 0    📌 0

Today, on #WomenInScience day, this paper on biomarker discovery for breast cancer, by my amazing student Zhenzhen, has just appeared in @cp-patterns.bsky.social
🎉 Her work shows how to construct fully interpretable biomarkers employing bi-level graph learning! @jhu.edu @hopkinsdsai.bsky.social

12.02.2025 02:30 — 👍 2    🔁 1    💬 0    📌 1
Wanna bet? Testing conceptual importance for more explainable AI Johns Hopkins researchers used betting strategies to help clarify AI models’ decision-making processes.

Nice write-up by @JHUCompSci about @JacopoTeneggi's work. Punch line: interpretability of opaque ML models can be posed as hypothesis tests, for which online (efficient) testing procedures can be derived! www.cs.jhu.edu/news/wanna-b...

13.12.2024 17:47 — 👍 1    🔁 0    💬 0    📌 0
Post image

📣 What should *ML explanations* convey, and how does one report them precisely and rigorously? @neuripsconf.bsky.social
come check
Jacopo Teneggi's work on Testing for Explanations via betting this afternoon! I *bet* you'll like it :) openreview.net/pdf?id=A0HSm... @hopkinsdsai.bsky.social

11.12.2024 18:12 — 👍 1    🔁 0    💬 1    📌 0
realSEUDO for real-time calcium imaging analysis Closed-loop neuroscience experimentation, where recorded neural activity is used to modify the experiment on-the-fly, is critical for deducing causal connections and optimizing experimental time. A cr...

NeurIPS paper: Excited for our work (with Iuliia Dmitrieva+Sergey Babkin) on

"realSEUDO for real-time calcium imaging analysis"
arxiv.org/abs/2405.15701

to be presented tomorrow (Thu 4:30-7:30 PM). realSEUDO is a fully online method for cell detection and activity estimation that runs at >30 Hz.

11.12.2024 14:23 — 👍 3    🔁 2    💬 1    📌 0
