CSML IIT Lab

@pontilgroup.bsky.social

Computational Statistics and Machine Learning (CSML) Lab | PI: Massimiliano Pontil | Webpage: csml.iit.it | Active research lines: Learning theory, ML for dynamical systems, ML for science, and optimization.

557 Followers  |  13 Following  |  28 Posts  |  Joined: 24.11.2024

Latest posts by pontilgroup.bsky.social on Bluesky

Excited to share our group's latest work at #AISTATS2025! 🎓
Tackling concentration in dependent data settings with empirical Bernstein bounds for Hilbert space-valued processes.
📍 Catch the poster tomorrow!

🔍 See the original tweet for details!

02.05.2025 18:36 | 👍 3    🔁 0    💬 0    📌 0
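For context on the flavor of result announced above: the classical empirical Bernstein bound for i.i.d. scalar data (Maurer & Pontil, 2009) reads as follows. The AISTATS paper extends variance-adaptive bounds of this type to dependent, Hilbert space-valued processes, so its actual statement differs.

```latex
% Empirical Bernstein bound: for i.i.d. X_1, \dots, X_n \in [0,1],
% with probability at least 1 - \delta,
\mathbb{E}[X] \;\le\; \bar{X}_n
  + \sqrt{\frac{2 V_n \ln(2/\delta)}{n}}
  + \frac{7 \ln(2/\delta)}{3(n-1)},
\qquad
V_n = \frac{1}{n-1} \sum_{i=1}^{n} \bigl(X_i - \bar{X}_n\bigr)^2
```

The sample variance $V_n$ replaces the worst-case range appearing in Hoeffding-type bounds, which is what makes the bound "empirical" and tighter for low-variance data.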

DeltaProduct is here! Achieve better state tracking through highly parallel execution. Explore more! 🚀

09.04.2025 10:11 | 👍 5    🔁 1    💬 0    📌 0
Slow dynamical modes from static averages: In recent times, efforts have been made to describe the evolution of a complex system not through long trajectories, but via the evolution of probability distributions. This more collective app...

[P11] (submitted to The Journal of Chemical Physics)
chemrxiv.org/engage/chemr...

Kooplearn library:
kooplearn.readthedocs.io/latest/

For a longer version of this thread, see this blog post:
vladi-iit.github.io/posts/2024-1...

15.01.2025 14:34 | 👍 2    🔁 0    💬 0    📌 0
Learning Dynamical Systems via Koopman Operator Regression in Reproducing Kernel Hilbert Spaces: We study a class of dynamical systems modelled as Markov chains that admit an invariant distribution via the corresponding transfer, or Koopman, operator. While data-driven algorithms to reconstruct s...

Publications:
[P1] NeurIPS 2022
arxiv.org/abs/2205.14027

[P2] NeurIPS 2023
arxiv.org/abs/2302.02004

[P3] ICML 2024
arxiv.org/abs/2312.13426

[P4] NeurIPS 2023
arxiv.org/abs/2306.04520

[P5] ICLR 2024
arxiv.org/abs/2307.09912

[P6] NeurIPS 2024
arxiv.org/abs/2405.12940

15.01.2025 14:34 | 👍 2    🔁 0    💬 1    📌 0

14/ Looking ahead, we're excited to tackle new challenges:
• Learning from partial observations
• Modeling non-time-homogeneous dynamics
• Expanding applications in neuroscience, genetics, and climate modeling

Stay tuned for groundbreaking updates from our team! 🌍

15.01.2025 14:34 | 👍 2    🔁 0    💬 1    📌 0

๐Ÿ™ Collaborations with the Dynamic Legged Systems group led by Claudio Semini and the Atomistic Simulations group led by Michele Parrinello enriched our research, resulting in impactful works like [P9, P10] and [P7, P11].

15.01.2025 14:34 โ€” ๐Ÿ‘ 1    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0

12/ This journey wouldn't have been possible without the inspiring collaborations that shaped our work.

🌟 Special thanks to Karim Lounici from École Polytechnique, whose insights were a major driving force behind many projects.

15.01.2025 14:34 | 👍 1    🔁 0    💬 1    📌 0
Predicting the quantiles for opening/closing of the Chignolin protein in the next simulation step

11/ One of our most exciting results:
[P8] NeurIPS 2024 proposed Neural Conditional Probability (NCP) to efficiently learn conditional distributions. It simplifies uncertainty quantification and guarantees accuracy for nonlinear, high-dimensional data.

15.01.2025 14:34 | 👍 2    🔁 0    💬 1    📌 0
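Schematically, NCP-style methods factor conditional expectations through a truncated expansion of the conditional expectation operator. In our notation (a sketch of the general idea, not necessarily the paper's exact parameterization):

```latex
% Learn features u_i, v_i and weights \sigma_i approximating the
% conditional density ratio:
\frac{p(y \mid x)}{p(y)} \;\approx\; 1 + \sum_{i=1}^{d} \sigma_i\, u_i(x)\, v_i(y)

% Conditional expectations of any test function g then reduce to
% unconditional averages, computable from the training sample:
\mathbb{E}[g(Y) \mid X = x] \;\approx\; \mathbb{E}[g(Y)]
  + \sum_{i=1}^{d} \sigma_i\, u_i(x)\, \mathbb{E}[g(Y)\, v_i(Y)]
```

Once the features are trained, uncertainty quantification for a new conditioning point $x$ costs only a weighted sum of precomputed averages.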

10/ [P7] NeurIPS 2024 developed methods to discover slow dynamical modes in systems like molecular simulations. This is transformative for studying rare events and costly data acquisition scenarios in atomistic systems.

15.01.2025 14:34 | 👍 1    🔁 0    💬 1    📌 0

9/ Addressing continuous dynamics:
[P6] NeurIPS 2024 introduced a physics-informed framework for learning Infinitesimal Generators (IG) of stochastic systems, ensuring robust spectral estimation.

15.01.2025 14:34 | 👍 1    🔁 0    💬 1    📌 0

8/ 🌟 Representation learning takes center stage in:
[P5] ICLR 2024
We combined neural networks with operator theory via Deep Projection Networks (DPNets). This approach enhances robustness, scalability, and interpretability for dynamical systems.

15.01.2025 14:34 | 👍 1    🔁 0    💬 1    📌 0
Free energy surface of Chignolin protein folding

7/ 📈 Scaling up:
[P4] NeurIPS 2023 introduced a Nyström sketching-based method to reduce computational costs from cubic to almost linear without sacrificing accuracy. Validated on massive datasets like molecular dynamics; see figure.

15.01.2025 14:34 | 👍 1    🔁 0    💬 1    📌 0
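The cubic-to-almost-linear speedup above comes from sketching the kernel matrix. As a generic illustration (not [P4]'s estimator; all names here are ours), a Nyström factorization replaces the n×n kernel matrix with an n×m factor built from m landmark points:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # Gaussian kernel matrix between the rows of A and the rows of B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def nystrom_factor(X, m, gamma, rng):
    """Nystrom sketch: K ~ F @ F.T with F of shape (n, m).

    Cost is O(n m^2 + m^3) instead of the O(n^3) needed to factor
    the full n x n kernel matrix.
    """
    idx = rng.choice(len(X), size=m, replace=False)
    Kmm = rbf_kernel(X[idx], X[idx], gamma)        # (m, m) landmark block
    Knm = rbf_kernel(X, X[idx], gamma)             # (n, m) cross block
    w, V = np.linalg.eigh(Kmm)
    w = np.clip(w, 1e-8, None)                     # guard tiny/negative eigenvalues
    return Knm @ V @ np.diag(w ** -0.5) @ V.T      # F = Knm @ Kmm^{-1/2}

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 3))
F = nystrom_factor(X, m=100, gamma=0.1, rng=rng)
K = rbf_kernel(X, X, gamma=0.1)
rel_err = np.linalg.norm(K - F @ F.T) / np.linalg.norm(K)
# rel_err is small: 100 landmarks capture this smooth kernel well
```

Downstream solvers then work with the thin factor F instead of K, which is where the near-linear scaling in n comes from.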
Effects of metric distortion in learning eigenvalues (left) and stabilization of forecasting (right) for Ornstein-Uhlenbeck process

6/ [P3] ICML 2024 addressed a critical issue in TO-based modeling: reliable long-term predictions.
Our Deflate-Learn-Inflate (DLI) paradigm ensures uniform error bounds, even for infinite time horizons. This method stabilized predictions in real-world tasks; see the figure.

15.01.2025 14:34 | 👍 2    🔁 0    💬 1    📌 0

5/ [P2] NeurIPS 2023 advanced TOs with theoretical guarantees for spectral decomposition, previously lacking finite-sample guarantees. We developed sharp learning rates, enabling accurate, reliable models for long-term system behavior.

15.01.2025 14:34 | 👍 1    🔁 0    💬 1    📌 0
Koopman Operator Regression Pipeline

4/ 🔑 The journey began with:
[P1] NeurIPS 2022
We introduced the first ML formulation for learning TOs, which led to the development of the open-source Kooplearn library. This step laid the groundwork for exploring the theoretical limits of operator learning from finite data.

15.01.2025 14:34 | 👍 2    🔁 0    💬 1    📌 0
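The TO-learning formulation of [P1] is, at its core, operator regression in a feature space. Kooplearn's actual API differs; below is a minimal EDMD-style sketch of the idea, with all function and variable names ours:

```python
import numpy as np

def fit_transfer_operator(X, Y, feature_map, reg=1e-6):
    """Ridge-regularized least-squares estimate of the transfer operator.

    X, Y: (n, d) arrays of consecutive states, Y[i] the successor of X[i].
    feature_map: maps an (n, d) array to an (n, k) feature array.
    Returns the (k, k) matrix K with feature_map(Y) ~ feature_map(X) @ K.
    """
    PhiX, PhiY = feature_map(X), feature_map(Y)
    k = PhiX.shape[1]
    # Normal equations: (PhiX^T PhiX + reg * I) K = PhiX^T PhiY
    return np.linalg.solve(PhiX.T @ PhiX + reg * np.eye(k), PhiX.T @ PhiY)

# Toy check: for the linear system x_{t+1} = A x_t with identity features,
# the estimated operator recovers A (up to transposition), and its
# eigenvalues are the system's modes.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
X = rng.standard_normal((500, 2))
Y = X @ A.T
K = fit_transfer_operator(X, Y, lambda Z: Z)
print(np.sort(np.linalg.eigvals(K).real))  # ~ [0.5, 0.9]
```

With richer (e.g. kernel or neural) feature maps, the same least-squares structure yields forecasts and spectral estimates for nonlinear systems.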

3/ TOs describe system evolution over finite time intervals, while IGs capture instantaneous rates of change. Their spectral decomposition is key for identifying dominant modes and understanding long-term behavior in complex or stochastic systems.

15.01.2025 14:34 | 👍 1    🔁 0    💬 1    📌 0
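In standard operator-theoretic notation (textbook definitions, not taken from the thread), the relationship between TOs and IGs reads:

```latex
% Transfer (Koopman) operator over a time step t, acting on observables f:
(\mathcal{A}_t f)(x) = \mathbb{E}\bigl[\, f(X_t) \mid X_0 = x \,\bigr]

% Infinitesimal generator: the instantaneous rate of change, with
% \mathcal{A}_t = e^{t \mathcal{L}} recovering finite-time evolution:
\mathcal{L} f = \lim_{t \to 0^+} \frac{\mathcal{A}_t f - f}{t}

% Spectral decomposition (e.g. self-adjoint case): eigenvalues
% \lambda_i \le 0 of \mathcal{L} set the decay rates of the modes,
% and eigenvalues near 0 are the slow, dominant modes:
\mathcal{A}_t f = \sum_i e^{t \lambda_i}\, \langle f, \varphi_i \rangle\, \varphi_i
```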

2/ ๐ŸŒ Our work revolves around Markov/Transfer Operators (TO) and their Infinitesimal Generators (IG)โ€”tools that allow us to model complex dynamical systems by understanding their evolution in higher-dimensional spaces. Hereโ€™s why this matters.

15.01.2025 14:34 โ€” ๐Ÿ‘ 1    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0

1/ 🚀 Over the past two years, our team, CSML at IIT, has made significant strides in the data-driven modeling of dynamical systems. Curious about how we use advanced operator-based techniques to tackle real-world challenges? Let's dive in! 🧵👇

15.01.2025 14:34 | 👍 5    🔁 3    💬 1    📌 0

An inspiring dive into understanding dynamical processes through 'The Operator Way.' A fascinating approach made accessible for everyone; check it out! 👇👀

15.01.2025 10:31 | 👍 4    🔁 1    💬 0    📌 0
Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues: Linear Recurrent Neural Networks (LRNNs) such as Mamba, RWKV, GLA, mLSTM, and DeltaNet have emerged as efficient alternatives to Transformers in large language modeling, offering linear scaling with…

Excited to present
"Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues"
at the M3L workshop at #NeurIPS
https://buff.ly/3BlcD4y

If interested, you can attend the presentation on the 14th at 15:00, stop by the afternoon poster session, or DM me to discuss :)

10.12.2024 22:52 | 👍 9    🔁 3    💬 0    📌 0
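The core observation of the paper: restricting a linear recurrence's eigenvalues to [0, 1], as many LRNNs do, rules out sign flips, while allowing negative eigenvalues enables state tracking such as parity. A minimal toy illustration (ours, not the paper's architecture):

```python
def parity_via_recurrence(bits):
    """Track running parity with a 1-d linear recurrence h_t = a_t * h_{t-1}.

    Choosing a_t = -1 when the input bit is 1 (and +1 otherwise) flips the
    hidden state's sign on every 1, so sign(h) encodes the parity. With
    eigenvalues restricted to [0, 1], no such sign flip is possible.
    """
    h = 1.0
    for b in bits:
        a = -1.0 if b else 1.0   # negative eigenvalue on a "flip" input
        h = a * h
    return 0 if h > 0 else 1     # parity of the number of ones seen

print(parity_via_recurrence([1, 0, 1, 1]))  # → 1 (three ones)
```

Parity is the simplest instance of the state-tracking tasks studied in the paper; the same sign-flip mechanism generalizes to richer group-structured state.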

In his book "The Nature of Statistical Learning Theory", V. Vapnik wrote:
"When solving a given problem, try to avoid a more general problem as an intermediate step"

12.12.2024 17:19 | 👍 8    🔁 3    💬 1    📌 0

Join us at our posters and talks to connect, share ideas, and explore collaborations. 🚀✨

10.12.2024 02:38 | 👍 3    🔁 0    💬 0    📌 0

🔬 Fine-tuning Foundation Models for Molecular Dynamics: A Data-Efficient Approach with Random Features
✍️ @pienovelli.bsky.social, L. Bonati, P. Buigues, G. Meanti, L. Rosasco, M. Pontil | 📅 ML4PS Workshop, Dec 15.

10.12.2024 02:38 | 👍 3    🔁 0    💬 1    📌 0

🔗 Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues
✍️ R. Grazzi, J. Siems, J. Franke, A. Zela, F. Hutter, M. Pontil
📃 https://arxiv.org/abs/2411.12537 | 📅 Oral @ M3L workshop, Dec 14, 15:00 - 15:15.

10.12.2024 02:38 | 👍 3    🔁 0    💬 1    📌 0

🌊 Learning the Infinitesimal Generator of Stochastic Diffusion Processes
✍️ V. Kostic, H. Halconruy, @tdevergne.bsky.social, K. Lounici, M. Pontil
📃 https://arxiv.org/abs/2405.12940 | 📅 Poster #5410 Dec 13, 16:30 - 19:30.

10.12.2024 02:38 | 👍 3    🔁 0    💬 1    📌 0

🧮 Generalization of Hamiltonian algorithms
✍️ A. Maurer
📃 https://arxiv.org/abs/2405.14469 | 📅 Poster #3706 Dec 13, 16:30 - 19:30.

10.12.2024 02:38 | 👍 3    🔁 0    💬 1    📌 0

๐Ÿ” Neural Conditional Probability for Uncertainty Quantification
โœ๏ธV. Kostic, G. Pacreau, @giaturri.bsky.social, @pienovelli.bsky.social, K. Lounici, M. Pontil
๐Ÿ“ƒhttps://arxiv.org/abs/2407.01171 | ๐Ÿ“… Poster #4007 Dec 13, 11:00 - 14:00.

10.12.2024 02:38 โ€” ๐Ÿ‘ 3    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0

โš™๏ธ From Biased to Unbiased Dynamics: An Infinitesimal Generator Approach
โœ๏ธ @tdevergne.bsky.social, V. Kostic, M. Parrinello, M. Pontil
๐Ÿ“ƒhttps://arxiv.org/abs/2406.09028 | ๐Ÿ“… Poster #3806 Dec 12, 16:30 - 19:30.

10.12.2024 02:38 โ€” ๐Ÿ‘ 3    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0

๐ŸŒ Operator World Models for Reinforcement Learning
โœ๏ธ @pienovelli.bsky.social, @marcopra.bsky.social, M. Pontil, C. Ciliberto
๐Ÿ“ƒhttps://arxiv.org/abs/2406.19861 | ๐Ÿ“… Poster #6907 Dec 12, 16:30 - 19:30.

10.12.2024 02:38 โ€” ๐Ÿ‘ 3    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 1

At #NeurIPS2024 🇨🇦 our group will present 7 contributions! These span a diverse array of topics: from theoretical advances in stochastic processes and reinforcement learning to applications in molecular dynamics and uncertainty quantification.

10.12.2024 02:38 | 👍 6    🔁 2    💬 1    📌 1
