Marco Cuturi (@marcocuturi.bsky.social)

machine learning researcher @ Apple Machine Learning Research

786 Followers · 58 Following · 30 Posts · Joined Dec 2023
3 weeks ago

Small Language Models (SLMs) don’t have the capacity to remember everything in their training data. Which tokens should they learn to predict, and when should they ask for help? We tackle this question in our new preprint.

You can check it out on arxiv: arxiv.org/abs/2602.12005
🧵 1/7

2 months ago

With other folks at 🍏, @brunokm.bsky.social has worked on a complete(d) parameterisation for NNs that can *transfer* locally tuned hyperparameters: tune optimizers' parameters (e.g. LR) *per module/depth* using an evolutionary search on small models → the performance gains transfer to much larger models.
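For concreteness, here's a minimal sketch of per-module optimizer settings in optax; the module names and learning rates below are made-up placeholders, not the values found by the paper's evolutionary search.

import jax.numpy as jnp
import optax

# Toy parameter tree with one "module" per top-level key.
params = {
    "embed": {"w": jnp.ones((100, 16))},
    "block_0": {"w": jnp.ones((16, 16))},
    "head": {"w": jnp.ones((16, 10))},
}

# One optimizer per module; in the transfer setting, these per-module LRs
# are the quantities tuned on a small model, then reused on a large one.
opt = optax.multi_transform(
    {"embed": optax.adam(1e-3),
     "block_0": optax.adam(3e-4),
     "head": optax.adam(1e-4)},
    param_labels={"embed": {"w": "embed"},
                  "block_0": {"w": "block_0"},
                  "head": {"w": "head"}},
)
state = opt.init(params)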

3 months ago

I am at #NeurIPS2025. We can chat about data mixing, efficient training, ML@Apple and more.

3 months ago

To me, a simple and cheap fix could be an automatic reveal of the names of everyone involved in the review process, 5 years after the decision. This would be opt-in for authors. Reviewers/ACs/SACs would be more careful when writing.

3 months ago

I understand it's a challenge to implement this, but when bad actors do not pay a price for cheating, the folks who pay the steepest price end up being the authors who spend endless hours writing rebuttals. I feel AI conferences have prioritized growth over fairness to authors.

3 months ago

Namely, it's impossible, when PCs see abuse/collusion or very poor quality work (submissions/reviews), to temporarily or permanently ban bad players. So every year/conference is again up for grabs if you're one of those bad actors.

3 months ago

My 2 cents on the ICLR drama: the system has been under attack for years. But we have also heard, year after year, that there is no way to enforce protection mechanisms (e.g. deny lists for dishonest authors or reviewers) for legal reasons.

4 months ago

📢 We're looking for a researcher in cogsci, neuroscience, linguistics, or related disciplines to work with us at Apple Machine Learning Research! We're hiring a one-year interdisciplinary AIML Resident to work on understanding reasoning and decision making in LLMs. 🧵

4 months ago
[Link preview: "On Fitting Flow Models with Large Sinkhorn Couplings" (arXiv)]

We also include two coupling approaches advocated this summer to improve FM training: using either very large, sharp Sinkhorn couplings (arxiv.org/abs/2506.05526) or, even better, semidiscrete couplings (arxiv.org/abs/2509.25519), as proposed with Alireza Mousavi-Hosseini and
@syz.bsky.social

4 months ago

We have been working with Michal Klein on pushing a module to train *flow matching* models using JAX. This is shipped as part of our new release of the OTT-JAX toolbox (github.com/ott-jax/ott)

The tutorial to do so is here: ott-jax.readthedocs.io/tutorials/ne...
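For readers new to FM, the objective trained by such a module looks roughly like this (a minimal sketch of the conditional flow-matching loss, not the OTT-JAX API; see the tutorial above for that):

import jax
import jax.numpy as jnp

def fm_loss(params, velocity_fn, x0, x1, key):
    # x0: noise batch, x1: data batch, paired rows, both of shape (b, d).
    t = jax.random.uniform(key, (x0.shape[0], 1))
    xt = (1.0 - t) * x0 + t * x1   # point on the segment joining x0 to x1
    target = x1 - x0               # velocity of that segment
    return jnp.mean((velocity_fn(params, xt, t) - target) ** 2)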

4 months ago

Afternoon talks by:
@marcocuturi.bsky.social
Elena Agliari
Jan Gerken

Thanks all for the great talks, conversations, and engagement! Fingers crossed we get to host this event a 4th time next year and see many of you back in Gothenburg 🤞🇸🇪

4 months ago

🚀 Excited to share LinEAS, our new activation steering method accepted at NeurIPS 2025! It approximates optimal transport maps end-to-end to precisely guide 🧭 activations, achieving finer control 🎚️ with ✨ fewer than 32 ✨ prompts!

💻 https://github.com/apple/ml-lineas
📄 https://arxiv.org/abs/2503.10679

4 months ago

It's that time of the year! 🎁

The Apple Machine Learning Research (MLR) team in Paris is hiring a few interns to do cool research for ±6 months 🚀🚀 & work towards publications/OSS.

Check requirements and apply: ➡️ jobs.apple.com/en-us/detail...

More❓ → ✉️ mlr_paris_internships@group.apple.com

5 months ago

While working on semidiscrete flow matching this summer (➡️ arxiv.org/abs/2509.25519), I kept looking for a video illustrating that the velocity field solving the Benamou-Brenier OT problem is NOT constant w.r.t. time ⏳... so I made it myself, take a look! ott-jax.readthedocs.io/tutorials/th...
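In formulas (a standard derivation, assuming an optimal Monge map T exists for the quadratic cost), the displacement interpolation and its Eulerian velocity field read:

T_t := (1-t)\,\mathrm{Id} + t\,T, \qquad v_t(z) = (T - \mathrm{Id})\big(T_t^{-1}(z)\big).

Each particle x_t = T_t(x_0) travels along a straight line at constant speed, yet at a fixed location z the field v_t(z) changes with t, because different particles pass through z at different times.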

5 months ago

LLMs are currently one big parameter block that stores all sorts of facts. In our new preprint, we add context-specific memory parameters to the model, and pretrain the model along with a big bank of memories.

📑 arxiv.org/abs/2510.02375

[1/10] 🧵
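As a purely toy illustration of the general idea (a trainable memory bank queried by context, not the preprint's actual architecture), one could imagine something like:

import jax
import jax.numpy as jnp

def inject_memory(h, memory_bank, context_emb, k=4):
    # h: (d,) hidden state; memory_bank: (n_mem, d) trainable bank;
    # context_emb: (d,) embedding of the current context.
    scores = memory_bank @ context_emb        # score every memory slot
    _, idx = jax.lax.top_k(scores, k)         # keep the k best slots
    return h + memory_bank[idx].mean(axis=0)  # pool and add to the state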

5 months ago

Wow! Finally OT done on the entire training set to train a diffusion model!

5 months ago

Then there's always ε regularization. When ε=∞, we recover vanilla FM. At this point we're not completely sure whether ε=0 is better than ε>0; they both work! ε=0 has a minor edge at larger scales (sparse gradients, faster assignment, slightly better metrics), but ε>0 is also useful (faster SGD).
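To make the ε knob concrete, here is how an entropic coupling between noise and data batches can be computed with OTT-JAX (a sketch; module paths follow the current docs, double-check against your version):

import jax
from ott.geometry import pointcloud
from ott.problems.linear import linear_problem
from ott.solvers.linear import sinkhorn

k0, k1 = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(k0, (512, 8))   # noise batch
y = jax.random.normal(k1, (512, 8))   # data batch

# Smaller epsilon -> sharper coupling, closer to a hard assignment;
# epsilon -> infinity recovers the independent pairing of vanilla FM.
geom = pointcloud.PointCloud(x, y, epsilon=1e-2)
out = sinkhorn.Sinkhorn()(linear_problem.LinearProblem(geom))
coupling = out.matrix                 # (512, 512), used to resample pairs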

5 months ago

Thanks for the nice comments! My interpretation is that we're using OT to produce pairs (x_i, y_i) to guide FM. With that, it's up to you to provide an inductive bias (a model) that gets f(x_i) ≈ y_i while generalizing. The hard OT assignment could be that model, but it would fail to generalize.

5 months ago

For people who like OT, IMHO the very encouraging insight is that we have evidence that the "better" you solve your OT problem, the more flow matching metrics improve. This is Figure 3.

5 months ago

Thanks @rflamary.bsky.social! Yes, exactly. We try to summarize this tradeoff in Table 1, in which we show that for a one-off preprocessing cost, we get all the (noise, data) pairings you might need during flow matching training for "free" (up to the MIPS lookup for each noise sample).

5 months ago
[Link preview: "Flow Matching with Semidiscrete Couplings" (arXiv)]

the paper is out: arxiv.org/abs/2509.25519

Michal also did a fantastic push to open source the semidiscrete solver prepared by Stephen and Alireza in the OTT-JAX library. We plan to open source the flow pipeline in JAX soon. Please reach out if interested!

5 months ago

This is much faster than using Sinkhorn, and generates with higher quality.

As a bonus, you can forget about entropy regularization (set ε=0), apply things like correctors to guidance, and use it on consistency-type models, or even with conditional generation.

5 months ago

The great thing with SD-OT is that it only needs to be computed once. You only need to store one real number per data sample, and you can precompute these numbers once & for all using stochastic convex optimization.

When training a flow model, you assign noise to data using these numbers, as sketched below.
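Here is a sketch of that assignment step, assuming a squared-Euclidean cost (g holds the one scalar per data sample mentioned above; the argmax is exactly the MIPS lookup):

import jax.numpy as jnp

def assign_noise_to_data(x, data, g):
    # x: (b, d) noise batch; data: (n, d); g: (n,) precomputed dual potentials.
    # argmin_j ||x - y_j||^2 / 2 - g_j reduces to a max-inner-product search
    # over scores adjusted by the potentials:
    scores = x @ data.T + (g - 0.5 * jnp.sum(data**2, axis=1))
    return jnp.argmax(scores, axis=1)   # matched data index for each noise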

5 months ago

In practice, however, this idea only begins to work when using massive batch sizes (see arxiv.org/abs/2506.05526). The problem is that the cost of running Sinkhorn on millions of points can quickly balloon...

Our solution? Rely on semidiscrete OT at scales that were never considered before.

5 months ago

Our two phenomenal interns, Alireza Mousavi-Hosseini and Stephen Zhang @syz.bsky.social, have been cooking some really cool work with Michal Klein and me over the summer.

Relying on optimal transport couplings (to pick noise and data pairs) should, in principle, help guide flow matching.

🧵

6 months ago

New Apple #ML Research Highlight: The "Super Weight": How Even a Single Parameter Can Determine an #LLM's Behavior machinelearning.apple.com/research/the...

6 months ago

You're right that the PCs' message uses space as a justification to accept fewer papers, but it does not explicitly say that the acceptance rate should be lower than the historical standard of 25%. In my SAC batch, the average acceptance rate before their email was closer to 30%, but that's just me..

6 months ago

I see it a bit differently. The new system pushed reviewers aggressively to react to rebuttals. I think this is a great change, but it has clearly skewed results, creating many spurious grade upgrades. Now the system must be rebalanced in the other direction by SACs/ACs for results to be fair..

7 months ago
[Link preview: "Sharded Sinkhorn" (ott documentation)]

Scaling up the computation of optimal transport couplings to hundreds of thousands of 3k-dimensional vectors, made easy using sharding and OTT-JAX! Check out this notebook; it only takes a few lines of code thanks to JAX's native sharding abilities: ott-jax.readthedocs.io/en/latest/tu...
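For flavor, the recipe looks roughly like this (a sketch with tiny shapes; the notebook above has the exact, tested version and scales it to ~1e5 points in 3k dimensions):

import jax
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P
from ott.geometry import pointcloud
from ott.problems.linear import linear_problem
from ott.solvers.linear import sinkhorn

# Shard both point clouds across all devices along the batch axis.
mesh = Mesh(np.array(jax.devices()), ("i",))
shard = NamedSharding(mesh, P("i", None))
x = jax.device_put(jax.random.normal(jax.random.PRNGKey(0), (4096, 64)), shard)
y = jax.device_put(jax.random.normal(jax.random.PRNGKey(1), (4096, 64)), shard)

out = sinkhorn.Sinkhorn()(
    linear_problem.LinearProblem(pointcloud.PointCloud(x, y, epsilon=0.1)))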

7 months ago
[Link preview: "FastVLM: Efficient Vision Encoding for Vision Language Models"]

New Apple #ML Research Highlight: "FastVLM: Efficient Vision Encoding for Vision Language Models" machinelearning.apple.com/research/fas...
