
Matthew Muckley

@mattmucklm.bsky.social

Research Engineer, Meta Fundamental AI Research (FAIR). ML for compression, computer vision, medicine. https://mmuckley.github.io/

570 Followers  |  308 Following  |  19 Posts  |  Joined: 18.11.2024

Latest posts by mattmucklm.bsky.social on Bluesky

Very strong results on SSv2 and action anticipation, plus zero-shot robotics planning! And we also attached an LLM to the vision encoder and got strong numbers on PerceptionTest!

Check out the blog post (with links to the paper and GitHub) above!

11.06.2025 21:43 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Very excited to share V-JEPA 2! I've been working on the encoder pretraining pipeline and data curation for this model the last few months, and am excited for it to finally be out!

ai.meta.com/blog/v-jepa-...

11.06.2025 21:43 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
GitHub - facebookresearch/Qinco: Residual Quantization with Implicit Neural Codebooks

If you'd like to try yourself, the code has been added to our GitHub repository!

07.01.2025 14:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Qinco2 builds on Qinco with several optimizations, including beam search (increases accuracy at the cost of compute) and pre-selection (decreases compute). On balance this leads to a more efficient method for similarity search.
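To make the beam-search part concrete, here's a minimal sketch over a plain residual quantizer with fixed codebooks (a hypothetical helper for illustration, not the Qinco2 code). With `beam_size=1` it reduces to ordinary greedy residual quantization.

```python
import torch

def rq_beam_search(x, codebooks, beam_size=4):
    """Beam-search encoding for a plain residual quantizer.

    x: (d,) vector to encode.
    codebooks: list of (K, d) tensors, one per quantization step.
    Rather than greedily taking the best codeword per step, keep the
    beam_size partial encodings with the lowest reconstruction error.
    """
    beams = [(torch.zeros_like(x), [])]  # (reconstruction so far, code indices)
    for C in codebooks:
        candidates = []
        for recon, codes in beams:
            # Squared error of every codeword against the current residual
            errs = ((x - recon).unsqueeze(0) - C).pow(2).sum(dim=1)  # (K,)
            top = torch.topk(-errs, k=min(beam_size, C.shape[0]))
            for negerr, k in zip(top.values, top.indices):
                candidates.append((-negerr.item(), recon + C[k], codes + [k.item()]))
        # Keep the beam_size best partial encodings across all beams
        candidates.sort(key=lambda c: c[0])
        beams = [(r, idxs) for _, r, idxs in candidates[:beam_size]]
    _, best_codes = beams[0]
    return best_codes
```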

07.01.2025 14:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The Qinco2 architecture builds on our previous Qinco work, which uses a neural network to implicitly parametrize codebooks for residual quantization. At each quantization step, a neural network is used in conjunction with the current vector to predict the next update.
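In code, the idea looks roughly like this minimal PyTorch sketch (module and function names are made up for illustration; see the Qinco repo for the real implementation):

```python
import torch
import torch.nn as nn

class ImplicitCodebookStep(nn.Module):
    """One residual-quantization step with an implicit neural codebook.

    Instead of a fixed codebook, a small network adjusts each base
    codeword conditioned on the current partial reconstruction, so the
    effective codebook depends on the vector being encoded.
    """

    def __init__(self, dim, num_codes, hidden=256):
        super().__init__()
        self.base = nn.Parameter(torch.randn(num_codes, dim) * 0.1)
        self.adjust = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x_hat):
        # x_hat: (batch, dim) reconstruction so far
        b, k = x_hat.shape[0], self.base.shape[0]
        ctx = x_hat.unsqueeze(1).expand(b, k, -1)       # (b, k, dim)
        base = self.base.unsqueeze(0).expand(b, k, -1)  # (b, k, dim)
        # Data-dependent codebook: base codewords plus a predicted update
        return base + self.adjust(torch.cat([base, ctx], dim=-1))

def encode_step(x, x_hat, step):
    """Pick the adjusted codeword closest to the current residual."""
    codebook = step(x_hat)                                 # (b, k, dim)
    residual = (x - x_hat).unsqueeze(1)                    # (b, 1, dim)
    idx = (residual - codebook).pow(2).sum(-1).argmin(-1)  # (b,)
    update = codebook[torch.arange(x.shape[0]), idx]
    return idx, x_hat + update
```

Stacking several such steps, each quantizing the residual left by the previous one, and training end-to-end on reconstruction error gives codebooks that adapt to each input vector.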

07.01.2025 14:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Qinco2: Vector Compression and Search with Improved Implicit Neural Codebooks
Vector quantization is a fundamental technique for compression and large-scale nearest neighbor search. For high-accuracy operating points, multi-codebook quantization associates data vectors with one...

We just published "Qinco2: Vector Compression and Search with Improved Implicit Neural Codebooks" on arXiv, work led by our talented intern, Theophane Vallaeys.

Qinco2 achieves up to a 40-60% reduction in error for vector compression, as well as better performance for approximate similarity search.

07.01.2025 14:46 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Yes exactly.

Depending on how much you mutate it, keeping such libraries can also be very useful for reproducibility (which I didn't mention above).

14.12.2024 17:44 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

But this can be difficult for other people who are trying to do something that doesn't fit in your framework (which happens often in research). There are always one or two things that simply don't fit. As a result, I'm finding myself writing more hacky/prototyping code these days.

13.12.2024 15:26 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
GitHub - facebookresearch/NeuralCompression: A collection of tools for neural compression enthusiasts.

In my PhD much of my code was hacky, and I think this set me back quite a bit. At some point I overcorrected towards building complex frameworks for my work, which let me try a lot of things (so long as I stayed within my own framework). This is more or less what you see in NeuralCompression.

13.12.2024 15:26 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

One thing I've found in research is the constant balancing act between "prototype" code and "engineered" code.

Prototyped code is often a bit hacky, but gets the job done. But if you ever need to extend it, it can be quite a pain.

Engineered code usually has some overarching design philosophy...

13.12.2024 15:26 β€” πŸ‘ 8    πŸ” 1    πŸ’¬ 2    πŸ“Œ 0

Is there a particular reason this is considered an anti-pattern? I'm actually curious.

04.12.2024 17:12 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Release v1.5.2: Fix required numpy version · mmuckley/torchkbnufft
What's Changed: Update required numpy by @mmuckley in #103. Full Changelog: v1.5.1...v1.5.2

For MRI folks: we just rolled out a new release of torchkbnufft, the first in a couple of years.

The changes are for working with newer package versions. Things now work on numpy 2.0, and a few deprecations are fixed. Other than that, it's the same as before :). Get it with

`pip install torchkbnufft`
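If you haven't used the package before, a minimal forward/adjoint NUFFT call looks roughly like this (adapted from the project README; check the repo for the current interface):

```python
import numpy as np
import torch
import torchkbnufft as tkbn

# A toy complex-valued image: (batch, coil, height, width)
image = torch.randn(1, 1, 256, 256, dtype=torch.complex64)

# A 2D k-space trajectory in radians/voxel, shape (2, klength)
klength = 64
ktraj = torch.stack(
    (torch.zeros(klength), torch.linspace(-np.pi, np.pi, klength))
)

# Forward NUFFT: image -> non-Cartesian k-space samples
nufft_ob = tkbn.KbNufft(im_size=(256, 256))
kdata = nufft_ob(image, ktraj)

# Adjoint NUFFT: k-space samples -> image
adjnufft_ob = tkbn.KbNufftAdjoint(im_size=(256, 256))
recon = adjnufft_ob(kdata, ktraj)
```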

04.12.2024 15:26 β€” πŸ‘ 6    πŸ” 0    πŸ’¬ 0    πŸ“Œ 1

I actually think there was quite a bit of spam here a month or two ago and it's already gotten better. Not sure if that's due to an effort on the part of the site admins or just basic engagement numbers shifting what gets into a feed.

24.11.2024 21:53 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Good thoughts, some of which I've learned from trial and error over the years.

The advice about centering things on technical points is also useful for academic publishing and the review process. It really helps defuse what tends to be an adversarial relationship with reviewers (or authors).

23.11.2024 15:46 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ“£ I am sure we have reached only a small fraction of New York's ML community in bsky. Please repost πŸ” this if you think you may have interested people close to you in the social graph.

22.11.2024 14:14 β€” πŸ‘ 18    πŸ” 6    πŸ’¬ 2    πŸ“Œ 0

Please make machine learners who are still children.

20.11.2024 13:56 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ‘‹

19.11.2024 19:52 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I'm here!

19.11.2024 17:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Kinda has the Y2K gaming energy (but without the CRT monitor)

19.11.2024 13:27 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

HELLO Hello hello hellooo...

18.11.2024 16:01 β€” πŸ‘ 7    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0
