Our CVPR panel at the "What is Next in Multimodal Foundation Models?" workshop kicks off soon!
11:30 AM, R207 A–D (Level 2)
Don't miss an amazing discussion with: Ludwig Schmidt, @andrewowens.bsky.social, Arsha Nagrani, and Ani Kembhavi 🔥
@cvprconference.bsky.social
sites.google.com/view/mmfm3rd...
We found that 4D representations maintain a shared geometric structure between the point and robot state representations, up to a linear transformation, which enables efficient transfer learning from human video data to low-level robotic control.
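Not from the paper — just a minimal sketch of what "shared structure up to a linear transformation" means in practice: fit a least-squares linear map from (synthetic, stand-in) point features to robot states and check the held-out fit. All names here are placeholders.

```python
import numpy as np

def fit_linear_map(X, Y):
    """Least-squares W, b such that X @ W + b ~= Y."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    Wb, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    return Wb[:-1], Wb[-1]

rng = np.random.default_rng(0)
point_feats = rng.normal(size=(1000, 128))      # stand-in for 4D point features
true_W = rng.normal(size=(128, 7))              # stand-in 7-DoF robot state map
robot_states = point_feats @ true_W + 0.01 * rng.normal(size=(1000, 7))

W, b = fit_linear_map(point_feats[:800], robot_states[:800])
resid = robot_states[800:] - (point_feats[800:] @ W + b)
r2 = 1 - resid.var() / robot_states[800:].var()
print(f"held-out R^2: {r2:.3f}")  # close to 1.0 => a linear map suffices
```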
For example, VLAs use language decoders, which are pretrained on tasks like visual question answering and image captioning.
This presents a discrepancy between the models' high-level pre-training objective and the need for robotic models to predict low-level actions.
Pretraining has significantly contributed to the recent success of foundation models. However, in robotics, progress has been limited by a lack of robotic annotations and by representations that do not accurately model the physical world.
Our paper: arxiv.org/pdf/2502.13142.
Our project page and code will be released soon!
Team: w/ Dantong Niu, Yuvan Sharma, Haoru Xue, Giscard Biamby, Junyi Zhang, Ziteng Ji, and Trevor Darrell.
What happens when vision 🤝 robotics meet? 🚨 Happy to share our new work on Pretraining Robotic Foundation Models! 🔥
ARM4R is an Autoregressive Robotic Model that leverages low-level 4D Representations learned from human video data to yield a better robotic model.
BerkeleyAI
The best friend of Autoregressive Robotic Models is 4D representations... ❤️
Wow! This image is so horrible and beautiful at the same time.
I wouldn't recommend deleting your old accounts on X and Facebook, as this social network is still in beta.
The Star of David on the Christmas tree is quite hilarious :)
Our workshop "What is Next in Multimodal Foundation Models?" has been accepted to #CVPR for the 3rd time!
We are cooking amazing talks and an excellent panel for you, so stay tuned!
@cvprconference.bsky.social
For all our @neuripsconf.bsky.social friends, our work is presented NOW at POSTER #3701.
Come hear us talk about our work on many-shot in-context learning and test-time scaling by leveraging activations! You won't be disappointed
#Multimodal-InContextLearning #NeurIPS
Oh no, I have a NeurIPS @neuripsconf.bsky.social FOMO
Or is it actually more of a Taylor Swift FOMO? 💫
This fantastic work was done by the outstanding students Brandon Huang, Chancharik Mitra, and Tianning Chai, together with Zhiqiu Lin, Assaf Arbelle, Rogerio Feris, and Leonid Karlinsky.
I also want to give special thanks to the amazing Trevor Darrell and Deva Ramanan for their invaluable guidance.
Key takeaways:
(1) Utilizing truly multimodal features (like those found in generative architectures)
(2) Demonstrating how generative LMMs can be used for discriminative VL tasks
(3) Concentrating all the task-relevant information in a small, task-specific set of heads, which is very convenient across different VL tasks
We evaluated several different task types, including safety, visual question answering (VQA), and classification benchmarks.
The results suggest that SAVs are particularly useful in the low-data regime, even compared to LoRA (when there are not enough samples to fine-tune the model).
What did we do? ->
We propose an algorithm for finding small sets of attention heads (~20!) as multimodal features in Generative LMMs that can be used for discriminative VL tasks, outperforming encoder-only architectures (CLIP, SigLIP) without training.
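Not the paper's code — a minimal sketch, with hypothetical array names, of the head-selection idea: score every attention head by how well a nearest-class-mean classifier on its activations separates a few labeled examples, keep the top ~20, and classify queries with those heads only. The actual SAV scoring and selection may differ.

```python
import numpy as np

def select_heads(head_acts, labels, k=20):
    """head_acts: [n, n_heads, dim] activations extracted from a generative LMM."""
    n, h, d = head_acts.shape
    classes = np.unique(labels)
    scores = np.zeros(h)
    for i in range(h):
        X = head_acts[:, i, :]
        means = np.stack([X[labels == c].mean(axis=0) for c in classes])
        pred = classes[np.argmin(
            np.linalg.norm(X[:, None, :] - means[None], axis=-1), axis=1)]
        scores[i] = (pred == labels).mean()   # per-head class separability
    return np.argsort(scores)[-k:]            # indices of the top-k heads

def classify(head_acts, labels, query_acts, heads):
    """Nearest class mean over the selected heads' concatenated activations."""
    classes = np.unique(labels)
    X = head_acts[:, heads, :].reshape(len(labels), -1)
    Q = query_acts[:, heads, :].reshape(len(query_acts), -1)
    means = np.stack([X[labels == c].mean(axis=0) for c in classes])
    return classes[np.argmin(
        np.linalg.norm(Q[:, None, :] - means[None], axis=-1), axis=1)]
```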
Motivation:
On the one hand, encoder-only architectures are great for discriminative VL tasks but lack truly joint multimodal features.
On the other hand, decoder-only architectures have a joint multimodal representation but are not naturally suited for discriminative tasks.
Can we enjoy the best of both worlds? The answer is YES!
🚨 Excited to share for the first time our work here on 🦋: "Sparse Attention Vectors (SAVs)" 🥳
We showed that, when done properly, generative multimodal features can serve as discriminative vision-language classifiers.
A really fun & enjoyable collab w/ @CMU, @BAIR, and @MIT-IBM Lab
arxiv.org/abs/2410.12782
I think for two main reasons. First, ICL is an emergent property of LLMs/VLMs, not something they were originally pre-trained to do. Second, the VLMs that suffer from poor ICL are usually those that were instruction-tuned, while most pretrained (i.e., purely generative) VLMs should still have it.
I took my two kids there last April, and I was amazed at how much they could climb even at a young age!
Also, I highly recommend visiting Northern California (Mendocino, Fort Bragg, etc.) at this time of year!
My hot take: it is essential to have many-shot capabilities. In our NeurIPS work arxiv.org/abs/2406.15334, we showed how to use Multimodal Task Vectors for many-shot ICL (rough sketch below).
But, I'm not sure it makes sense to pretrain for this. ICL is an emergent property, not a downstream task...
Anyway, nice work!
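Not the paper's code — just a minimal sketch of the general task-vector recipe under stated assumptions: cache the mean activation from many-shot demonstration prompts, then patch it back in at inference. `model` and `layer` are hypothetical placeholders for a PyTorch transformer and one of its modules; the actual MTV method selects specific attention heads rather than a whole layer.

```python
import torch

@torch.no_grad()
def mean_activation(model, layer, demo_batches):
    """Average `layer`'s last-token output over many-shot demo prompts."""
    acts = []
    hook = layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out[:, -1, :]))
    for batch in demo_batches:        # each batch: token ids for a demo prompt
        model(batch)
    hook.remove()
    return torch.cat(acts).mean(dim=0)

def patch_task_vector(layer, task_vec):
    """Add the cached task vector to the layer's last-token output."""
    def hook(mod, inp, out):
        out = out.clone()
        out[:, -1, :] += task_vec
        return out                    # forward hooks may return a replacement
    return layer.register_forward_hook(hook)  # call .remove() to undo
```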
You can access keynotes, papers, and spotlights on Robotic Learning for free! So cool! 🤖
youtube.com/watch?v=0joZ...
#Robotics #DeepLearning #CoRL2024
📣 Stony Brook University's Department of Computer Science invites applications for a tenure-track assistant professor position with an expected starting date of Fall 2025.
Link to the job post: careercenter.cra.org/job/assistan...
Wow! This is fantastic! Well deserved.
So far, my experience with this platform has shown that it is much better for research. I really love the research feed here!
For all the ML/AI researchers: are you still posting to both X and Bluesky at the same time? Is there a convenient way to do this?
Very useful 👍