1/n Introducing ReDi (Representation Diffusion): a new generative approach that leverages a diffusion model to jointly capture
✅ Low-level image details (via VAE latents)
✅ High-level semantic features (via DINOv2) 🧵
@sta8is.bsky.social
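To make the idea concrete, here is a minimal sketch of joint diffusion over the two representations. The module names, dimensions, and cosine noise schedule below are illustrative assumptions, not ReDi's actual code:

```python
# Minimal sketch of joint diffusion over VAE latents and DINOv2 features.
# All module names and shapes are illustrative, not the paper's code.
import torch
import torch.nn as nn

class JointDenoiser(nn.Module):
    """Toy denoiser over concatenated [VAE-latent | semantic] tokens."""
    def __init__(self, vae_dim=4, sem_dim=768, width=256):
        super().__init__()
        self.in_proj = nn.Linear(vae_dim + sem_dim, width)
        self.blocks = nn.Sequential(
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
        )
        self.out_proj = nn.Linear(width, vae_dim + sem_dim)
        self.time_emb = nn.Embedding(1000, width)

    def forward(self, z_t, t):
        h = self.in_proj(z_t) + self.time_emb(t)[:, None, :]
        return self.out_proj(self.blocks(h))

# z_vae: per-token VAE latents, z_sem: per-token DINOv2 features (L tokens each)
B, L = 8, 256
z_vae = torch.randn(B, L, 4)
z_sem = torch.randn(B, L, 768)
z0 = torch.cat([z_vae, z_sem], dim=-1)           # joint representation

# standard DDPM-style noising: z_t = sqrt(a_bar)*z0 + sqrt(1-a_bar)*eps
t = torch.randint(0, 1000, (B,))
a_bar = torch.cos(t.float() / 1000 * torch.pi / 2)[:, None, None] ** 2
eps = torch.randn_like(z0)
z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * eps

model = JointDenoiser()
loss = ((model(z_t, t) - eps) ** 2).mean()       # epsilon-prediction objective
loss.backward()
```

The point is only that a single denoiser sees both representations at once, so low-level and semantic information are modeled jointly rather than by two separate diffusion models.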
Check out our paper at arxiv.org/abs/2501.08303 and 🖥️ code at github.com/Sta8is/FUTUR... to learn more about FUTURIST and its applications in autonomous systems! (9/n)
Joint work with @ikakogeorgiou.bsky.social, @spyrosgidaris.bsky.social and Nikos Komodakis
The architecture demonstrates significant performance improvements with extended training, indicating substantial potential for future enhancements (8/n)
💡 Our multimodal approach significantly outperforms single-modality variants, demonstrating the power of learning cross-modal relationships (7/n)
Results are impressive! We achieve state-of-the-art performance in future semantic segmentation on Cityscapes, with strong improvements in both short-term (0.18s) and mid-term (0.54s) predictions (6/n)
Key innovation #3: We developed a novel multimodal masked visual modeling objective specifically designed for future prediction tasks (5/n)
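A rough sketch of what such a masked objective can look like: past-frame tokens stay visible, future-frame tokens are replaced by a learned mask embedding, and the loss covers only the masked slots. The tokenization, classification loss, and shapes here are my own illustrative choices, not necessarily the paper's:

```python
# Hedged sketch of a masked-modeling objective for future prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, T, N, D, K = 2, 4, 64, 256, 19               # K = segmentation classes
tokens = torch.randn(B, T * N, D)               # embedded past+future tokens
labels = torch.randint(0, K, (B, T * N))        # per-token class ids

mask = torch.zeros(B, T * N, dtype=torch.bool)
mask[:, -N:] = True                             # hide the future frame
mask_emb = nn.Parameter(torch.zeros(1, 1, D))
x_in = torch.where(mask[..., None], mask_emb.expand_as(tokens), tokens)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True),
    num_layers=2,
)
logits = nn.Linear(D, K)(encoder(x_in))
# supervise only the masked (future) positions
loss = F.cross_entropy(logits[mask], labels[mask])
```

A continuous modality like depth would use a regression loss on the same masked positions instead of cross-entropy.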
Key innovation #2: Our model features an efficient cross-modality fusion mechanism that improves predictions by learning synergies between different modalities (segmentation + depth) (4/n)
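One plausible way to realize such fusion, shown purely for illustration (not the paper's exact mechanism), is shared self-attention over modality-tagged tokens:

```python
# Illustrative cross-modality fusion: each modality gets a learned
# modality embedding, then a shared self-attention layer mixes them.
import torch
import torch.nn as nn

B, N, D = 4, 64, 256
seg, dep = torch.randn(B, N, D), torch.randn(B, N, D)

modality_emb = nn.Embedding(2, D)
seg = seg + modality_emb.weight[0]               # tag segmentation tokens
dep = dep + modality_emb.weight[1]               # tag depth tokens

fused_in = torch.cat([seg, dep], dim=1)          # (B, 2N, D)
attn = nn.MultiheadAttention(D, num_heads=8, batch_first=True)
fused, _ = attn(fused_in, fused_in, fused_in)    # every token sees both modalities
```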
🎯 Key innovation #1: We introduce a VAE-free hierarchical tokenization process integrated directly into our transformer. This simplifies training, reduces computational overhead, and enables true end-to-end optimization (3/n)
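As a hedged sketch of what VAE-free hierarchical tokenization could mean in practice, one can patchify the raw modality map at several scales with strided convolutions inside the model itself, with no pretrained VAE in the loop. The scales and dimensions below are my own assumptions:

```python
# Sketch of a VAE-free hierarchical tokenizer (my reading, not the
# official code): patchify at two scales and concatenate the streams.
import torch
import torch.nn as nn

class HierarchicalTokenizer(nn.Module):
    def __init__(self, in_ch=1, dim=256):
        super().__init__()
        self.fine = nn.Conv2d(in_ch, dim, kernel_size=8, stride=8)
        self.coarse = nn.Conv2d(in_ch, dim, kernel_size=16, stride=16)

    def forward(self, x):                               # x: (B, C, H, W), e.g. a depth map
        f = self.fine(x).flatten(2).transpose(1, 2)     # (B, HW/64,  dim)
        c = self.coarse(x).flatten(2).transpose(1, 2)   # (B, HW/256, dim)
        return torch.cat([c, f], dim=1)                 # coarse-to-fine token sequence

tokens = HierarchicalTokenizer()(torch.randn(2, 1, 128, 128))
print(tokens.shape)                                     # torch.Size([2, 320, 256])
```

Because the tokenizer is just layers of the network, gradients flow from the prediction loss all the way to tokenization, which is what makes end-to-end optimization possible.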
FUTURIST employs a multimodal visual sequence transformer to directly predict multiple future semantic modalities. We focus on two key modalities: semantic segmentation and depth estimation, critical capabilities for autonomous systems operating in dynamic environments (2/n)
🧵 Excited to share our latest work: FUTURIST, a unified transformer architecture for multimodal semantic future prediction, accepted to #CVPR2025! Here's how it works (1/n)
Links to the arXiv and GitHub below
1/n If you're working on generative image modeling, check out our latest work! We introduce EQ-VAE, a simple yet powerful regularization approach that makes latent representations equivariant to spatial transformations, leading to smoother latents and better generative models.
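A minimal sketch of the regularization idea, assuming a 90-degree rotation as the spatial transform and a toy autoencoder standing in for the real VAE; the loss weight is arbitrary:

```python
# EQ-VAE-style regularizer (my paraphrase of the idea): decoding a
# spatially transformed latent should match the same transform
# applied to the input image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):                       # stand-in autoencoder
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 4, 8, stride=8)
        self.dec = nn.ConvTranspose2d(4, 3, 8, stride=8)
    def encode(self, x): return self.enc(x)
    def decode(self, z): return self.dec(z)

def rot90(t):                                   # example spatial transform
    return torch.rot90(t, k=1, dims=(2, 3))

vae = TinyVAE()
x = torch.randn(2, 3, 64, 64)
z = vae.encode(x)

recon_loss = F.mse_loss(vae.decode(z), x)
# equivariance term: decode(T(z)) should equal T(x)
eq_loss = F.mse_loss(vae.decode(rot90(z)), rot90(x))
loss = recon_loss + 0.5 * eq_loss               # 0.5 is an arbitrary weight
loss.backward()
```

The extra term costs only one more decode per transform, and it pushes the latent space to respect the geometry of the image space.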
8/n 💡 Our work shows that by leveraging the semantic power of VFMs, we create more efficient and effective future prediction systems.
Paper: arxiv.org/abs/2412.11673
🖥️ Code available at: github.com/Sta8is/DINO-...
Joint work with @ikakogeorgiou.bsky.social, @spyrosgidaris.bsky.social, N. Komodakis
7/n 🔬 Interesting discovery: The intermediate features from our transformer can actually enhance the already-strong VFM features, suggesting potential for self-supervised learning.
6/n And it works amazingly well! We achieve state-of-the-art results in semantic segmentation forecasting, with strong performance across multiple tasks using a single feature prediction model.
5/n 🎨 The beauty of our method? It's completely modular: different task-specific heads (segmentation, depth estimation, surface normals) can be plugged in without retraining the core model.
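For illustration, here is what that modular-head pattern looks like; the head architectures, class count, and dimensions are hypothetical:

```python
# Sketch of the modular-head idea: the feature predictor is frozen and
# task heads consume its predicted VFM features.
import torch
import torch.nn as nn

predicted_feats = torch.randn(2, 1024, 768)     # (B, tokens, VFM dim)

seg_head = nn.Linear(768, 19)                   # e.g. 19 Cityscapes classes
depth_head = nn.Sequential(nn.Linear(768, 256), nn.GELU(), nn.Linear(256, 1))

seg_logits = seg_head(predicted_feats)          # (B, tokens, 19)
depth = depth_head(predicted_feats)             # (B, tokens, 1)
# Swapping in a surface-normals head requires no change to the predictor.
```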
4/n Our approach: We train a masked feature transformer to predict how VFM features change over time. These predicted features can then be used for various scene understanding tasks!
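A hedged sketch of this training setup, assuming frozen, precomputed VFM features and a simple regression loss on masked future tokens (shapes and layer counts are illustrative):

```python
# Masked feature transformer over frozen VFM features: past frames are
# visible, the future frame's features are masked and regressed.
import torch
import torch.nn as nn

B, T, N, D = 2, 4, 196, 768                     # 3 context frames + 1 future
feats = torch.randn(B, T, N, D)                 # precomputed frozen VFM features
x = feats.reshape(B, T * N, D)

mask = torch.zeros(B, T * N, dtype=torch.bool)
mask[:, -N:] = True                             # hide the future frame
mask_emb = nn.Parameter(torch.zeros(1, 1, D))
x_in = torch.where(mask[..., None], mask_emb.expand_as(x), x)

predictor = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=12, batch_first=True),
    num_layers=2,
)
pred = predictor(x_in)
loss = ((pred - x)[mask] ** 2).mean()           # supervise future tokens only
```

Since the targets are features rather than pixels, no capacity is spent on texture or lighting details that downstream tasks ignore.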
3/n 🧩 Why is this important? Most existing approaches focus on pixel-level prediction, which wastes computation on irrelevant visual details. We focus directly on meaningful semantic features!
2/n 🎯 Our key insight: Instead of predicting future RGB frames directly, we can forecast how semantic features from Vision Foundation Models (VFMs) evolve over time.
1/n Excited to share our latest work: DINO-Foresight, a new framework for predicting the future states of scenes using Vision Foundation Model features!
Links to the arXiv and GitHub below