
Guillaume Astruc

@gastruc.bsky.social

2nd-year PhD student at Imagine-ENPC/IGN/CNES, working on self-supervised cross-modal geospatial learning. Personal webpage: https://gastruc.github.io/

347 Followers  |  79 Following  |  10 Posts  |  Joined: 20.11.2024

Latest posts by gastruc.bsky.social on Bluesky

Super interesting to see pure SSL outperform text alignment on a highly competitive task that would seem to favor text-aligned models 🤯

18.08.2025 15:44 — 👍 2    🔁 0    💬 0    📌 0

πŸ›°οΈ At #CVPR2025 presenting "AnySat: An Earth Observation Model for Any Resolutions, Scales, and Modalities" - Saturday afternoon, Poster 355!
If you're here and want to discuss geolocation or geospatial foundation models, let's connect!

11.06.2025 21:08 β€” πŸ‘ 13    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping The growing availability of high-quality Earth Observation (EO) data enables accurate global land cover and crop type monitoring. However, the volume and heterogeneity of these datasets pose major pro...

📢 FLAIR-HUB dataset
A new large-scale, multimodal dataset for land cover and crop type mapping
🤗 Dataset: huggingface.co/datasets/IGN...
📄 Preprint: arxiv.org/abs/2506.07080
🤗 Pretrained models: huggingface.co/collections/...
💻 Code: github.com/IGNF/FLAIR-HUB

11.06.2025 14:00 — 👍 18    🔁 8    💬 1    📌 0

I will be presenting our work on the detection of archaeological looting with satellite image time series at CVPR 2025 EarthVision workshop tomorrow!

Honored and grateful that this paper received the best student paper award!

11.06.2025 04:03 — 👍 15    🔁 6    💬 1    📌 0
When majority rules, minority loses: bias amplification of gradient descent Despite growing empirical evidence of bias amplification in machine learning, its theoretical foundations remain poorly understood. We develop a formal framework for majority-minority learning tasks, ...

📢 New preprint!
“When majority rules, minority loses: bias amplification of gradient descent”

We often blame biased data, but training itself also amplifies bias. Our paper explores how ML algorithms favor stereotypes at the expense of minority groups.

➡️ arxiv.org/abs/2505.13122

(1/3)

23.05.2025 16:48 — 👍 3    🔁 2    💬 1    📌 0

We've added new experiments demonstrating robust generalization capabilities! Notably, AnySat shows strong performance on HLS Burn Scars - a sensor never seen during pretraining! 🔥🛰️
Check it out:
📄 Paper: arxiv.org/abs/2412.14123
🌐 Project: gastruc.github.io/anysat

30.04.2025 14:00 — 👍 9    🔁 3    💬 0    📌 0

Looking forward to #CVPR2025! We will present the following papers:

30.04.2025 13:04 — 👍 28    🔁 7    💬 1    📌 1
The Change You Want To Detect: Semantic Change Detection In Earth Observation With Hybrid Data Generation Bi-temporal change detection at scale based on Very High Resolution (VHR) images is crucial for Earth monitoring. This remains poorly addressed so far: methods either require large volumes of annotate...

Introducing HySCDG at #CVPR2025: a generative pipeline that uses Stable Diffusion and ControlNet to build a large hybrid semantic change detection dataset for Earth Observation! 🗺️🛩️

📄 Paper: arxiv.org/abs/2503.15683

28.04.2025 16:48 — 👍 15    🔁 5    💬 1    📌 0

💻We've released the code for our #CVPR2025 paper MAtCha!

🍵MAtCha reconstructs sharp, accurate, and scalable meshes of both foreground AND background from just a few unposed images (e.g. 3 to 10 images)...

...while also working with dense-view datasets (hundreds of images)!

03.04.2025 10:33 — 👍 40    🔁 16    💬 4    📌 1

🔥🔥🔥 CV folks, I have some news! We're organizing a one-day meeting in central Paris on June 6th, before CVPR, called CVPR@Paris (similar to NeurIPS@Paris) 🥐🍾🥖🍷

Registration is open (it's free), with priority given to authors of accepted papers: cvprinparis.github.io/CVPR2025InPa...

Big 🧵👇 with details!

21.03.2025 06:43 — 👍 136    🔁 51    💬 8    📌 10

Starter pack including some of the lab members: go.bsky.app/QK8j87w

14.03.2025 10:34 — 👍 24    🔁 11    💬 0    📌 1

🧩 Excited to share our paper "RUBIK: A Structured Benchmark for Image Matching across Geometric Challenges" (arxiv.org/abs/2502.19955) accepted to #CVPR2025! We created a benchmark that systematically evaluates image matching methods across well-defined geometric difficulty levels. 🔍

28.02.2025 15:23 — 👍 19    🔁 7    💬 2    📌 0

Weights for CAD are finally available. It's one of the smallest diffusion models around, achieving performance close to SD and PixArt with a Perceiver-like architecture.
We leverage our coherence-aware training to improve textual understanding.

20.02.2025 12:14 — 👍 11    🔁 3    💬 0    📌 0

🔗 Check it out:
📜 Paper: arxiv.org/abs/2412.14123
🌐 Project: gastruc.github.io/anysat
🤗 HuggingFace: huggingface.co/g-astruc/Any...
🐙 GitHub: github.com/gastruc/AnySat

19.12.2024 10:46 — 👍 5    🔁 0    💬 0    📌 0

🚀 Even better: AnySat supports linear probing for semantic segmentation!
That means you can fine-tune just a few thousand parameters and achieve SOTA results on challenging tasks, all with minimal effort.

19.12.2024 10:46 — 👍 3    🔁 0    💬 1    📌 0

AnySat achieves SOTA performance on 6 tasks across 10 datasets:
🌱 Land cover mapping
🌾 Crop type segmentation
🌳 Tree species classification
🌊 Flood detection
🌍 Change detection

19.12.2024 10:46 — 👍 2    🔁 0    💬 1    📌 0

We trained AnySat on 5 multimodal datasets simultaneously:
📡 11 distinct sensors
📏 Resolutions: 0.2m–500m
🔁 Revisit: single date to weekly
🏞️ Scales: 0.3–150 hectares

The pretrained model can adapt to truly diverse data, and probably yours too!

19.12.2024 10:46 — 👍 2    🔁 0    💬 1    📌 0

πŸ”Thanks to our modified JEPA training scheme and scale-adaptive spatial encoders, AnySat trains on datasets with diverse scales, resolutions, and modalities!
🧠 75% of its parameters are shared across all inputs, enabling unmatched flexibility.

19.12.2024 10:46 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🤔 What if embedding multimodal EO data was as easy as using a ResNet on images?
Introducing AnySat: one model for any resolution (0.2m–250m), scale (0.3–2600 hectares), and modalities (choose from 11 sensors & time series)!
Try it with just a few lines of code:

19.12.2024 10:46 — 👍 35    🔁 10    💬 2    📌 2

Guillaume Astruc, Nicolas Gonthier, Clement Mallet, Loic Landrieu
AnySat: An Earth Observation Model for Any Resolutions, Scales, and Modalities
https://arxiv.org/abs/2412.14123

19.12.2024 06:45 — 👍 6    🔁 3    💬 0    📌 0

⚠️Reconstructing sharp 3D meshes from a few unposed images is a hard and ambiguous problem.

β˜‘οΈWith MAtCha, we leverage a pretrained depth model to recover sharp meshes from sparse views including both foreground and background, within mins!🧡

🌐Webpage: anttwo.github.io/matcha/

11.12.2024 14:59 — 👍 38    🔁 11    💬 4    📌 1

🌍 Guessing where an image was taken is a hard and often ambiguous problem. Introducing diffusion-based geolocation: we predict global locations by refining random guesses into trajectories across the Earth's surface!

🗺️ Paper, code, and demo: nicolas-dufour.github.io/plonk

10.12.2024 15:56 — 👍 96    🔁 32    💬 8    📌 5

Hi, I am a PhD student from @imagineenpc.bsky.social. Could you also add us both please?

25.11.2024 15:55 — 👍 4    🔁 0    💬 1    📌 0
