
Simone Azeglio @NeurIPS

@s-azeglio.bsky.social

From Physics to Vision Neuroscience & AI | PhD Candidate @InstVisionParis & @ENS_ULM | Enjoyed my time @FlatironCCN, @CERN | co-organizer @neurreps

986 Followers  |  483 Following  |  26 Posts  |  Joined: 15.11.2024

Posts by Simone Azeglio @NeurIPS (@s-azeglio.bsky.social)

NeurIPS Poster: Convolution Goes Higher-Order: A Biologically Inspired Mechanism Empowers Image Classification (NeurIPS 2025)

Link to Poster & Paper --> neurips.cc/virtual/2025...

01.12.2025 13:23 — 👍 2    🔁 0    💬 0    📌 0
Preview
Higher-Order Convolution Improves Neural Predictivity in the Retina We present a novel approach to neural response prediction that incorporates higher-order operations directly within convolutional neural networks (CNNs). Our model extends traditional 3D CNNs by embed...

🎤 Come see the poster at #NeurIPS2025: 📍 Wed Dec 3, 4:30-7:30 PM PST 📍 Exhibit Hall C,D,E #5000

This started as exploring how to predict neuronal responses to videos! First preprint here:
arxiv.org/abs/2505.07620 (soon to be updated)

Let's chat about bio-inspired vision! 🚀

01.12.2025 13:23 — 👍 8    🔁 1    💬 1    📌 0
Post image

Using Representational Similarity Analysis, we found HoCNNs learn fundamentally different geometries from standard CNNs.
Their representations are more dispersed in high-dimensional space → better class separation and more discriminative features!
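As a hedged illustration of the RSA comparison described above (toy random data, not the paper's networks or stimuli): build each model's representational dissimilarity matrix (RDM) from its activations, then correlate the RDMs' upper triangles.

```python
import numpy as np

def rdm(acts):
    """Representational dissimilarity matrix: 1 - correlation
    between activation patterns for each pair of stimuli."""
    return 1.0 - np.corrcoef(acts)  # acts: (n_stimuli, n_features)

rng = np.random.default_rng(0)
acts_a = rng.normal(size=(10, 50))                  # stand-in for model A activations
acts_b = acts_a + 0.1 * rng.normal(size=(10, 50))   # a slightly perturbed model B

iu = np.triu_indices(10, k=1)                       # compare upper triangles only
rsa_sim = np.corrcoef(rdm(acts_a)[iu], rdm(acts_b)[iu])[0, 1]
print(rsa_sim)  # close to 1: the two toy models share representational geometry
```

In the thread's setting, `acts_a`/`acts_b` would be HoCNN vs. standard-CNN activations on the same stimulus set.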

01.12.2025 13:23 — 👍 1    🔁 0    💬 1    📌 0
Post image

We then systematically perturbed images with controlled higher-order statistics.
HoCNNs showed increased sensitivity to these perturbations, confirming they genuinely rely on higher-order correlations. Yet they are more robust to common corruptions (CIFAR-10-C).
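One standard way to manipulate higher-order statistics while leaving second-order statistics untouched is Fourier phase scrambling; a minimal sketch of that control (illustrative only, the paper's exact perturbation procedure may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))               # stand-in for a grayscale image

F = np.fft.fft2(img)
Fn = np.fft.fft2(rng.normal(size=img.shape))  # Hermitian-symmetric random phases
scrambled = np.real(np.fft.ifft2(np.abs(F) * Fn / np.abs(Fn)))

# Amplitude spectrum (2nd-order statistics) is preserved...
print(np.allclose(np.abs(np.fft.fft2(scrambled)), np.abs(F)))  # True
# ...while the phase structure carrying higher-order correlations is randomized.
```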

01.12.2025 13:23 — 👍 1    🔁 0    💬 1    📌 0

We employ our higher-order convolutional layer in different network architectures, from vanilla CNNs to ResNet-18, and validate it across multiple datasets.

01.12.2025 13:23 — 👍 1    🔁 0    💬 1    📌 0
Post image

Key finding: optimal performance at 3rd/4th order!
This aligns remarkably well with Koenderink & van Doorn's analysis: natural images have ~63% quadratic, ~35% cubic, and ~2% quartic correlations. Our model learns to match the statistical structure of the natural world.

01.12.2025 13:23 — 👍 1    🔁 0    💬 2    📌 0
Post image

Our solution: higher-order convolutions!

We extend standard convolution to include learnable 2nd, 3rd, and 4th order terms with independent weights. Each order captures different aspects of visual structure, from edges (1st) to complex textures (3rd/4th).
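A minimal PyTorch sketch of the idea (my own illustrative layer, not the authors' code): treat each k×k patch as a vector p and learn an independent kernel per order, here up to 2nd order to keep it short.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HigherOrderConv2d(nn.Module):
    """Volterra-style convolution: y = W1·p + pᵀ W2 p for each patch p,
    with independent weights per order (sketch up to 2nd order; 3rd/4th
    order terms would add higher-rank tensors, usually factorized)."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.k = k
        d = in_ch * k * k                                         # flattened patch size
        self.w1 = nn.Linear(d, out_ch)                            # 1st-order term
        self.w2 = nn.Parameter(0.01 * torch.randn(out_ch, d, d))  # 2nd-order term

    def forward(self, x):
        b, _, h, w = x.shape
        p = F.unfold(x, self.k).transpose(1, 2)                   # (b, L, d) patches
        y = self.w1(p) + torch.einsum('bld,ode,ble->blo', p, self.w2, p)
        return y.transpose(1, 2).reshape(b, -1, h - self.k + 1, w - self.k + 1)

layer = HigherOrderConv2d(1, 4, 3)
out = layer(torch.randn(2, 1, 8, 8))
print(out.shape)  # torch.Size([2, 4, 6, 6])
```

The key design point is that `w2` is a free tensor, not tied to `w1`, which is exactly what the pointwise-nonlinearity expansion in the thread lacks.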

01.12.2025 13:23 — 👍 1    🔁 0    💬 1    📌 0
Post image

The issue: standard CNNs use pointwise nonlinearities with tied weights. When you expand σ(w₁x₁ + w₂x₂), the different orders share the same weights, limiting expressivity!
Natural images have rich higher-order correlations that basic convolutions struggle to capture.
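To see the weight tying concretely, expand a smooth pointwise nonlinearity around zero (standard Taylor series; notation mine):

```latex
\sigma\!\Big(\sum_i w_i x_i\Big)
  \approx \sigma(0)
  + \sigma'(0)\sum_i w_i x_i
  + \frac{\sigma''(0)}{2}\sum_{i,j} w_i w_j\, x_i x_j
  + \cdots
```

The quadratic term's coefficients are forced to be products w_i w_j of the first-order weights; a higher-order convolution instead learns a free tensor w^{(2)}_{ij} (and w^{(3)}_{ijk}, ...) for each order.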

01.12.2025 13:23 — 👍 1    🔁 0    💬 1    📌 0
Post image

🧠🔬 Excited to share our #NeurIPS2025 paper: "Convolution Goes Higher-Order"!

We asked: Can shallow networks be as expressive as deep ones? Inspired by biological vision, we introduce higher-order convolutions that capture complex image patterns standard CNNs miss.

🧵👇

01.12.2025 13:23 — 👍 23    🔁 6    💬 2    📌 0

This project was joint work with
@slneuro.bsky.social, building on deep intuitions and under the supervision of Matthew Chalk! 🙏

Come discuss at #NeurIPS2025! 🎪

📍 Fri Dec 5, 11 AM-2 PM PST
📍 Exhibit Hall C,D,E #2005

01.12.2025 13:11 — 👍 1    🔁 0    💬 0    📌 1
Post image

Scales to natural images too! 🐦🐱
Tested with realistic retinal ganglion cell models. Again, I_local peaks at high-contrast regions & object boundariesβ€”the regions contributing most to encoded information.
Shows the method works beyond toy examples!

01.12.2025 13:11 — 👍 1    🔁 0    💬 1    📌 0
Post image

Applied to visual neurons responding to MNIST digits: I_local(x) reveals information is concentrated along object EDGES, exactly where the decoded images are most sensitive to stimulus changes! 🎯
Compare with Fisher information, which just shows "blobs" at receptive field locations.

01.12.2025 13:11 — 👍 0    🔁 0    💬 1    📌 0
Post image

The magic: our decomposition can be efficiently computed using diffusion models!
Train conditional & unconditional models to predict X from noisy observations. This makes the method scalable to high-dimensional naturalistic stimuliβ€”something previous methods couldn't handle.
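For context, one classical bridge between mutual information and denoising is the I-MMSE relation (Guo, Shamai & Verdú), and estimators in this spirit subtract a conditional denoiser's error from an unconditional one's. A sketch in my own notation (the paper's exact estimator may differ):

```latex
I(X;R) \;=\; \tfrac{1}{2}\int_0^\infty
  \Big(\mathrm{mmse}\big(X \mid Y_\gamma\big)
     - \mathrm{mmse}\big(X \mid Y_\gamma, R\big)\Big)\, d\gamma,
\qquad Y_\gamma = \sqrt{\gamma}\,X + \varepsilon,\quad \varepsilon \sim \mathcal{N}(0, I)
```

A trained diffusion model is precisely an estimator of E[X | noisy observation], so the conditional and unconditional networks supply the two MMSE terms at every noise level.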

01.12.2025 13:11 — 👍 0    🔁 0    💬 1    📌 0
Post image

We derive a new decomposition "I_local(x)" that satisfies ALL axioms! 🎉

Key insight: integrate Fisher information over noise-corrupted stimuli. This generalizes Fisher info to finite perturbations while remaining a valid mutual information decomposition.

01.12.2025 13:11 — 👍 0    🔁 0    💬 1    📌 0
Post image

Our approach: introduce 4 core axioms any meaningful decomposition should satisfy:

✓ Completeness: recovers total mutual information
✓ Locality: local changes have local effects
✓ Positivity: information ≥ 0
✓ Additivity: combines measurements properly
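As a toy sanity check of what Completeness and Positivity mean, here is the classical specific-information decomposition on a small discrete channel (illustrative only; this is not the paper's I_local, which also satisfies Locality and Additivity):

```python
import numpy as np

# Toy channel: 3 stimuli, 2 responses.
px = np.array([0.5, 0.3, 0.2])                 # stimulus prior p(x)
pr_x = np.array([[0.9, 0.1],                   # likelihoods p(r|x)
                 [0.5, 0.5],
                 [0.2, 0.8]])
pr = px @ pr_x                                 # response marginal p(r)

# Specific information: i(x) = KL( p(r|x) || p(r) )
i_x = np.sum(pr_x * np.log2(pr_x / pr), axis=1)

# Total mutual information from the joint distribution
joint = px[:, None] * pr_x
I = np.sum(joint * np.log2(joint / (px[:, None] * pr[None, :])))

print(np.isclose(I, px @ i_x))   # Completeness: E_x[i(x)] = I(X;R) -> True
print(np.all(i_x >= 0))          # Positivity: each i(x) is a KL divergence -> True
```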

01.12.2025 13:11 — 👍 1    🔁 0    💬 1    📌 0
Post image

The challenge: decomposing mutual information I(R;X) into stimulus-specific contributions I(x) is fundamentally ill-posed: many solutions exist!
Previous methods (Fisher information, stimulus-specific information) violate key properties, making them hard to interpret as sensitivity measures.

01.12.2025 13:11 — 👍 0    🔁 0    💬 1    📌 0
Post image

🧠 How do neurons encode information? We know HOW MUCH, but what about WHAT information they encode?

Our new work uses diffusion models to decompose neural information down to individual stimuli & features!

🎯 Spotlight at #NeurIPS2025 🌟📄

arxiv.org/abs/2505.11309

01.12.2025 13:11 — 👍 13    🔁 5    💬 2    📌 1
Preview
Nonlinear spatial integration allows the retina to detect the sign of defocus in natural scenes The retina can easily detect whether the eye is too small or too big thanks to the imperfections of the eye optics.

Happy to share my first work with a connection to myopia, a collaboration with EssilorLuxottica
www.science.org/doi/10.1126/...

25.10.2025 21:07 — 👍 22    🔁 6    💬 1    📌 1
Post image

πŸ•³οΈπŸ‡Into the Rabbit Hull – Part II

Continuing our interpretation of DINOv2, the second part of our study concerns the *geometry of concepts* and the synthesis of our findings toward a new representational *phenomenology*:

the Minkowski Representation Hypothesis

15.10.2025 17:13 — 👍 33    🔁 9    💬 2    📌 1
Preview
Connecting neural activity, perception in the visual system Figuring out how the brain uses information from visual neurons may require new tools. I asked nine experts to weigh in.

Figuring out how the brain uses information from visual neurons may require new tools, writes @neurograce.bsky.social. Hear from 10 experts in the field.

#neuroskyence

www.thetransmitter.org/the-big-pict...

13.10.2025 13:23 — 👍 58    🔁 25    💬 3    📌 3
Preview
The Rod Bipolar Cell Pathway Contributes To Surround Responses In OFF Retinal Ganglion Cells Sensory neurons can be influenced by stimuli beyond their receptive field center, yet the mechanisms underlying this surround modulation remain poorly understood. In the retina, many OFF ganglion cell...

🚨 New preprint out from our lab!
📄 The Rod Bipolar Cell Pathway Contributes to Surround Responses in OFF Retinal Ganglion Cells
👉 www.biorxiv.org/content/10.1...

07.10.2025 12:19 — 👍 17    🔁 9    💬 1    📌 0
Post image

Images (or image patches) are secretly multi-channel signals over groups. Below, the dihedral group of order 8: reflecting or rotating the image permutes the values in the magenta vector. So we can reshape the image into 8-tuples that all permute according to the dihedral group (edge case: the diagonals).
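A quick NumPy illustration of the claim (my own example: a 4×4 patch, where the orbit of pixel (0,1) under the dihedral group has all 8 elements):

```python
import numpy as np

def d4_tuple(patch, pos=(0, 1)):
    """Read the value at `pos` from each of the 8 dihedral-group
    transforms of `patch` (4 rotations + their horizontal flips),
    giving one 8-vector 'channel' indexed by group elements."""
    rots = [np.rot90(patch, k) for k in range(4)]
    return np.array([t[pos] for t in rots + [np.fliplr(r) for r in rots]])

patch = np.arange(16.0).reshape(4, 4)          # 16 distinct pixel values
t0 = d4_tuple(patch)
t1 = d4_tuple(np.rot90(patch))                 # transform the input image

# Rotating the image permutes the 8-tuple rather than changing its values:
print(sorted(t0) == sorted(t1), list(t0) != list(t1))  # True True
```

Pixels on the diagonals are the edge case mentioned in the post: their orbits have fewer than 8 elements, so their tuples carry repeated values.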

03.10.2025 09:13 — 👍 9    🔁 1    💬 2    📌 0

Thrilled to see this work accepted at NeurIPS!

Kudos to @hafezghm.bsky.social for the heroic effort in demonstrating the efficacy of seq-JEPA in representation learning from multiple angles.

#MLSky 🧠🤖

19.09.2025 18:46 — 👍 19    🔁 4    💬 1    📌 1
Video thumbnail

We present our preprint on ViV1T, a transformer for dynamic mouse V1 response prediction. We reveal novel response properties and confirm them in vivo.

With @wulfdewolf.bsky.social, Danai Katsanevaki, @arnoonken.bsky.social, @rochefortlab.bsky.social.

Paper and code at the end of the thread!

🧡1/7

19.09.2025 12:37 — 👍 17    🔁 12    💬 2    📌 0
Post image Post image Post image

Our illustrated guide to non-Euclidean ML is finally published!
Check it out for
⭐️ gorgeous figures (with new additions!) on topology, algebra, and geometry in the field
⭐️ broken down tables for easy reading
⭐️ accessible text, additional refs, and more
iopscience.iop.org/article/10.1...

01.08.2025 15:24 — 👍 52    🔁 19    💬 1    📌 5
Post image

🚨New paper🚨

Neural manifolds went from a niche word to a ubiquitous term in systems neuroscience thanks to many interesting findings across fields. But as with any emerging term, people use it very differently.

Here, we clarify our take on the term, and review key findings & challenges rdcu.be/ex8hW

01.08.2025 09:57 — 👍 155    🔁 46    💬 2    📌 1
Video thumbnail

Do you study neural systems with feedback at different temporal, spatial, hierarchical, data, or computational scales? Have you submitted your abstract to the "Neurocybernetics at Scale" symposium?
Due to multiple requests, the new abstract submission deadline is 18 July! #AI4Science #cybernetics

11.07.2025 18:49 — 👍 25    🔁 8    💬 2    📌 0

NeurReps is back for its 4th edition at NeurIPS 2025! Stay tuned for updates!

10.07.2025 13:16 — 👍 9    🔁 4    💬 0    📌 0

Exciting new preprint from the lab: β€œAdopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions.

Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy

arxiv.org/abs/2507.03168

08.07.2025 13:03 — 👍 140    🔁 59    💬 3    📌 11
Post image

Lack of spontaneous activity (#retinalwaves) in early development prevents the formation of retinal circuits critical for stabilizing images as we move through the world. doi.org/10.1016/j.ce....

30.06.2025 16:21 β€” πŸ‘ 20    πŸ” 5    πŸ’¬ 1    πŸ“Œ 0