
Federico D’Agostino

@fededagos.bsky.social

PhD student @ Uni Tübingen | @bethgelab.bsky.social | Computational Neuroscience & ML

384 Followers  |  385 Following  |  21 Posts  |  Joined: 17.11.2024

Latest posts by fededagos.bsky.social on Bluesky

Preview: A large-scale dataset of functional mouse ganglion cell layer responses

We’re excited to share ALL-GCL: a large-scale dataset of 2P Ca²⁺ imaging from 80,000+ cells in the mouse retinal ganglion cell layer, collected over 9 years. Includes rich metadata, shared stimuli & cell-type assignments designed for type-specific analyses, modeling, and ML.

📄: tinyurl.com/ymn53frf

17.12.2025 11:14 — 👍 12    🔁 4    💬 1    📌 0

This is a concrete step toward bridging the performance/understanding gap in vision science.

📄 Paper: openreview.net/forum?id=cnr...
⚙️ Code: github.com/bethgelab/wh...

🙏 A joint effort with @matthiaskue.bsky.social, Lisa Schwetlick, @bethgelab.bsky.social

#NeurIPS #CognitiveModeling

30.11.2025 21:23 — 👍 3    🔁 0    💬 0    📌 0

💬 Conceptually: Deep neural networks should be viewed as scientific instruments. They tell us what is predictable in human behavior.

We then use that information to ask why, building fully interpretable models that approach the performance of their black-box counterparts.

30.11.2025 21:23 — 👍 1    🔁 0    💬 1    📌 0
Post image

📈 The Result: SceneWalk-X (also re-implemented in #JAX ⚡)

These 3 mechanisms double SceneWalk’s explained variance on the MIT1003 dataset (from 35% to 70%)! We closed over 56% of the gap to deep networks, setting a new state of the art for mechanistic scanpath prediction.

30.11.2025 21:23 — 👍 0    🔁 0    💬 1    📌 0
Post image

↔️ 3. Cardinal + Leftward Bias

People tend to move their eyes more horizontally, and display a subtle initial bias for leftward movements. Adding this adaptive attentional prior further stabilized the model.
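As a toy illustration of such a prior (the functional form, names, and constants below are made-up assumptions, not the paper's), one could combine a cardinal term that peaks at horizontal directions with a leftward term that fades over successive fixations:

```python
import math

def direction_prior(theta, fixation_index, cardinal=0.5, leftward=0.3, left_decay=0.8):
    """Unnormalized prior over saccade direction theta (0 = rightward, pi = leftward).
    cos(2*theta) peaks at the horizontal directions (cardinal bias), while a
    -cos(theta) term favoring leftward movement decays away over fixations."""
    left_bias = leftward * math.exp(-left_decay * fixation_index)
    return math.exp(cardinal * math.cos(2.0 * theta) - left_bias * math.cos(theta))

early_left = direction_prior(math.pi, 0)       # leftward, first fixation: favored
early_right = direction_prior(0.0, 0)          # rightward, first fixation
upward = direction_prior(math.pi / 2.0, 0)     # vertical: penalized by the cardinal term
late_left = direction_prior(math.pi, 10)       # leftward bias has faded by now
late_right = direction_prior(0.0, 10)
```

With these toy parameters, horizontal directions beat vertical ones throughout, and the left-over-right advantage is present early but vanishes later, matching the "subtle initial" leftward bias described above.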

30.11.2025 21:23 — 👍 0    🔁 0    💬 1    📌 0
Post image

➡️ 2. Saccadic Momentum

The eyes tend to keep moving in the same direction, especially after long saccades. We captured this bias by adding a dynamic directional map that adapts based on the previous eye movement.
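A minimal sketch of how such a momentum term might look, using a von-Mises-style weight whose concentration grows with the previous saccade's amplitude (the form and constants are illustrative assumptions, not the paper's implementation):

```python
import math

def momentum_weight(direction, prev_direction, prev_amplitude, kappa_per_deg=0.2):
    """Von-Mises-style bonus for continuing in the previous saccade's direction.
    The concentration kappa grows with the previous saccade's amplitude, so long
    saccades induce a stronger forward bias. Normalized so that the exact
    continuation direction always receives weight 1.0."""
    kappa = kappa_per_deg * prev_amplitude
    return math.exp(kappa * (math.cos(direction - prev_direction) - 1.0))

same = momentum_weight(0.0, 0.0, prev_amplitude=10.0)          # continue rightward
reverse = momentum_weight(math.pi, 0.0, prev_amplitude=10.0)   # reverse after a long saccade
short = momentum_weight(math.pi, 0.0, prev_amplitude=1.0)      # reverse after a short saccade
```

Reversing direction is strongly down-weighted after a long saccade but only mildly after a short one, which is the amplitude dependence the post describes.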

30.11.2025 21:23 — 👍 0    🔁 0    💬 1    📌 0
Post image

🔥 1. Time-Dependent Temperature Scaling

Early fixations are more focused (exploitative), later ones become more exploratory. We modeled this with a decaying “temperature” that controls the determinism of fixation choices over time.
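One way to sketch this mechanism (a toy illustration, not the paper's actual parameterization) is a softmax over saliency scores whose inverse temperature decays with fixation index, so early choices are sharp and later ones approach uniform:

```python
import math

def fixation_probs(scores, beta):
    """Softmax over saliency scores with inverse temperature beta:
    high beta -> near-deterministic (exploitative),
    low beta -> near-uniform (exploratory)."""
    m = max(scores)
    exps = [math.exp(beta * (s - m)) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def decaying_beta(t, beta0=8.0, beta_inf=1.0, rate=0.7):
    """Determinism decays: inverse temperature falls from beta0 toward
    beta_inf as the fixation index t grows."""
    return beta_inf + (beta0 - beta_inf) * math.exp(-rate * t)

scores = [0.1, 0.5, 0.9, 0.3]   # toy saliency values at four candidate locations
early = fixation_probs(scores, decaying_beta(0))    # sharply peaked: exploitative
late = fixation_probs(scores, decaying_beta(10))    # much flatter: exploratory
```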

30.11.2025 21:23 — 👍 1    🔁 0    💬 1    📌 0

From these systematic failures, we isolated three critical mechanisms SceneWalk was missing.

The data pointed to known cognitive principles, but revealed critical new nuances. Our method showed us not just what was missing, but how to formulate it to match human behavior. 👇

30.11.2025 21:23 — 👍 0    🔁 0    💬 1    📌 0
Post image

💡 Our idea: Use the deep model not just to chase performance, but as a tool for scientific discovery.

We isolate "controversial fixations" where DeepGaze's likelihood vastly exceeds SceneWalk's.
These reveal where the mechanistic model fails to capture predictable patterns.
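In pseudocode terms, selecting such fixations amounts to thresholding the per-fixation log-likelihood difference between the two models (a sketch with toy numbers; the function name and threshold are hypothetical, not from the paper):

```python
def controversial_fixations(deepgaze_ll, scenewalk_ll, threshold=2.0):
    """Return indices of fixations whose DeepGaze log-likelihood exceeds
    SceneWalk's by more than `threshold` nats: cases the mechanistic model
    misses even though the behavior is predictable."""
    return [i for i, (dg, sw) in enumerate(zip(deepgaze_ll, scenewalk_ll))
            if dg - sw > threshold]

# Toy per-fixation log-likelihoods (hypothetical numbers):
dg = [-1.0, -0.5, -3.0, -0.8]
sw = [-1.2, -4.0, -3.1, -0.9]
print(controversial_fixations(dg, sw))  # → [1]
```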

30.11.2025 21:23 — 👍 2    🔁 1    💬 1    📌 0

Science often faces a choice:

Build models primarily designed to predict, or models that compactly explain. But what if we used them in synergy?

Our paper tackles this head-on. We combine a deep network (DeepGaze III) with an interpretable mechanistic model (SceneWalk).

30.11.2025 21:23 — 👍 0    🔁 0    💬 1    📌 0
Post image

🚨 New paper at #NeurIPS2025!

A systematic fixation-level comparison of a performance-optimized DNN scanpath model and a mechanistic cognitive model reveals behaviourally relevant mechanisms that can be added to the mechanistic model to substantially improve performance.

🧵👇

30.11.2025 21:23 — 👍 10    🔁 5    💬 2    📌 3

Thanks for sharing this!
I was not aware of it, but it looks really relevant. No problem if your lab is no longer working on this much; we will try to incorporate it in the future and reach out if we have any trouble 😉
This is exactly the kind of engagement we hoped to get!

17.03.2025 13:05 — 👍 1    🔁 0    💬 1    📌 0

Try it out and help us improve the accessibility of retinal datasets and models together.

A team effort with:
@thomaszen.bsky.social
@dgonschorek.bsky.social
@lhoefling.bsky.social
@teuler.bsky.social
@bethgelab.bsky.social

#openscience #computationalneuroscience (9/9)

14.03.2025 09:41 — 👍 4    🔁 1    💬 0    📌 0

This is just the beginning.
We see openretina as more than a Python package: it aims to be the start of an initiative to foster open collaboration in computational retina research.
We’d love your feedback! (8/9)

14.03.2025 09:41 — 👍 2    🔁 0    💬 1    📌 0

Researchers can use openretina to:
✅ Explore pre-trained models in minutes
✅ Train their own models
✅ Contribute datasets & models to the community (7/9)

14.03.2025 09:41 — 👍 1    🔁 0    💬 2    📌 0
Post image

The currently supported models follow a Core + Readout architecture:
🔸 Core: Extracts shared retinal features across data recording sessions
🔸 Readout: Maps shared features to individual neuron responses
🔹 Includes pre-trained models & easy dataset loading (6/9)
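To make the Core + Readout idea concrete, here is a deliberately tiny, dependency-free sketch. It is not openretina's actual API, and real cores are deep (often convolutional) networks trained on recorded responses:

```python
import math
import random

class CoreReadout:
    """Toy core + readout model: a shared linear 'core' maps the stimulus to
    features reused across recording sessions, and per-neuron 'readout'
    weights map those features to each neuron's predicted firing rate.
    Purely illustrative, not openretina's implementation."""

    def __init__(self, n_inputs, n_features, n_neurons, seed=0):
        rng = random.Random(seed)
        self.core = [[rng.gauss(0.0, 0.1) for _ in range(n_inputs)]
                     for _ in range(n_features)]
        self.readout = [[rng.gauss(0.0, 0.1) for _ in range(n_features)]
                        for _ in range(n_neurons)]

    def forward(self, stimulus):
        # Shared features: one dot product per core feature.
        feats = [sum(w * x for w, x in zip(row, stimulus)) for row in self.core]
        # Softplus output nonlinearity keeps predicted rates non-negative.
        return [math.log1p(math.exp(sum(w * f for w, f in zip(row, feats))))
                for row in self.readout]

model = CoreReadout(n_inputs=16, n_features=4, n_neurons=3)
rates = model.forward([0.5] * 16)   # predicted responses of 3 model neurons
```

The design point is the split: the core is shared across sessions and species, while only the lightweight readout is specific to each recorded neuron.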

14.03.2025 09:41 — 👍 2    🔁 0    💬 1    📌 0

Why does it matter?
Current retina models are often dataset-specific, limiting generalization.
With openretina, we integrate:
🐭 🦎 🐒 Data from multiple species
🎥 Different stimuli & recording modalities
🧠 Deep learning models that can be trained across datasets (5/9)

14.03.2025 09:41 — 👍 3    🔁 0    💬 1    📌 0

What is openretina?
It’s a Python package built on PyTorch, designed for:
🔹 Training deep learning models on retinal data
🔹 Sharing and using pre-trained retinal models
🔹 Cross-dataset, cross-species comparisons
🔹 In-silico hypothesis testing & experiment guidance (4/9)

14.03.2025 09:41 — 👍 2    🔁 0    💬 1    📌 0
Preview: openretina: Collaborative Retina Modelling Across Datasets and Species

📄 Paper: www.biorxiv.org/content/10.1...
📦 Code: github.com/open-retina/...
🔧 pip install openretina
📖 Docs: coming soon at open-retina.org (3/9)

14.03.2025 09:41 — 👍 1    🔁 0    💬 1    📌 0
Post image

Understanding the retina is crucial for decoding how visual information is processed. However, decades of data and models remain scattered across labs and approaches. We introduce openretina to unify retinal system identification. (2/9)

14.03.2025 09:41 — 👍 4    🔁 1    💬 1    📌 0
Post image

🚨 New paper alert! 🚨
We’ve just launched openretina, an open-source framework for collaborative retina modeling across datasets and species.
A 🧵👇 (1/9)

14.03.2025 09:41 — 👍 38    🔁 20    💬 1    📌 1

Incredibly honoured to have been a part of this!

01.03.2025 13:32 — 👍 4    🔁 0    💬 0    📌 0
