
Geometric Intelligence Lab

@geometric-intel.bsky.social

Research lab at UCSB Engineering revealing the geometric signatures of nature and artificial intelligence | PI: @ninamiolane.bsky.social

130 Followers  |  22 Following  |  15 Posts  |  Joined: 23.11.2024


🚨 One of the coolest workshops in AI is back!

NeurReps 2025 is calling for papers on symmetry, geometry & topology in neural networks 🧠📐

If your work bridges theory & structure, don't miss this.
📅 Deadline: Aug 22

08.07.2025 23:18 – 👍 10    🔁 5    💬 0    📌 0

Featuring @haewonjeong.bsky.social and Yao Qin from @ai-ucsb.bsky.social !

11.06.2025 17:11 – 👍 4    🔁 2    💬 0    📌 0

Check out a new perspective from @ninamiolane.bsky.social on the role of human scientists in an era of ever-increasing discoveries by artificial intelligence 🧠👽

11.06.2025 18:50 – 👍 3    🔁 0    💬 0    📌 0

The era of artificial scientific intelligence is here.

As algorithms generate discoveries at scale, what role remains for human scientists? 🤔

Thanks @plosbiology.org for publishing my perspective @ucsantabarbara.bsky.social @ai-ucsb.bsky.social @ucsbece.bsky.social @ucsb-cs.bsky.social 🌟

11.06.2025 17:10 – 👍 14    🔁 8    💬 1    📌 1

Watch the winning team's approach to predicting ADHD across sexes 🧠👏 Incredible work from this year's champions!

09.06.2025 16:42 – 👍 2    🔁 0    💬 0    📌 0

Incredible work from 1000s of minds across continents advancing women's brain health. Big congrats to these standout teams! 🧠🌍

09.06.2025 16:42 – 👍 2    🔁 0    💬 0    📌 0

Discover our lab's research with the @bowers-wbhi.bsky.social on building AI models of the maternal brain
@neuromaternal.bsky.social! 🤰🧠

Thanks, @ucsbengineering.bsky.social , for the feature!

Read more: 👇

29.05.2025 15:25 – 👍 8    🔁 2    💬 0    📌 0

🚨 New preprint from the lab!

Discover fast topological neural networks that leverage higher-order structures without the usual computational burden!

By @martinca.bsky.social @gbg141.bsky.social @marcomonga.bsky.social @levtelyatnikov.bsky.social @ninamiolane.bsky.social

27.05.2025 15:17 – 👍 1    🔁 1    💬 0    📌 0

Empirically, we find that 𝐇𝐎𝐏𝐒𝐄 👀:

⚡ Outperforms GNNs in both speed and accuracy on larger datasets (MANTRA)
🥇 Achieves SOTA on topological tasks: predicting Betti numbers 1 & 2 on MANTRA
📈 Is faster, more accurate, and scalable

(5/6)

26.05.2025 11:17 – 👍 5    🔁 5    💬 1    📌 0
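For readers new to the benchmark task above: a Betti number counts the k-dimensional holes of a complex, and for small simplicial complexes it can be computed directly from boundary-matrix ranks. A minimal NumPy sketch, purely illustrative and not the HOPSE pipeline, on a hollow triangle:

```python
import numpy as np

def betti(n_k, d_k, d_k1):
    """betti_k = dim C_k - rank(boundary_k) - rank(boundary_{k+1})."""
    rk = np.linalg.matrix_rank(d_k) if d_k.size else 0
    rk1 = np.linalg.matrix_rank(d_k1) if d_k1.size else 0
    return n_k - rk - rk1

# Hollow triangle: 3 vertices, 3 edges (0,1), (1,2), (0,2), no filled 2-simplex.
# d1 maps edges to vertices (one signed column per edge).
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])
d0 = np.zeros((0, 3))  # boundary of vertices is the zero map
d2 = np.zeros((3, 0))  # no triangles

b0 = betti(3, d0, d1)  # connected components -> 1
b1 = betti(3, d1, d2)  # independent loops    -> 1
```

Filling in the 2-simplex would add a column to d2 and drop b1 back to 0, which is exactly the kind of global structure these models are asked to predict.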

The model relies on positional and structural encodings (PSEs) of the combinatorial domain. By decomposing a combinatorial representation into neighborhoods, each taking the form of a Hasse graph, the PSE of each neighborhood can stand in for the topology, letting the model learn from the PSEs alone!

(4/6)

26.05.2025 11:17 – 👍 5    🔁 3    💬 1    📌 0
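To make the PSE idea concrete (this is a generic illustration, not the paper's exact encoder): one standard choice is the Laplacian-eigenvector positional encoding, computed here on a toy Hasse graph of a filled triangle, where nodes are simplices and edges connect each simplex to its facets. All names in the sketch are made up for the example:

```python
import numpy as np

# Toy Hasse graph of a filled triangle: nodes are simplices,
# edges connect each simplex to its facets.
nodes = ["v0", "v1", "v2", "e01", "e02", "e12", "t012"]
idx = {n: i for i, n in enumerate(nodes)}
hasse_edges = [("e01", "v0"), ("e01", "v1"), ("e02", "v0"), ("e02", "v2"),
               ("e12", "v1"), ("e12", "v2"),
               ("t012", "e01"), ("t012", "e02"), ("t012", "e12")]

A = np.zeros((len(nodes), len(nodes)))
for a, b in hasse_edges:
    A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1.0

L = np.diag(A.sum(1)) - A            # combinatorial graph Laplacian
evals, evecs = np.linalg.eigh(L)     # eigenvalues in ascending order
pe = evecs[:, 1:4]                   # skip the constant eigenvector:
                                     # a 3-dim positional encoding per simplex
```

Each row of `pe` is a coordinate for one simplex; a downstream network can consume these encodings without any message passing over the complex itself.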

However, with 𝐇𝐎𝐏𝐒𝐄:
✅ Higher-order expressivity (> CCWL) 😱
✅ Linear scaling w.r.t. dataset size 🔥
✅ No message passing 💡
✅ Higher-order positional and structural encodings (PSEs) for ANY combinatorial representation

(3/6)

26.05.2025 11:17 – 👍 5    🔁 3    💬 1    📌 0

HOMP (higher-order message passing) networks are more expressive than traditional message-passing GNNs, but significantly more costly 💣💥!

(2/6)

26.05.2025 11:17 – 👍 5    🔁 3    💬 1    📌 0

🚨Higher-order combinatorial models in TDL are notoriously slow and resource-hungry. Can we do better?

Introducing:
🚀 𝐇𝐎𝐏𝐒𝐄: A Scalable Higher-Order Positional and Structural Encoder for Combinatorial Representations 🚀

📝 arXiv: arxiv.org/abs/2505.15405

🧵 (1/6)

26.05.2025 11:17 – 👍 10    🔁 5    💬 1    📌 2

TDL actually works at scale! And we believe 𝐇𝐎𝐏𝐒𝐄 lays the foundation for broad applications of TDL ✨

📭 Reach out for collaborations

Special thanks to @levtelyatnikov.bsky.social and @gbg1441 and the team @ninamiolane.bsky.social and @Coerulatus :)
(6/6)

26.05.2025 11:17 – 👍 6    🔁 4    💬 0    📌 0

Absolutely proud of this work! Huge thanks to @gbg141.bsky.social @ninamiolane.bsky.social @marcomonga.bsky.social, and of course @martinca.bsky.social, who drove the project, learned on the fly, and kept the enthusiasm high at every turn!

26.05.2025 12:01 – 👍 8    🔁 5    💬 1    📌 0

🍩 TopoTune takes any neural network as input and builds the most general TDL model to date, complete with permutation equivariance and unparalleled expressivity.

⚙️ Thanks to its implementation in TopoBench, defining and training these models only requires a few lines of code.

19.05.2025 18:04 – 👍 7    🔁 5    💬 1    📌 0

TopoTune is going to ICML 2025! 🎉🇨🇦
Curious to try topological deep learning with your custom GNN or your specific dataset? We built this for you! Find out how to get started at geometric-intelligence.github.io/topotune/

19.05.2025 18:04 – 👍 22    🔁 9    💬 1    📌 1

More 🔥🔥 speakers at the @cosynemeeting.bsky.social GNN workshop: @ninamiolane.bsky.social, who runs the @geometric-intel.bsky.social lab at UCSB. She will take us beyond 🚀🚀 GNNs, presenting a survey of message-passing topological neural networks for neuroscience

30.03.2025 10:17 – 👍 6    🔁 4    💬 1    📌 0

Thank you Guillermo Bernárdez, @clabat9.bsky.social @ninamiolane.bsky.social
for making this work possible!
🏠 @geometric-intel.bsky.social @ucsb.bsky.social

19.05.2025 18:04 – 👍 7    🔁 5    💬 0    📌 0

What if the key to finding software bugs faster lies in your GPU? 🤔 Santa Barbara folks 📢: please join us Tuesday, May 27th at 6 pm at Brass Bear Brewing in downtown SB for Gabe Pizarro's talk on using GPUs for software security.

12.05.2025 22:51 – 👍 1    🔁 0    💬 0    📌 0
Latent computing by biological neural networks: A dynamical systems framework Although individual neurons and neural populations exhibit the phenomenon of representational drift, perceptual and behavioral outputs of many neural circuits can remain stable across time scales over...

Latent computing by biological neural networks: A dynamical systems framework.

arxiv.org/abs/2502.14337

22.02.2025 10:10 – 👍 15    🔁 2    💬 0    📌 1

My PhD work is now out! Here, we set out to formalize the neural manifold hypothesis and to explain several experimental phenomena in systems neuroscience within a single theoretical framework. I highly recommend giving it a read, though the tweetprint will come much later!

08.03.2025 23:01 – 👍 10    🔁 4    💬 0    📌 0
WiDS Datathon 2025 Panel: Women's Brain Health (YouTube video by Women in Data Science Worldwide)

Join LIVE: www.youtube.com/live/v_BiTTH...

10.03.2025 18:13 – 👍 3    🔁 2    💬 0    📌 0

Join us for the #WiDS Panel on Women's Brain Health! 🧠

Why is sex-specific data crucial? From ADHD under-diagnosis to healthcare gaps, we'll dive in.

📅 March 12 | 8 AM PT
📍 YouTube (link in comment)

With @amykooz.bsky.social @ariannazuanazzi.bsky.social E. Rosenthal T. Silk R. Neuhaus & N. Williams

10.03.2025 18:13 – 👍 5    🔁 7    💬 1    📌 1

How can neural nets extract *interpretable* features from data, and uncover new science?

👉 Discover our mathematical framework tackling this question with identifiability theory, compressed sensing, interpretability & geometry! 🌐

By @david-klindt.bsky.social @rpatrik96.bsky.social C. O'Neill, H. Maurer

05.03.2025 14:41 – 👍 18    🔁 8    💬 0    📌 1

Kudos to the great team @ninamiolane.bsky.social @rpatrik96.bsky.social @charlesoneill.bsky.social Harald Maurer 🙏

04.03.2025 19:43 – 👍 6    🔁 3    💬 0    📌 0

Honestly, the real achievement? We managed to sneak pics of both our pets into the paper. 😝 🐾
Check it out and let us know what you think!
arxiv.org/abs/2503.01824

04.03.2025 19:43 – 👍 9    🔁 2    💬 1    📌 0
From superposition to sparse codes: interpretable representations in neural networks Understanding how information is represented in neural networks is a fundamental challenge in both neuroscience and artificial intelligence. Despite their nonlinear architectures, recent evidence sugg...

We're looking for feedback, and especially criticism, before sending this off to a journal. If you're into neural representations, we'd love to hear your thoughts! 📝🔥
arxiv.org/abs/2503.01824

04.03.2025 19:43 – 👍 6    🔁 4    💬 1    📌 0

At the core:
1️⃣ Identifiability theory
2️⃣ Compressed sensing
3️⃣ Quantitative interpretability

Our goal is a unified model for LRH, superposition, sparse coding, and AutoInterp, backed by theory and practical insights. 🧠🔍
arxiv.org/abs/2503.01824

04.03.2025 19:43 – 👍 6    🔁 4    💬 1    📌 0
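For concreteness, the compressed-sensing ingredient can be illustrated with a standard lasso solver (ISTA, iterative soft-thresholding). This is a generic textbook sketch, not the paper's model, and all sizes and names are arbitrary:

```python
import numpy as np

def ista(A, y, lam=0.05, steps=500):
    """Solve argmin_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - step * A.T @ (A @ x - y)                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)   # wide random sensing matrix
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]         # a 3-sparse code
y = A @ x_true                                  # 40 noiseless measurements
x_hat = ista(A, y)                              # recovers the sparse support
```

Despite only 40 measurements for 100 unknowns, the sparse code is recovered because so few entries are active; this underdetermined-but-sparse regime is the link to superposition in neural representations.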

This all started last year with a simple question: who first applied sparse coding to neural representations for more interpretable codes? That question led us to uncover links between identifiability, compressed sensing, and interpretability, a story that was too good not to tell. 🧩

04.03.2025 19:43 – 👍 9    🔁 3    💬 1    📌 0
