One of the coolest workshops in AI is back!
NeurReps 2025 is calling for papers on symmetry, geometry & topology in neural networks.
If your work bridges theory & structure, don't miss this.
Deadline: Aug 22
@geometric-intel.bsky.social
Research lab at UCSB Engineering revealing the geometric signatures of nature and artificial intelligence | PI: @ninamiolane.bsky.social
Featuring @haewonjeong.bsky.social and Yao Qin from @ai-ucsb.bsky.social!
11.06.2025 17:11
Check out a new perspective from @ninamiolane.bsky.social on the role of human scientists in an era where discoveries by artificial intelligence are increasing.
11.06.2025 18:50
The era of artificial scientific intelligence is here.
As algorithms generate discoveries at scale, what role remains for human scientists?
Thanks @plosbiology.org for publishing my perspective @ucsantabarbara.bsky.social @ai-ucsb.bsky.social @ucsbece.bsky.social @ucsb-cs.bsky.social
Watch the winning team's approach to predicting ADHD across sexes. Incredible work from this year's champions!
09.06.2025 16:42
Incredible work from thousands of minds across continents advancing women's brain health. Big congrats to these standout teams!
09.06.2025 16:42
Discover our lab's research with @bowers-wbhi.bsky.social on building AI models of the maternal brain
@neuromaternal.bsky.social!
Thanks, @ucsbengineering.bsky.social, for the feature!
Read more:
New preprint from the lab!
Discover fast topological neural networks that leverage higher-order structures without the usual computational burden!
By @martinca.bsky.social @gbg141.bsky.social @marcomonga.bsky.social @levtelyatnikov.bsky.social @ninamiolane.bsky.social
Empirically, we find that HOPSE:
Outperforms GNNs in both speed and accuracy on larger datasets (MANTRA)
Achieves SOTA on topological tasks: predicting Betti numbers 1 & 2 on MANTRA
Is faster, more accurate, and scalable
(5/6)
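For readers new to the benchmark task above: Betti numbers count independent topological features (β₀ = connected components, β₁ = loops, β₂ = voids). A minimal numpy sketch on a toy hollow triangle, with the boundary matrix written by hand; this illustrates the definition, not the paper's MANTRA pipeline:

```python
import numpy as np

# Hollow triangle: 3 vertices, 3 edges, no filled 2-simplex.
# beta_k = dim C_k - rank(d_k) - rank(d_{k+1})

# Boundary matrix d_1: rows = vertices, columns = oriented edges
# e01 = v1 - v0, e02 = v2 - v0, e12 = v2 - v1
d1 = np.array([
    [-1, -1,  0],   # vertex 0
    [ 1,  0, -1],   # vertex 1
    [ 0,  1,  1],   # vertex 2
])

n_vertices, n_edges = d1.shape
rank_d1 = np.linalg.matrix_rank(d1)

beta0 = n_vertices - rank_d1   # connected components (rank d_0 = 0)
beta1 = n_edges - rank_d1      # loops (no 2-simplices, so rank d_2 = 0)

print(beta0, beta1)  # 1 1: one component, one loop
```

A filled triangle would add a 2-simplex column to a d₂ matrix, killing the loop and giving β₁ = 0.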
The model relies on positional and structural encodings (PSEs) of the combinatorial domain. By decomposing a combinatorial representation into neighborhoods, each taking the form of a Hasse graph, we can compute a PSE per neighborhood, discard explicit processing of the topology, and learn directly from the PSEs!
(4/6)
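To make the idea concrete, here is a hedged sketch of one common PSE, Laplacian-eigenvector positional encodings, computed on a small graph standing in for one neighborhood's Hasse graph. The graph, the choice of encoding, and `laplacian_pse` are illustrative assumptions, not the paper's exact encoder:

```python
import numpy as np

def laplacian_pse(adj: np.ndarray, k: int) -> np.ndarray:
    """Return k non-trivial Laplacian eigenvectors as per-node encodings."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    eigvals, eigvecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]               # skip the constant eigenvector

# 4-cycle as a toy stand-in for a neighborhood's Hasse graph
adj = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)

pse = laplacian_pse(adj, k=2)
print(pse.shape)  # (4, 2): a 2-dim positional encoding per node
```

Once each node carries such an encoding, a downstream model can consume the PSE features directly instead of message passing over the structure.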
However, with HOPSE:
✅ Higher-order expressivity (> CCWL)
✅ Linear scaling w.r.t. dataset size
✅ No message passing
✅ Higher-order positional and structural encodings (PSEs) for ANY combinatorial representation
(3/6)
HOMP (higher-order message passing) networks are more expressive than traditional message-passing GNNs, but significantly more costly!
(2/6)
Higher-order combinatorial models in TDL are notoriously slow and resource-hungry. Can we do better?
Introducing:
HOPSE: A Scalable Higher-Order Positional and Structural Encoder for Combinatorial Representations
arXiv: arxiv.org/abs/2505.15405
(1/6)
TDL actually works at scale! And we believe HOPSE lays the foundation for broad applications of TDL.
Reach out for collaborations!
Special thanks to @levtelyatnikov.bsky.social and @gbg1441 and the team @ninamiolane.bsky.social and @Coerulatus :)
(6/6)
Absolutely proud of this work! Huge thanks to @gbg141.bsky.social @ninamiolane.bsky.social @marcomonga.bsky.social, and of course @martinca.bsky.social, who drove the project, learned on the fly, and kept the enthusiasm high at every turn!
26.05.2025 12:01
TopoTune takes any neural network as input and builds the most general TDL model to date, complete with permutation equivariance and unparalleled expressivity.
Thanks to its implementation in TopoBench, defining and training these models requires only a few lines of code.
TopoTune is going to ICML 2025!
Curious to try topological deep learning with your custom GNN or your specific dataset? We built this for you! Find out how to get started at geometric-intelligence.github.io/topotune/
More speakers at the @cosynemeeting.bsky.social GNN workshop: @ninamiolane.bsky.social, who runs the @geometric-intel.bsky.social lab at UCSB. She will take us beyond GNNs, presenting a survey of message-passing topological neural networks for neuroscience.
30.03.2025 10:17
Thank you Guillermo Bernárdez, @clabat9.bsky.social @ninamiolane.bsky.social
for making this work possible!
@geometric-intel.bsky.social @ucsb.bsky.social
What if the key to finding software bugs faster lies in your GPU? Santa Barbara folks: please join us Tuesday, May 27th at 6pm at Brass Bear Brewing in downtown SB for Gabe Pizarro's talk on using GPUs for software security.
12.05.2025 22:51
Latent computing by biological neural networks: A dynamical systems framework.
arxiv.org/abs/2502.14337
My PhD work is now out! Here, we set out to formalize the neural manifold hypothesis and explain several experimental phenomena in systems neuroscience with a single theoretical framework. I highly recommend giving it a read, though the tweetprint will come much later!
08.03.2025 23:01
Join LIVE: www.youtube.com/live/v_BiTTH...
10.03.2025 18:13
Join us for the #WiDS Panel on Women's Brain Health!
Why is sex-specific data crucial? From ADHD under-diagnosis to healthcare gaps, we'll dive in.
March 12 | 8 AM PT
YouTube link in comments
With @amykooz.bsky.social, @ariannazuanazzi.bsky.social, E. Rosenthal, T. Silk, R. Neuhaus & N. Williams
How can neural nets extract *interpretable* features from data, and uncover new science?
Discover our mathematical framework tackling this question with identifiability theory, compressed sensing, interpretability & geometry!
By @david-klindt.bsky.social @rpatrik96.bsky.social, C. O'Neill & H. Maurer
Kudos to the great team @ninamiolane.bsky.social @rpatrik96.bsky.social @charlesoneill.bsky.social Harald Maurer
04.03.2025 19:43
Honestly, the real achievement? We managed to sneak pics of both our pets into the paper.
Check it out and let us know what you think!
arxiv.org/abs/2503.01824
We're looking for feedback and especially criticism before sending this off to a journal. If you're into neural representations, we'd love to hear your thoughts!
arxiv.org/abs/2503.01824
At the core:
1. Identifiability theory
2. Compressed sensing
3. Quantitative interpretability
Our goal is a unified model for the LRH (linear representation hypothesis), superposition, sparse coding, and AutoInterp, backed by theory and practical insights.
arxiv.org/abs/2503.01824
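To give a feel for the compressed-sensing ingredient above: recover a sparse code z from measurements x = Dz by iterative soft-thresholding (ISTA). The dictionary, sparsity level, step size, and solver below are my toy choices to illustrate the idea, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 20, 50, 3                  # measurements, dictionary atoms, sparsity

# Random Gaussian dictionary and a k-sparse ground-truth code
D = rng.normal(size=(n, m)) / np.sqrt(n)
z_true = np.zeros(m)
z_true[rng.choice(m, k, replace=False)] = rng.normal(size=k)
x = D @ z_true                       # noise-free measurements

def ista(x, D, lam=0.01, steps=2000):
    """Minimize 0.5*||x - Dz||^2 + lam*||z||_1 by proximal gradient."""
    z = np.zeros(D.shape[1])
    eta = 1.0 / np.linalg.norm(D, 2) ** 2          # step from spectral norm
    for _ in range(steps):
        z = z - eta * (D.T @ (D @ z - x))          # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)  # soft threshold
    return z

z_hat = ista(x, D)
print(np.max(np.abs(z_hat - z_true)))  # max recovery error; should be small
```

Even with far fewer measurements than dictionary atoms (20 vs 50), the sparse code is recoverable, which is the core compressed-sensing phenomenon the thread connects to interpretable neural codes.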
This all started last year with a simple question: who first applied sparse coding to neural representations for more interpretable codes? That question led us to uncover links between identifiability, compressed sensing, and interpretability, a story that was too good not to tell.
04.03.2025 19:43