Congratulations to our faculty member Chris Ching on receiving the @usc.edu Bosco S. Tjan Mentorship Award. A well-deserved recognition of his dedication to mentoring and supporting the next generation of scientists! We're thrilled to celebrate this honor 🎉 🧠
Here is a great interview on her life in science samizdathealth.org/wp-content/u...
[3] amazon.com/Boy-Couldnt-...
[4] Obituary echovita.com/us/obituarie...
She trained and inspired countless generations of neuroscientists and psychiatrists.
We co-authored many papers together, as well as time-lapse movies of brain development, which she explains here: "You can make a movie of this if you are willing to wait 12 years" [1,2].
[1] youtube.com/watch?v=4ET8...
[2] youtube.com/watch?v=4ET8...
Very sad to hear of the passing of visionary neuroscientist Dr Judith Rapoport of NIMH. She pioneered neuroimaging in children and was a world expert on childhood and adolescent psychiatric conditions, including schizophrenia + OCD [3]. See video links below 👇
🔥WORKING THIS WEEKEND revising a paper [1] on how much data is needed to train a vision-language model (VLM) to classify brain diseases in radiologic images, and the enigmatic RIEMANN ZETA FUNCTION [0] magically appears (!!)
[0] en.wikipedia.org/wiki/Riemann...
[1] arxiv.org/html/2512.23...
Thanks, Conor!
Art show including paintings on Möbius strips and knots
I just saw that on Blue Ridge while looking for something else - HUGE CONGRATS!! That is really awesome :)
A reporter for The Wrap (a publication focused on entertainment, media, + Hollywood) asked me for a comment on this complex situation
thewrap.com/creative-con...
Thanks to Casey Loving for a thoughtful, nuanced article
+If the reachable transformations of the network lie in the Lie group generated by its layers, you could use this commutator (+the higher-order brackets if you like!) to test compressibility. I have not thought about multiple heads, which may increase the rank (non-compressibility) of the Lie algebra
*If layer i moves features into a region where layer j behaves differently, the Lie bracket (= how much applying layer i changes the action of layer j, minus the reverse) is large. But nearly-commuting layers are compressible, so perhaps you could use fewer layers (or 1!) if the brackets are small.
Animation of a Lie bracket*
*used in compressing neural networks such as transformers or flow maps
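The commutator test sketched above is easy to try numerically in the linear case, where the Lie bracket of two layers A and B is just AB - BA. A minimal numpy sketch with hypothetical random matrices standing in for layers (a toy illustration, not an actual transformer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two near-identity "layers" (think: residual blocks with small updates)
A = np.eye(4) + 0.01 * rng.normal(size=(4, 4))
B = np.eye(4) + 0.01 * rng.normal(size=(4, 4))
# A generic layer with no special structure
C = rng.normal(size=(4, 4))

def bracket_norm(X, Y):
    # Size of the Lie bracket [X, Y] = XY - YX: how much order of application matters
    return np.linalg.norm(X @ Y - Y @ X)

# Near-identity layers nearly commute (tiny bracket -> candidates for merging);
# the generic layer does not
print(bracket_norm(A, B))  # small
print(bracket_norm(A, C))  # much larger
```

If the bracket is small you could merge the two layers into one, since applying them in either order gives (almost) the same map.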
The exponential of a velocity field is the diffeomorphism obtained by following that velocity field for unit time, and the logarithm of a diffeomorphism, when it exists (and this is cool), is the stationary velocity field whose flow produces that map: the same idea as matrix exp and log.
*Note: we use the words exp and log for maps because diffeomorphisms form a kind of infinite-dimensional Lie group, and velocity fields are its Lie algebra. The log is the velocity at time 0 that generates the full path at time 1.
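You can see the exp-of-a-velocity-field idea in the simplest case, a linear velocity field v(x) = Ax, where flowing for unit time gives exactly the matrix exponential of A. A minimal numpy sketch (toy 2-D rotation field; the Taylor-series expm here is illustrative, fine for small matrices):

```python
import numpy as np

def expm(A, terms=30):
    # Matrix exponential via truncated Taylor series (adequate for small ||A||)
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Stationary linear velocity field v(x) = A x (a slow rotation)
A = 0.5 * np.array([[0.0, -1.0],
                    [1.0,  0.0]])
x0 = np.array([1.0, 0.0])

# Flow the velocity field for unit time with many small Euler steps
x_flow, n = x0.copy(), 1000
for _ in range(n):
    x_flow = x_flow + (A @ x_flow) / n

# The unit-time flow coincides with exp(A) applied to the start point
x_exp = expm(A) @ x0
```

The same picture generalises: for a nonlinear stationary velocity field, integrating the flow to time 1 is "exp", and recovering the generating field is "log".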
🔥So you can now generate text and molecules in one shot!!
[1] x.com/osclsd/statu... and arxiv.org/html/2602.12...
[2] x.com/PTenigma/sta...
[3] x.com/PTenigma/sta...
🔥The cool new paper [1] extends this framework to discrete data by embedding tokens in the probability simplex, allowing flows to be defined on a continuous manifold where this exact same geometric transport theory applies.
If the time-dependent flow is on the time interval [0,1], you can easily make intermediate samples by linear interpolation at times 0 < s < t < 1 and marginalise (weight these) over the data density to get the displacement of the source distribution Phi(t) given Phi(s).
...between a reference distribution (usually an n-dimensional Gaussian) and the target distribution you want to model (available as examples). 🔥And flow matching builds this flow by systematically taking pairs of points in the source and target (the target is your training examples).
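The pairing-and-interpolation step above is short enough to write down directly. A minimal numpy sketch of the flow-matching training targets (toy 2-D data: a shifted Gaussian stands in for the training examples; no network is trained here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

x0 = rng.normal(size=(n, 2))                        # source: standard Gaussian
x1 = rng.normal(size=(n, 2)) + np.array([4.0, 0.0]) # target: toy "training data"
t = rng.uniform(size=(n, 1))                        # random times in [0, 1]

# Linear interpolation between paired source/target points at time t
xt = (1 - t) * x0 + t * x1
# Conditional velocity the network would regress at (xt, t)
v_target = x1 - x0

# For this toy pairing the average velocity is just the mean shift, so one
# unit-time Euler step of the mean field carries the source mean onto the target
v_mean = v_target.mean(axis=0)
x_gen = x0 + v_mean
```

Intermediate samples at any 0 < s < t < 1 come from the same interpolation, which is the marginalisation trick mentioned above.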
🔥This really ingenious paper (Categorical Flow Matching [1]) came out today.
🔥 TL;DR: generates molecules, text, images
🔥As I said yesterday [2,3], you can use generative AI to make images (or molecules) with certain properties and learn their full distribution by learning a flow ... (thread below)
Nicely organised cats
Although I never drove an Uber, they sent tax forms to the IRS saying I earned ~$30k (got another one today). I reported the identity theft to the IRS/FTC/Uber (hopefully fixed it). Still curious who's driving an Uber as me - ask them some tough neuro questions if Paul Thompson pops up in your Uber app!!
If you like modern AI with latent diffusion + flow matching, take a look at [1], from well before latent diffusion: you will see how natural variation can arise from statistical laws built with PDEs, continuum mechanics, + Bayesian priors that arise from these operators + their Green's functions.
This later led to metric pattern theory: a general framework for understanding variation in objects, a theory of metrics on diffeomorphisms, and procedures to construct flows that do not fold (diffeomorphisms) by integrating velocity fields.
..the deformations u(x) result from a stochastic differential equation Lu = e, where L is a self-adjoint differential operator; the resulting covariance can be learned from data and may be non-stationary.
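The Lu = e construction is simple to simulate in 1-D. A minimal numpy sketch (toy periodic grid, L = alpha*I - Laplacian as an illustrative self-adjoint choice; driving with white noise e and solving for u gives a smooth random field):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Discretise the self-adjoint operator L = alpha*I - d^2/dx^2 on a periodic grid
alpha = 1.0
lap = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0   # periodic boundary conditions
L = alpha * np.eye(n) - lap     # symmetric positive definite

# Lu = e with white noise e: the "deformation" is u = L^{-1} e
e = rng.normal(size=n)
u = np.linalg.solve(L, e)

# u is a smoothed version of e: neighbouring values are strongly correlated,
# with covariance determined by the Green's function of L
```

Changing L (or making its coefficients vary over space) changes the covariance of u, which is the sense in which the prior can be non-stationary and learned from data.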
But work by Michael Miller, Ulf Grenander, and the Brown Pattern Theory school showed that natural variation in brain geometry, and function, could be modelled as a set of probabilistic transformations of a template, where ..
In the 1990s, as statistical parametric mapping was being developed, the standard way to study disease effects on the brain was to average images together.
Brilliant talk by Michael Miller at USC today. Michael has inspired countless generations of students, including me in the 1990s when his work with Ulf Grenander [1] helped new generations of mathematicians get involved with medical imaging and neuroscience.
[1] www.ams.org/journals/qam...
Brilliant to catch up with giants in neuroimaging + genetics, Anders Dale and Ole Andreassen. Thank you to Pravesh Parekh from the J Craig Venter Institute for a great talk on detecting time-dependent genomic effects on the brain, and his FEMA method to accelerate massively parallel GWAS analyses.