
Anima Anandkumar

@anima-anandkumar.bsky.social

AI Pioneer, AI+Science, Professor at Caltech, Former Senior Director of AI at NVIDIA, Former Principal Scientist at AWS AI.

316 Followers  |  196 Following  |  27 Posts  |  Joined: 12.03.2025

Posts by Anima Anandkumar (@anima-anandkumar.bsky.social)

Anima AI + Science Lab As AI+Science went more mainstream in 2025, our team's seminal contributions to the field are getting wide recognition. While language models have been used by "AI-scientists" to generate new ideas, they do not solve the main bottleneck in scientific discovery: the cost and time needed for physical experimentation is the main limiting factor in trying out many new ideas. AI with physical understanding aims to replace expensive physical experimentation with digital exploration. In 2020, we invented Neural Operators to learn multiscale physical phenomena, laying the foundations for such Physical AI. In 2025, we deepened these foundations and also pushed the frontier in a wide range of scientific applications.

My 2025 research highlights in ai+science tensorlab.cms.caltech.edu/users/anima/...

02.01.2026 20:41 — 👍 3    🔁 1    💬 0    📌 0

An exciting collaboration with @francesarnold.bsky.social on AI+enzymes. This combines generative protein models with carefully tuned filters, resulting in functional and versatile enzymes that beat natural and previously engineered ones.

16.12.2025 03:23 — 👍 4    🔁 2    💬 0    📌 0

Finetune a codon-level language model on 30k tryptophan synthases, then generate diverse, functional enzymes with broad substrate scopes.

ThΓ©ophile Lambert @jsunn-y.bsky.social @francesarnold.bsky.social

www.biorxiv.org/content/10.1...

16.12.2025 00:25 — 👍 22    🔁 4    💬 0    📌 1

Generated PLP-dependent Trp synthases are functional, stable, and exhibit usefully broad substrate scopes! Fun collaboration with @anima-anandkumar.bsky.social @ramanathanlab.bsky.social Amin Takavoli. I love AI + #enzymes!

16.12.2025 00:46 — 👍 13    🔁 1    💬 0    📌 1

Analyzing Political Text at Scale with Online Tensor LDA: @sarakangaslahti.bsky.social @harvard.edu | D Ebanks @iqss.bsky.social | @jeankossaifi.bsky.social NVIDIA | A Liu @hopkinsengineer.bsky.social @rmichaelalvarez.bsky.social @caltechlcssp.bsky.social @anima-anandkumar.bsky.social @caltech.edu

10.12.2025 20:52 — 👍 3    🔁 1    💬 1    📌 0
Post image

Join our happy hour meetup today at NeurIPS to chat about AI+Science and AI+Math with me and my team from @caltechedu at:

Achilles Coffee Roasters Gaslamp, San Diego.
4:45pm - 6:30pm

I will be announcing one more meetup later this week if you can't make it to this one. Stay tuned!

02.12.2025 18:56 — 👍 3    🔁 0    💬 0    📌 0
AI+Science Conference The California Institute of Technology and the University of Chicago are centers of gravity for the study, application, and use of AI and Machine Learning to enable scientific discovery across the physical and biological sciences, advancing core AI principles and training a new generation of interdisciplinary scientists. To both advance this scientific and technical pursuit and demonstrate the leadership of Caltech and UChicago in this space, we will host The Caltech and University of Chicago Conference on AI+Science, sponsored by the Margot and Tom Pritzker Foundation, at Caltech from November 10-11, 2025. This event will bring together an elite and diverse cohort of leading researchers in core AI and domain sciences to lead conversations and drive partnerships that will shape future inquiry, industry investment, and entrepreneurial opportunities.

Join the livestream and listen to talks at the @caltech.edu AI+Science conference aiscienceconference.caltech.edu

10.11.2025 17:43 — 👍 1    🔁 0    💬 0    📌 0
Bigger, Better Ambitions for AI How can we advance efforts to harness AI to deliver positive impacts to people's lives?

Join @aratip.bsky.social, Vivek Vishwanathan & @anima-anandkumar.bsky.social for UC Berkeley's #TechPolicyWeek!

"Bigger, Better Ambitions for AI" — exploring how #AI can drive positive impact.

Oct 20 | 3–4:15pm | 2400 Ridge Rd

@BerkeleyISchool.bsky.social @GoldmanSchool.bsky.social

11.10.2025 22:50 — 👍 1    🔁 1    💬 0    📌 0
Shaping the Water-Harvesting Behavior of Metal–Organic Frameworks Aided by Fine-Tuned GPT Models We construct a data set of metal–organic framework (MOF) linkers and employ a fine-tuned GPT assistant to propose MOF linker designs by mutating and modifying the existing linker structures. This strategy allows the GPT model to learn the intricate language of chemistry in molecular representations, thereby achieving an enhanced accuracy in generating linker structures compared with its base models. Aiming to highlight the significance of linker design strategies in advancing the discovery of water-harvesting MOFs, we conducted a systematic MOF variant expansion upon state-of-the-art MOF-303 utilizing a multidimensional approach that integrates linker extension with multivariate tuning strategies. We synthesized a series of isoreticular aluminum MOFs, termed Long-Arm MOFs (LAMOF-1 to LAMOF-10), featuring linkers that bear various combinations of heteroatoms in their five-membered ring moiety, replacing pyrazole with either thiophene, furan, or thiazole rings or a combination of two. Beyond their consistent and robust architecture, as demonstrated by permanent porosity and thermal stability, the LAMOF series offers a generalizable synthesis strategy. Importantly, these 10 LAMOFs establish new benchmarks for water uptake (up to 0.64 g g⁻¹) and operational humidity ranges (between 13 and 53%), thereby expanding the diversity of water-harvesting MOFs.

I am thrilled to see Omar Yaghi win the Nobel Prize in Chemistry today. I have had the privilege to interact and collaborate with him. This is a paper from a couple of years ago using generative models for MOFs with the @ucberkeleyofficial.bsky.social group. pubs.acs.org/doi/10.1021/...

08.10.2025 19:15 — 👍 1    🔁 0    💬 0    📌 0

Our new paper on AI-generated TrpBs with @anima-anandkumar.bsky.social. GenSLM generated very useful promiscuous TrpB #enzymes, bypassing a lot of #directedevolution! Great work by the whole team, especially Theophile Lambert. www.biorxiv.org/content/10.1...

04.09.2025 14:58 — 👍 10    🔁 1    💬 0    📌 0

Very pleased to see our AI model GenSLM designing novel and versatile enzymes in a challenging setting in the
@francesarnold.bsky.social lab, in the tryptophan synthase (TrpB) family. www.biorxiv.org/content/10.1...
The AI-created enzymes outperformed both natural and laboratory-optimized TrpBs.

03.09.2025 19:31 — 👍 2    🔁 0    💬 1    📌 0

End-to-end learning can use both approximate and accurate training data, if the model can learn how to mix them correctly. It turns out that Neural Operators offer a perfect solution when such multi-fidelity and multi-resolution data is available, and can learn with high data efficiency.

02.09.2025 00:41 — 👍 0    🔁 0    💬 0    📌 0
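The multi-fidelity mixing described above can be sketched with a toy weighted least-squares fit: many cheap low-fidelity samples carry a systematic bias, a handful of expensive high-fidelity samples are exact, and per-sample weights stand in for the mixing a neural operator would learn end to end. Everything here (the bias, the weights, the linear model) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
true_slope = 2.0

# many cheap low-fidelity samples (systematically biased by a coarse model) ...
x_lo = rng.uniform(-1.0, 1.0, 200)
y_lo = true_slope * x_lo + 0.3
# ... plus a handful of expensive, accurate high-fidelity samples
x_hi = rng.uniform(-1.0, 1.0, 5)
y_hi = true_slope * x_hi

def weighted_fit(xs, ys, ws):
    """Weighted least squares for y = slope*x + intercept."""
    A = np.stack([xs, np.ones_like(xs)], axis=1)
    sw = np.sqrt(ws)
    sol, *_ = np.linalg.lstsq(A * sw[:, None], ys * sw, rcond=None)
    return sol

xs = np.concatenate([x_lo, x_hi])
ys = np.concatenate([y_lo, y_hi])
ws = np.concatenate([np.full(200, 0.1), np.full(5, 10.0)])  # trust accurate data more
slope, intercept = weighted_fit(xs, ys, ws)
print(f"slope={slope:.3f}, intercept={intercept:.3f}")
```

The fit recovers the true slope while the high-fidelity weights pull the intercept well below the 0.3 bias that a low-fidelity-only fit would inherit.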

Our latest paper surprisingly shows that this is not the case! End-to-end learning also requires less training data than methods that keep existing numerical solvers and augment them with AI. Where do the savings come from? The augmentation approach relies only on fully accurate, expensive training data.

02.09.2025 00:40 — 👍 0    🔁 0    💬 1    📌 0

We have seen the end-to-end approach win in areas like weather forecasting. It is significantly better for speed: a thousand to a million times faster than numerical simulations in many areas such as fluid dynamics and plasma physics. But a big argument against it is the need for expensive training data.

02.09.2025 00:38 — 👍 2    🔁 0    💬 1    📌 0

The popular prescription is to augment existing workflows with AI rather than replace them, e.g., keep the approximate numerical solver for simulations and use AI only to correct its errors at every time step. The other extreme is to completely discard the existing workflow and replace it fully with AI.

02.09.2025 00:37 — 👍 1    🔁 0    💬 1    📌 0
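The two extremes in this thread can be sketched on a toy ODE, dx/dt = -x: "augment" keeps a cheap forward-Euler solver and adds a learned per-step error correction, while "replace" substitutes a single learned map for the whole step. Both learned components are stood in here by closed-form expressions (their ideal training targets); this illustrates only the shape of the two workflows, not the accuracy of either method.

```python
import math

def coarse_solver_step(x, dt):
    # cheap, approximate numerical step: forward Euler for dx/dt = -x
    return x - dt * x

def augmented_step(x, dt, correction):
    # "augment with AI": keep the solver, add a learned error correction
    return coarse_solver_step(x, dt) + correction(x, dt)

def end_to_end_step(x, dt):
    # "replace with AI": a learned map emulating the full step directly
    # (stood in here by the exact flow map, i.e. its training target)
    return x * math.exp(-dt)

# toy "learned" correction: exactly the gap between the true flow and Euler
correction = lambda x, dt: x * (math.exp(-dt) - (1.0 - dt))

x0, dt, steps = 1.0, 0.5, 4
xa = xe = x0
for _ in range(steps):
    xa = augmented_step(xa, dt, correction)   # solver call + correction per step
    xe = end_to_end_step(xe, dt)              # one learned call per step
exact = x0 * math.exp(-dt * steps)
print(xa, xe, exact)
```

In practice both learned pieces would be neural networks; the structural point is that the augmented path still pays for a solver call at every step, while the end-to-end path does not.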

How do we build AI for science? Augment with AI or replace with AI? Augmenting with AI keeps the existing numerical simulations. In our latest paper, we show that end-to-end learning is significantly faster and, counterintuitively, also wins in data efficiency. arxiv.org/pdf/2408.05177 #ai

02.09.2025 00:37 — 👍 2    🔁 0    💬 1    📌 0

Thank you @cvprconference.bsky.social for hosting the presentation of my IEEE Kiyo Tomiyasu Award for bringing AI to scientific domains with Neural Operators and physics-informed learning. The future of science is AI+Science!
corporate-awards.ieee.org/award/ieee-k...

16.06.2025 03:02 — 👍 7    🔁 0    💬 3    📌 0

🚨 We propose EquiReg, a generalized regularization framework that uses symmetry in generative diffusion models to improve solutions to inverse problems. arxiv.org/abs/2505.22973

@aditijc.bsky.social, Rayhan Zirvi, Abbas Mammadov, @jiacheny.bsky.social, Chuwei Wang, @anima-anandkumar.bsky.social 1/

12.06.2025 15:47 — 👍 3    🔁 1    💬 1    📌 0
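A minimal sketch of the general idea, under loud assumptions: EquiReg's actual loss operates on diffusion-model representations, but its core ingredient, penalizing the equivariance residual ||f(g(x)) - g(f(x))||² over group actions g, can be shown with cyclic shifts of a 1-D signal. A pointwise map commutes with shifts and incurs no penalty; a position-dependent map does not.

```python
import numpy as np

def equiv_penalty(f, x, transforms):
    """Mean squared equivariance residual ||f(g(x)) - g(f(x))||^2
    averaged over a set of group actions g (here: cyclic shifts)."""
    return float(np.mean([np.sum((f(g(x)) - g(f(x))) ** 2) for g in transforms]))

# group actions: cyclic shifts of a 1-D signal
shifts = [lambda v, s=s: np.roll(v, s) for s in (1, 2, 4)]

x = np.sin(np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False))

equivariant_f = lambda v: v ** 3                      # pointwise: commutes with shifts
broken_f = lambda v: v * np.linspace(0.0, 1.0, 32)    # position-dependent: does not

print(equiv_penalty(equivariant_f, x, shifts))  # zero penalty
print(equiv_penalty(broken_f, x, shifts))       # positive penalty
```

Used as a regularizer, the penalty steers solutions toward the symmetry-respecting ones; the function names and the shift group here are illustrative choices, not from the paper.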
The Roots of Neural Network: How Caltech Research Paved the Way to Modern AI — Caltech Magazine Tracing the roots of neural networks, the building blocks of modern AI, at Caltech. By Whitney Clavin

Thank you @caltech.edu for including me in the history of AI. It starts with Carver Mead, John Hopfield, and Richard Feynman teaching a course on the physics of computation. Not many are aware that the main AI conference, NeurIPS, started at @caltech.edu.

magazine.caltech.edu/post/ai-mach...

10.06.2025 17:34 — 👍 6    🔁 3    💬 0    📌 0

Check out our new preprint TensorGRaD.
We use a robust decomposition of the gradient tensors into low-rank + sparse parts to reduce optimizer memory for Neural Operators by up to 75%, while matching the performance of Adam, even on turbulent Navier–Stokes (Re 10^5).

03.06.2025 03:16 — 👍 30    🔁 7    💬 2    📌 2
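A hedged sketch of the low-rank + sparse idea on a plain matrix (the real method factorizes gradient tensors and stores compressed optimizer states; the function name and parameters here are made up for illustration): a truncated SVD gives the low-rank part, and the largest-magnitude residual entries give the sparse part, so only the factors and a few entries need to be kept.

```python
import numpy as np

def lowrank_plus_sparse(G, rank=4, sparse_frac=0.01):
    """Split matrix G into a rank-`rank` part L plus a sparse residual S."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # truncated SVD -> low-rank part
    R = G - L                                      # what the low-rank part misses
    k = max(1, int(sparse_frac * R.size))          # budget of sparse entries to keep
    thresh = np.partition(np.abs(R), -k, axis=None)[-k]
    S = np.where(np.abs(R) >= thresh, R, 0.0)      # keep only top-k magnitudes
    return L, S

rng = np.random.default_rng(0)
# synthetic "gradient": a low-rank signal plus one large outlier entry
G = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))
G[3, 5] += 50.0

L, S = lowrank_plus_sparse(G, rank=8, sparse_frac=0.01)
err = np.linalg.norm(G - (L + S)) / np.linalg.norm(G)
print(f"relative reconstruction error: {err:.3f}")
```

Adding the sparse part can only tighten the reconstruction, since it reproduces the residual's largest entries exactly; that robustness to outliers is the motivation for the low-rank + sparse split.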
TensorGRaD: Tensor Gradient Robust Decomposition for Memory-Efficient Neural Operator Training Scientific problems require resolving multi-scale phenomena across different resolutions and learning solution operators in infinite-dimensional function spaces. Neural operators provide a powerful fr...

Thanks to my co-authors David Pitt, Robert Joseph George, Jiawei Zhao, Cheng Luo, Yuandong Tian, Jean Kossaifi, @anima-anandkumar.bsky.social, and @caltech.edu for hosting me this spring!
Paper: arxiv.org/abs/2501.02379
Code: github.com/neuraloperat...

03.06.2025 03:16 — 👍 5    🔁 1    💬 0    📌 0
Science in the age of AI (YouTube video by Google for Developers)

It was an honor to be part of Google IO Dialogues stage and talk about AI+Science.

AI needs to understand the physical world to make new scientific discoveries.

LLMs come up with new ideas, but the bottleneck is testing them in the real world.

Physics-informed learning is needed.

youtu.be/NYtQuneZMXc?...

01.06.2025 18:21 — 👍 5    🔁 3    💬 0    📌 0
Indian American professor Anima Anandkumar on developing AI for new scientific discoveries Learn how Indian American professor Anima Anandkumar is revolutionizing the world of artificial intelligence to drive new scientific discoveries. Explore her cutting-edge research and innovative appro...

In a recent interview I talk about what it takes for AI to make new scientific discoveries. tl;dr: it won't be just LLMs. www.newindiaabroad.com/english/tech...

25.05.2025 23:26 — 👍 11    🔁 0    💬 0    📌 0
Caltech AI Professor: The One Skill AI Can't Replace | Anima Anandkumar (YouTube video by EO)

Thank you EO for coming to @caltech.edu and interviewing me on #ai. I talk about the need to keep being curious and to use AI as a tool, rather than being afraid of it. I also discuss AI for scientific modeling and discovery, and training the first high-resolution AI-based weather model. youtu.be/FIxLJVthW6I

04.05.2025 20:41 — 👍 5    🔁 0    💬 0    📌 0

We have released VARS-fUSI: Variable sampling for fast and efficient functional ultrasound imaging (fUSI) using neural operators.

The first deep learning fUSI method to allow for different sampling durations and rates during training and inference. biorxiv.org/content/10.1... 1/

28.04.2025 17:55 — 👍 12    🔁 2    💬 1    📌 0

Rayhan Zirvi is presenting our paper "Diffusion State-Guided Projected Gradient for Inverse Problems" at #ICLR2025! Joint work with @anima-anandkumar.bsky.social 1/

paper: openreview.net/pdf?id=kRBQw...
code: github.com/Anima-Lab/Di...
website: diffstategrad.github.io

24.04.2025 04:58 — 👍 2    🔁 1    💬 1    📌 0
Collage with 20 trailblazing Women of AI- Anima Anandakumar, Ayanna Howard, Cynthia Breazeal, Cynthia Rudin, Daphne Koller, Devi Parikh, Doina Precup, Fei-Fei Li, Hanna Hajishirzi, Joelle Pineau, Joy Buolamwini, Latanya Sweeney, Leslie Kaelbling, Margaret Mitchell, Melanie Mitchell, Niki Parmar, Rana el Kaliouby, Regina Barzilay, Timnit Gebru, Yejin Choi

#WomensHistoryMonth: Honoring trailblazing #WomenOfAI whose research has made an impact on the current #AI/ML revolution incl. @anima-anandkumar.bsky.social @timnitgebru.bsky.social @mmitchell.bsky.social @deviparikh.bsky.social @ajlunited.bsky.social @yejinchoinka.bsky.social @drfeifei.bsky.social

30.03.2025 19:33 — 👍 43    🔁 16    💬 0    📌 0

How does the brain integrate prior knowledge with sensory data to perceive the world?

Come check out our poster [1-090] at #cosyne2025:
"A feedback mechanism in generative networks to remove visual degradation," joint work with Yuelin Shi, @anima-anandkumar.bsky.social, and Doris Tsao. 1/2

27.03.2025 20:59 — 👍 10    🔁 3    💬 1    📌 0

Thank you, IEEE, for the honor! AI+Science is here to stay. I started working on this seriously after I joined @caltech.edu in 2017. We grounded our work in principled foundations, such as Neural Operators and physics-informed learning, for accelerating modeling and making scientific discoveries.

18.03.2025 18:16 — 👍 16    🔁 4    💬 1    📌 0
Preview
LeanAgent: Lifelong Learning for Formal Theorem Proving Large Language Models (LLMs) have been successful in mathematical reasoning tasks such as formal theorem proving when integrated with interactive proof assistants like Lean. Existing approaches involv...

LeanAgent: Lifelong learning for formal theorem proving. ~ Adarsh Kumarappan, Mo Tiwari, Peiyang Song, Robert Joseph George, Chaowei Xiao, Anima Anandkumar. arxiv.org/abs/2410.06209 #LLMs #ITP #LeanProver #Math

13.03.2025 07:42 — 👍 9    🔁 2    💬 0    📌 0