
Brad Aimone

@jbimaknee.bsky.social

Computational neuroscientist-in-exile; computational neuromorphic computing; putting neurons in HPC since 2011; dreaming of a day when AI will actually be brain-like.

635 Followers  |  159 Following  |  364 Posts  |  Joined: 02.12.2023

Posts by Brad Aimone (@jbimaknee.bsky.social)


Excited to be in San Antonio for the UTSA AI Matrix THOR Neuromorphic Commons kickoff!

THOR will be one of the first community resources fully dedicated to accessing scalable neuromorphic hardware! Check it out! #NeuroAI #Neuromorphic πŸ§ͺπŸ§ πŸ€–

www.neuromorphiccommons.com/events/thor_...

23.02.2026 14:47 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Am I alone in being skeptical when seeing "We need WORLD MODELS because that's what the BRAIN does!"?
Won't this just be another 'AI tech bros use the brain to get attention and $$$ but ignore it at the first opportunity'?
Why trust any of the AI crowd to talk about the brain? We need real #NeuroAI

09.02.2026 15:51 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

How a place can ruin both coffee and donuts astounds me

09.02.2026 01:10 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I've always understood the "have to start somewhere" argument and the *hope* that visual cortex is all we need, since it is easy to access and its inputs are easy and intuitive to control.

But we're, what, 75 years into V1's reign over neuro? With little generalizable to neural disorders to show for it?

Let's move on.

07.02.2026 15:32 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Is there even such a thing as academic machine learning anymore?

26.01.2026 15:48 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

If you defer leadership to industry, you relinquish any right to criticize the outcome being profit-centric.

Kudos to the BRAIN leadership for embracing the brain / AI / computing connection. That takes courage because so many neuros and AI tech bros deny the connection out of self-interest.

26.01.2026 15:47 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Maybe cynical, but any time a scientist says "We should leave AI to industry" it is because they are scared about $$ going to something they don't work on

I've heard for >10 years "let industry lead" and we have LLMs, power plants & data centers

#NeuroAI needs research, not just venture capital

26.01.2026 15:47 β€” πŸ‘ 6    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

There is plenty of space to innovate in #NeuroAI. The issue has been that neuroscientists don't even try as they assume industry will do it.

Deferring AI and neural computing to an industry that only cares about selling ads is not a way to help further our understanding of the brain.

26.01.2026 14:44 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 1

You can take BlueSky out of Twitter, but you can't keep the Twitter tech bros out of BlueSky

04.01.2026 21:16 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Well, we have another 20 years until the cortex field rediscovers a hippocampus finding.

04.01.2026 21:03 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I agree. Deep learning has huge value on its own. But it isn't brain-inspired in intent or practice.
I sometimes see "debates" with LeCun or Dally; what is the point? To convince them? Of what? It's a different field.
Neuro may be able to overcome ANN limitations, but the brain path won't come from DL.

03.01.2026 13:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

It's unfortunate, but there really isn't any other monetization path that justifies the insane capital expenses they're investing in.

02.01.2026 20:58 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview: a man with a red face standing in a spaceship, saying "it's a trap"

Don't do it Dan!

02.01.2026 18:09 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

In the end, I try to be practical about all of this. Philosophically we can debate about what true understanding means, but practically we want better and smarter AI algorithms and to be able to fix the brain. How do we do that? If digital isn't sufficient, what is the scalable alternative?

02.01.2026 18:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I'm not a Turing worshiper, but I think you're fixated too much on the discrete/analog distinction. The brain isn't analog all the way down: synapses and ion channels are really stochastic, discrete elements. It's the stochasticity, not the continuity, where the brain diverges from classical computing.

02.01.2026 18:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0
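The "stochastic, discrete elements" point can be sketched numerically: treat each synaptic release as a Bernoulli trial, and a reliable analog-looking rate only emerges by pooling many discrete random events. A toy illustration, not measured biophysics; the release probability and counts below are made up.

```python
import numpy as np

# Toy model of a stochastic, discrete synapse: each presynaptic spike
# triggers vesicle release with some probability (a Bernoulli trial).
# The numbers here are illustrative, not measured biophysics.
rng = np.random.default_rng(42)

p_release = 0.3        # hypothetical release probability per spike
n_synapses = 1000      # synapses converging on one neuron
n_spikes = 200         # presynaptic spikes observed per synapse

# Each (synapse, spike) pair either releases or doesn't: discrete events.
releases = rng.random((n_synapses, n_spikes)) < p_release

# A single synapse is unreliable across its 200 trials...
single = releases[0].mean()
# ...but the population average is a stable, nearly analog quantity.
population = releases.mean()

print(f"single synapse rate: {single:.3f}")
print(f"population rate:     {population:.3f}")
```

The discrete/analog question then becomes less interesting than the stochastic/deterministic one: the "analog" quantity here is just an average over discrete coin flips.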

You're not guilty of this to begin with. :) That's like me saying my New Year's resolution is not to start every day off with a Bloody Mary.

02.01.2026 17:27 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Well, I certainly believe that there are better models to use than serial Turing machines. Though "analog!" is a pretty weak alternative for a number of reasons.

But saying Turing computation fundamentally cannot represent what the brain is doing is a very high theoretical bar to get over.

02.01.2026 17:25 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Solving sparse finite element problems on neuromorphic hardware - Nature Machine Intelligence Theilman and Aimone introduce a natively spiking algorithm for solving partial differential equations on large-scale neuromorphic computers and demonstrate the algorithm on Intel’s Loihi 2 neuromorphi...

Take a look at this recent paper of ours. This isn't to say that the brain is doing conjugate gradient; but getting neurons to solve linear systems is not just possible, it is rather natural.

www.nature.com/articles/s42...

02.01.2026 16:52 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
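For intuition on "getting neurons to solve linear systems is rather natural": even a plain linear recurrent network relaxes to the solution of Ax = b when its state update is x ← x + Ξ·(b βˆ’ Ax). A minimal sketch using Richardson iteration β€” not the paper's spiking conjugate gradient algorithm, and A and b here are random stand-ins:

```python
import numpy as np

# A linear recurrent "network" whose fixed point is the solution of Ax = b:
#     x <- x + eta * (b - A x)
# This is Richardson iteration, shown only as intuition for how recurrent
# neural dynamics can relax to a linear system's solution. It is NOT the
# spiking conjugate gradient method from the Theilman & Aimone paper.
rng = np.random.default_rng(1)
n = 20
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite system
b = rng.normal(size=n)

eta = 1.0 / np.linalg.norm(A, 2)   # step size <= 1/lambda_max guarantees convergence
x = np.zeros(n)
for _ in range(2000):
    x = x + eta * (b - A @ x)      # "neural" state update toward equilibrium

print("residual:", np.linalg.norm(A @ x - b))
```

The update only needs a matrix-vector product and a local state change per step, which is why this style of relaxation maps comfortably onto neural substrates.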

The brain clearly solves the same types of problems (control, inference, etc.) in a different way: the same functions, but different algorithms on a different model of computation. It isn't marginalizing the brain to say that it computes; it helps demystify it. Which is what we have to do.

02.01.2026 16:49 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This is the danger of falling into the trap of implementations. Today we use a certain type of computer to do scientific computing and AI; but that doesn't mean that Von Neumann machines are the only type of computer or that sequential linear algebra is the only type of math that is useful.

02.01.2026 16:49 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Assuming the brain is representing the world for decision making and survival, it is effectively modeling the world with neurons. That's exactly what numerical computing is - modeling something with something else - the substrate is just different than transistors in a stored program architecture.

02.01.2026 16:49 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The brain isn't doing Runge-Kutta in floating point on a von Neumann architecture, but that doesn't mean the principles of applied math and theoretical computer science don't apply.

The brain isn't magic. Math and computer science apply to it, just like the laws of physics do.

02.01.2026 16:49 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Basically, I claim that neural computation is just another numerical method, with limitations like any other and amenable to analysis like any other.

We simply don't yet know what that method is.

02.01.2026 16:10 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I get what you're saying, but the brain is representing the world (external and body) inexactly. Whether digital or analog, discrete or continuous, it doesn't matter. The brain is approximating some other dynamics with its own dynamics. That approximation has numerical limitations like anything else.

02.01.2026 16:06 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

In fact, I'd go so far as to argue that many neurological disorders are a breakdown of that robustness of neural computation that we take for granted.

02.01.2026 15:02 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

That I disagree with. If the brain is computing, it has to do so reliably. Which means the same numerical stability issues matter. If I see a cat, I should always perceive a cat. And we do. Even if the underlying dynamics are chaotic.

To me, that is one of the biggest open questions in neuro.

02.01.2026 15:02 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

My takeaway, which has stuck with me for ten years since, is that the brain's computations must be abstracted from the microsecond details of biophysics we can potentially measure. The timing of spikes matters, but relatively across a population, not individually.

02.01.2026 14:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Many years ago we had a study (a file-drawer paper, sadly) that showed that no matter the numerical precision, the stiff nonlinearities and fanout of recurrent spiking circuits made simulations diverge.

At first this says simulations don't work, but the brain also has to operate reliably with such stiffness.

02.01.2026 14:24 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0
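The sensitivity being described can be reproduced in a toy recurrent spiking network β€” a caricature, not the unpublished study from the post. Run the same leaky integrate-and-fire dynamics twice, forcing a single extra spike in one run, and the fanout of that one event reshuffles spike times across the population:

```python
import numpy as np

# Toy leaky integrate-and-fire network: threshold at 1, reset to 0,
# random recurrent weights, constant suprathreshold drive.
# Illustrative only -- not the unpublished study described in the post.
def simulate(n=200, steps=500, extra_spike=False, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 1.5 / np.sqrt(n), (n, n))   # recurrent weights (fanout)
    v = rng.uniform(0.0, 1.0, n)                    # membrane potentials
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        fired = v >= 1.0                            # hard threshold: the stiff nonlinearity
        if extra_spike and t == 0:
            fired[0] = True                         # the single perturbing spike
        spikes[t] = fired
        v = 0.9 * v + w @ fired.astype(float) + 0.12  # leak + recurrence + drive
        v[fired] = 0.0                              # reset after spiking
    return spikes

a = simulate()                    # unperturbed run
b = simulate(extra_spike=True)    # identical except one forced spike at t=0

# One extra spike at t=0 alters spike trains across the population later on.
diverged = np.where((a != b).any(axis=0))[0]
print("neurons with altered spike trains:", diverged.size)
```

The same mechanism that makes the simulation fragile (one spike fanning out through hard thresholds) is what the brain somehow operates reliably on top of, which is the open question the post points at.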

The goal of neuroscience and NeuroAI really shouldn't be to have an exact in silico clone of an individual's brain. That isn't necessary for almost any helpful, societally impactful application of neuroscience research.

02.01.2026 13:47 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This boils down to what the purpose of the simulation is. Weather predictions are fine as samples over days because that saves lives and $$. I'd argue that to help treat disease - finding the locus of seizure generation, say? - a sample over a short time is fine. Perhaps also for language generation.

02.01.2026 13:47 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0