@dileeplearning.bsky.social
AGI research @DeepMind. Ex cofounder & CTO Vicarious AI (acqd by Alphabet), Cofounder Numenta. Triply EE (BTech IIT-Mumbai, MS & PhD Stanford). #AGIComics blog.dileeplearning.com

05.06.2025 21:53 · 10 likes, 2 reposts, 0 replies, 0 quotes
Ohh ok, I realize that @tyrellturing.bsky.social mentioned evolution. Fine then. But then which neuroscientist believes this?
15.05.2025 18:04 · 3 likes, 0 reposts, 1 reply, 0 quotes
Hmm… I don't think it's impossible.
Evolution could create structures in the brain that are in correspondence with structure in the world.
Here's the CSCG paper: www.nature.com/articles/s41...
And here's the CML paper:
www.nature.com/articles/s41...
Good conclusion :-).
15.05.2025 01:22 · 1 like, 0 reposts, 1 reply, 0 quotes
Somehow ChatGPT understands my opinion about successor representations? 4/
15.05.2025 01:22 · 0 likes, 0 reposts, 1 reply, 0 quotes
I didn't mention partial observability specifically, so it is impressive that this was picked up. Looks like we did something right in our CSCG paper in making this explicit? 3/
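For context on the SR points in this thread, here is a minimal tabular sketch (my own toy example, not code or notation from either paper): under a fixed policy with transition matrix T, the successor representation is M = (I − γT)⁻¹, and it is indexed by observable states, which is exactly where partial observability bites.

```python
import numpy as np

# Toy tabular successor representation (SR) -- my own sketch, not code
# from the papers discussed above. Under a fixed policy with transition
# matrix T, the SR is M = (I - gamma * T)^{-1}: row i holds the expected
# discounted future occupancy of every state, starting from state i.

gamma = 0.9
# Deterministic 3-state ring: 0 -> 1 -> 2 -> 0.
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

M = np.linalg.inv(np.eye(3) - gamma * T)
print(M.round(3))

# Sanity check: M satisfies the SR Bellman identity M = I + gamma*T*M.
assert np.allclose(M, np.eye(3) + gamma * T @ M)
```

The catch the thread is pointing at: M is indexed by the observable state. If two different contexts emit the same observation (aliasing), an observation-indexed SR conflates their rows; splitting them requires latent states, which is what CSCG makes explicit.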
15.05.2025 01:22 · 0 likes, 0 reposts, 1 reply, 0 quotes
It is quite impressive that ChatGPT picked up on these nuances, pulled a relevant quote from the paper, and even emphasized portions of the response. 2/
15.05.2025 01:22 · 0 likes, 0 reposts, 1 reply, 0 quotes
This paper turned up on a feed; I was intrigued by it and started reading...
...but then I was quite baffled, because our CSCG work seems to have tackled many of these problems in a more general setting and it's not even mentioned!
So I asked ChatGPT... ...I'm impressed by the answer. 1/🧵
Some of our work could explain this kind of latent graph learning and schema-like abstraction. 2/
arxiv.org/abs/2302.07350
Wow, very cool to see this work from Alla Karpova's lab. She had shown me the results when I visited @hhmijanelia.bsky.social and I was blown away.
www.biorxiv.org/content/10.1...
1/
it takes a bit of getting immersed in the field to know this :-)
27.04.2025 21:15 · 1 like, 0 reposts, 1 reply, 0 quotes
This comic would be an inside joke. Many neuroscience papers that study a brain region postulate that the current region does a simple transformation of the input to make it easy for 'downstream' brain regions to solve the real problem, effectively deferring the real problem. ...
27.04.2025 21:14 · 1 like, 0 reposts, 1 reply, 0 quotes
How should we define and determine a brain region's importance?
We introduce the idea of "importance" in terms of the extent to which a region's signals steer/contribute to brain dynamics as a function of brain state.
Work by @codejoydo.bsky.social
elifesciences.org/reviewed-pre...
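As a deliberately simplified illustration of that notion of importance (my own toy linear system; not the metric, data, or code from the paper): score each "region" by how much the network's trajectory changes when that region's outgoing signals are silenced, starting from a given state.

```python
import numpy as np

# Toy "lesion" score for region importance -- my own illustration, not
# the analysis from the paper. Dynamics: x_{t+1} = A @ x_t. A region's
# score is how far the trajectory drifts when its outgoing influence
# (column i of A) is zeroed out.

rng = np.random.default_rng(0)
n = 4
A = rng.normal(scale=0.3, size=(n, n))

def trajectory(A, x0, steps=20):
    xs = [x0]
    for _ in range(steps):
        xs.append(A @ xs[-1])
    return np.stack(xs)

x0 = rng.normal(size=n)      # the "brain state" we start from
base = trajectory(A, x0)

scores = []
for i in range(n):
    A_lesioned = A.copy()
    A_lesioned[:, i] = 0.0   # silence region i's outgoing signals
    scores.append(np.linalg.norm(base - trajectory(A_lesioned, x0)))

print(scores)
```

Because the score depends on x0, repeating this from different initial conditions gives the state-dependence the post mentions: the same region can matter a lot in one regime and little in another.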
It's kinda obvious. #AGIComics has already figured out which brain region is the most important.
27.04.2025 20:56 · 25 likes, 5 reposts, 3 replies, 0 quotes
And whether top-down influence is multiplicative or not is very context-dependent (this is also what is seen in neurobiology). www.science.org/doi/10.1126/...
26.04.2025 00:59 · 5 likes, 0 reposts, 1 reply, 0 quotes
www.science.org/doi/10.1126/...
26.04.2025 00:57 · 0 likes, 0 reposts, 1 reply, 0 quotes
Explaining away is a concrete role for feedback computations, and here is one example showing its effect. ...there are many more examples in the paper.
www.science.org/cms/10.1126/...
Just want to point back to this work and remind you that we have proposed and built models that incorporate top-down feedback, and have a theory about it.
www.science.org/doi/10.1126/...
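For anyone unfamiliar with the term, a textbook noisy-OR toy of explaining away (my own minimal example, not the model from the Science paper): two independent binary causes A and B share one effect E. Observing E raises belief in A; additionally observing B explains the evidence away and drops belief in A back toward its prior. Getting that drop right requires information to flow from the effect level back to the cause level, i.e., feedback.

```python
# Explaining away in a two-cause noisy-OR network -- a standard textbook
# toy, not the model from the paper. P(A=1)=P(B=1)=0.1; each present
# cause independently triggers the effect E with probability 0.9.

pA = pB = 0.1

def p_e_given(a, b):
    # Noisy-OR with no leak: E fails only if every present cause fails.
    return 1 - (1 - 0.9 * a) * (1 - 0.9 * b)

def joint(a, b):
    # P(A=a, B=b, E=1)
    return (pA if a else 1 - pA) * (pB if b else 1 - pB) * p_e_given(a, b)

p_e = sum(joint(a, b) for a in (0, 1) for b in (0, 1))
p_a_given_e = sum(joint(1, b) for b in (0, 1)) / p_e        # P(A=1 | E=1)
p_a_given_e_b = joint(1, 1) / (joint(0, 1) + joint(1, 1))   # P(A=1 | E=1, B=1)

print(p_a_given_e, p_a_given_e_b)
```

Belief in A jumps from the 0.1 prior to about 0.53 on seeing E, then falls to about 0.11 once B is also observed: B explains E away. A purely feedforward pass cannot implement that second update.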
Ohh... yes... this is exactly what I think after reading some of the "deep research" reports. ...written by a committee.
30.03.2025 01:30 · 9 likes, 0 reposts, 0 replies, 0 quotes
jumping on the Gemini 2.5 bandwagon... it's an incredible model. really feels like an(other) inflection point. talking to Claude 3.7 feels like talking to a competent colleague who knows about everything, but makes mistakes. Gemini 2.5 feels like talking to a world-class expert with A+ intuitions
28.03.2025 17:16 · 58 likes, 4 reposts, 10 replies, 4 quotes
already being scaled up...
27.03.2025 16:21 · 0 likes, 0 reposts, 0 replies, 0 quotes
We could run it to analyze a transformer...
27.03.2025 16:13 · 1 like, 0 reposts, 0 replies, 0 quotes
How about running an experiment that applies this process to a known system? Assume the limits of sampling and see what kinds of insights we can get?
27.03.2025 16:12 · 2 likes, 0 reposts, 1 reply, 0 quotes
I just don't understand the proposal… What models are going to be the "foundation" for these brain models? Is it the transformer architecture? …because that is the one that is proven to be scalable so far. If so, how are brain insights going to be extracted from a trained transformer?
26.03.2025 23:10 · 0 likes, 0 reposts, 2 replies, 0 quotes
Well, benchmarks are useful; greedy hill climbing on them might lead you to new opportunities even if it doesn't lead you to new insights. And the job market can remain irrational longer than you can remain solvent :-).
26.03.2025 22:54 · 0 likes, 0 reposts, 1 reply, 0 quotes
Give me 10 billion dollars and I'll do it. 1 billion for developing the hardware and 9 billion to pay for my opportunity cost.
26.03.2025 22:15 · 6 likes, 0 reposts, 0 replies, 0 quotes
Sure. Can you give me 10 billion dollars?
26.03.2025 22:12 · 3 likes, 0 reposts, 2 replies, 0 quotes
What's your beef with "processing"? IMO, the specific architectural modifications are about "processing". The attention circuit is "processing". No?
26.03.2025 22:11 · 2 likes, 0 reposts, 1 reply, 0 quotes
Again, tell me which architecture+algorithm you want to scale.
26.03.2025 22:04 · 0 likes, 0 reposts, 1 reply, 0 quotes