@rcpinto.bsky.social
CS / AI / ML PhD / Professor. 🇧🇷 Working on meta-learning, continual learning, control, evo, RL, game AI. Hobbyist game dev (@megamanbeyond.com). Local news and useless facts about my life in PT-BR. EN otherwise.
Oh no.............
14.08.2025 16:20

Look at the catch there, huh... so there won't be amnesty only for the ones who planned the killing?
14.08.2025 16:15

Open model performance rankings across most benchmarks these days.
Top half: China
Bottom half: Everyone else
So many such cases.
Surely there was a Hamas base hidden under the tent.
11.08.2025 02:08

I'M LAUGHING MY ASS OFF!!!
11.08.2025 00:55

HAHAHAHAHAHAHA GTFO
11.08.2025 00:53

Jeff Hawkins even wrote a book about that back in 2004.
10.08.2025 21:10

Even IQ tests have subcategories. Performing badly at one of them doesn't make you straight-up dumb. I'd say you can even get a Ph.D. (or a few).
09.08.2025 03:53

One could argue against that analogy, and that's fine. Maybe "LLM physics" would be a better term for the part we do know. What matters is that we know the lower level but are still struggling with the higher level that emerges from it.
09.08.2025 03:38

IOW, there are at least two kinds of "knowing" / "understanding" when talking about LLMs, and people usually confuse and/or conflate the two.
09.08.2025 03:30

Even if we had perfect knowledge of neuroscience, that wouldn't imply perfect knowledge of psychology.
We have perfect "LLM neuroscience", but not "LLM psychology". We know how parameters are updated and outputs are computed, but not what internal mechanisms the model is learning or how it uses them.
Seems like you're a parrot.
08.08.2025 04:28
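To make the "we know how parameters are updated and outputs are computed" level concrete, here is a toy, purely illustrative sketch (made-up vocabulary and sizes, nothing like a real LLM): the forward pass and the gradient update are fully transparent, while whatever mechanisms a trained model encodes in its weights are not.

```python
# Toy next-token model: every computation below is exact and fully known
# ("LLM neuroscience"); what a real model learns to represent by repeating
# such updates trillions of times is not ("LLM psychology").
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 50, 16                        # illustrative sizes
E = rng.normal(0, 0.1, (vocab, dim))       # embedding table (parameters)
W = rng.normal(0, 0.1, (dim, vocab))       # output projection (parameters)

def forward(token_id):
    h = E[token_id]                        # hidden state: exact, known computation
    logits = h @ W
    p = np.exp(logits - logits.max())
    return h, p / p.sum()                  # next-token distribution

# One gradient step on cross-entropy for a (context, next_token) pair:
ctx, nxt = 3, 7
h, p = forward(ctx)
grad_logits = p.copy(); grad_logits[nxt] -= 1.0   # d(loss)/d(logits)
grad_W = np.outer(h, grad_logits)                 # d(loss)/d(W)
grad_h = W @ grad_logits                          # d(loss)/d(h)
W -= 0.1 * grad_W                                 # exact, fully known updates
E[ctx] -= 0.1 * grad_h
```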
Now Atlas is running polls? The pinnacle of statistical methodology.
08.08.2025 02:38

lol
07.08.2025 18:48

Huh?
06.08.2025 02:59

Poor little Austria, so very poor.
06.08.2025 01:20

I CAN'T COMMIT CRIMES ANYMORE!!!!!!!!1!!1!111
05.08.2025 20:26

The "AI is useless" crowd should feel ashamed.
03.08.2025 18:07

The original definition of AGI in the '00s was basically "not-narrow AI" or "multi-task AI". The goalposts have moved so far by now, and they'll keep moving...
03.08.2025 17:38

I shared it with exactly that hope hahaha
01.08.2025 22:41

@daysdanilo.bsky.social look who just posted a video: www.youtube.com/watch?v=ZrN5...
01.08.2025 21:23

"I want AI to do my laundry so I can make art, not the other way around!!1!"
Ok, what now?
Yes, there is, don't be modest.
31.07.2025 15:41

Wow, he's not even going to sleep tonight.
30.07.2025 17:32

TACO
30.07.2025 02:27

HRM: Hierarchical Reasoning Model
ngl this sounds like bullshit but i don't think it is
- 27M (million parameters)
- 1000 training examples
- beats o3-mini on ARC-AGI
arxiv.org/abs/2506.21734
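For a rough sense of the two-timescale idea behind HRM as I understand it (a slow high-level recurrent module wrapped around a fast low-level one), here is a hypothetical sketch; the GRU cells, sizes, and readout are illustrative assumptions, not the paper's actual architecture:

```python
# Illustrative two-timescale recurrence: the fast low-level module takes several
# steps for every single update of the slow high-level module, and the two
# exchange hidden states. Toy dimensions; not the paper's code.
import torch
import torch.nn as nn

class TwoTimescaleSketch(nn.Module):
    def __init__(self, in_dim=64, hid=128, n_high=4, n_low=8, n_out=10):
        super().__init__()
        self.low = nn.GRUCell(in_dim + hid, hid)   # fast, detailed module
        self.high = nn.GRUCell(hid, hid)           # slow, abstract module
        self.readout = nn.Linear(hid, n_out)       # hypothetical output head
        self.n_high, self.n_low, self.hid = n_high, n_low, hid

    def forward(self, x):                          # x: (batch, in_dim)
        zL = x.new_zeros(x.size(0), self.hid)
        zH = x.new_zeros(x.size(0), self.hid)
        for _ in range(self.n_high):               # slow outer loop
            for _ in range(self.n_low):            # fast inner loop
                zL = self.low(torch.cat([x, zH], dim=-1), zL)
            zH = self.high(zL, zH)                 # high-level step sees the low-level result
        return self.readout(zH)

model = TwoTimescaleSketch()
print(model(torch.randn(2, 64)).shape)             # torch.Size([2, 10])
```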
I automatically dismiss any argument that starts with "AI will never".
27.07.2025 16:31

"Both architectures are optimized with Adam.
Who/what is "Adam"? I think this is a very serious typo…"
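"Adam" is, of course, the Adam optimizer (Kingma & Ba, 2014), not a co-author. A minimal, self-contained version of its update rule with the standard default hyperparameters:

```python
# One Adam update: moving averages of the gradient and its square, bias
# correction, then a scaled parameter step.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad            # 1st-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2         # 2nd-moment (uncentered variance) estimate
    m_hat = m / (1 - b1**t)                 # bias correction for step t (1-indexed)
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0, -2.0])
m = v = np.zeros_like(theta)
for t in range(1, 4):                       # a few steps on f(x) = x**2 / 2, so grad = theta
    theta, m, v = adam_step(theta, theta, m, v, t)
print(theta)
```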