
Abel_TM

@abeltm.bsky.social

Research Scientist. Implementing reasoning in AI. Theory and implementation of open-ended reasoning algorithms for long-term planning, robotics, math, protein design and science

85 Followers  |  722 Following  |  52 Posts  |  Joined: 19.11.2024

Latest posts by abeltm.bsky.social on Bluesky

I know I don't have a literary mind and don't have the ability to mix colors on a canvas... Equally, there are people -good at many other things- who lack a rigorous scientific mindset and education. These theories of consciousness are full of holes; that ship sinks in the waters of science

16.04.2025 21:46 · 👍 0    🔁 0    💬 0    📌 0

Indeed, there is enough knowledge to make significant progress, but it is hampered by:

- Specific people and groups with agendas that benefit from the status quo

- The concentration of most of the talent in a few companies with restricted research scope

- A lack of scientific rigor and critical thinking

05.01.2025 15:02 · 👍 2    🔁 0    💬 0    📌 0

You can still use the tool if you carefully review the output (or use a third-party verifier), even if you have no idea how the tool works. In this case it is used mostly as a probabilistic idea generator.

30.12.2024 14:15 · 👍 3    🔁 0    💬 0    📌 0
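The "idea generator plus independent verifier" workflow described above can be sketched in a few lines of Python. Everything here is hypothetical: a toy arithmetic task stands in for the real problem, and `untrusted_generator` stands in for any black-box probabilistic tool (e.g. an LLM).

```python
import random

def untrusted_generator(question, rng):
    """Stand-in for a black-box probabilistic tool (e.g. an LLM).
    It sometimes returns a plausible-looking wrong answer."""
    a, b = question
    guess = a + b
    if rng.random() < 0.3:  # simulate an occasional mistake
        guess += rng.choice([-1, 1])
    return guess

def verifier(question, answer):
    """Independent check that does not trust the generator at all."""
    a, b = question
    return answer == a + b

def solve(question, tries=20, seed=0):
    """Use the tool as an idea generator; accept only verified outputs."""
    rng = random.Random(seed)
    for _ in range(tries):
        candidate = untrusted_generator(question, rng)
        if verifier(question, candidate):
            return candidate
    return None  # no verified answer found within the budget

print(solve((17, 25)))  # deterministic with seed 0: prints 42
```

The point of the design is that correctness comes entirely from the verifier; the generator's internals can remain opaque.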

If the question translates into: can we use a tool to derive valid conclusions while ignoring its scope of applicability?
The answer there would be NO.
E.g., current AI doesn't come with guarantees of correctness strong enough to assume that 20 pages of math manipulations will not include a mistake

30.12.2024 14:15 · 👍 3    🔁 0    💬 1    📌 0

We must bear in mind that, while we are forecasting superhuman intelligence, current systems have not shown the capability of asking questions or formulating hypotheses. Somehow, some people think these come along with better problem solving and are not architectural requirements

18.12.2024 09:46 · 👍 1    🔁 0    💬 0    📌 0

I mean the way the connectivity is configured. E.g., current architectures don't allow for an arbitrary number of reasoning steps (open-endedness).

Same for the lack of robust reasoning: it should be part of the architectural design and not be expected to be consistently "discovered" during training

17.12.2024 20:11 · 👍 1    🔁 0    💬 1    📌 0

We must be clear that current systems represent, by design, a very specific architecture of ANNs.

Even if we could abstract real neurons with artificial ones, the essence of a system's dynamics relies on its architecture, which in current ANNs is radically different from the brain's

17.12.2024 07:46 · 👍 3    🔁 0    💬 1    📌 0

Saying that humans are not a form of general intelligence isn't about putting an isolated human in a test tube; it is asserting that there are subjects in which humanity can't make progress given enough time and technology. Are there such areas? cc @ylecun.bsky.social
9/9

12.12.2024 13:46 · 👍 0    🔁 0    💬 0    📌 0

Some mistakenly expect that such a capability must be encapsulated in *single* intelligent agents, but 'general intelligence' always rests on three pillars: it must be social, generational and technological.
8/

12.12.2024 13:46 · 👍 0    🔁 0    💬 1    📌 0

- Intelligence is a collection of pattern-manipulation mechanisms, and detecting similar patterns in dissimilar environments is what we call 'generalization'. Unbounded mechanisms of pattern-finding constitute 'general' intelligence.
7/

12.12.2024 13:46 · 👍 0    🔁 0    💬 1    📌 0

Current systems are no replacement for scientific research, since the topic of problem formulation is not even on the table right now. A theory-less science is as weak as hypothesis-lacking experiments
6/

12.12.2024 13:46 · 👍 0    🔁 0    💬 1    📌 0

- I am convinced that efficient intelligent systems (comparable to biological ones) will come from robust models of cognition. Several people (Chollet @fchollet.bsky.social, Y. LeCun, me, others) are working in this direction, and sooner rather than later we'll see prototypes from these projects
5/

12.12.2024 13:46 · 👍 0    🔁 0    💬 1    📌 0

The dystopian perspective is stronger in some places than in others. One thing should be clear: we should not expect technology to come to our rescue if our values are the ones in disarray, misaligned with our own interests
4/

12.12.2024 13:46 · 👍 0    🔁 0    💬 1    📌 0

Third, the scenario of a fundamentally non-aligned rogue ASI that treats humans the way we treat other species raises a deep moral and ethical question: what is the relation between economic and technological progress and human values?
3/

12.12.2024 13:46 · 👍 0    🔁 0    💬 1    📌 0

- Would an ASI decide to kill all humans? Clearly, any advanced AI will -correctly- conclude that many of our critical problems are of our own making, but as rightly pointed out, that realization is much more complex than simply noticing that removing humans is not a solution for humans
2/

12.12.2024 13:46 · 👍 0    🔁 0    💬 1    📌 0

Important questions on AI are addressed by this series.
My comments, shared with the -somewhat quieter- Bluesky audience:

- Should we teach AI like children? Learning like children requires the proper cognitive architecture, which AIs lack (similar to raising a chimp as a child)
1/

12.12.2024 13:46 · 👍 1    🔁 0    💬 1    📌 0

I would have put a like, but was stopped by the perfect number...

10.12.2024 09:03 · 👍 1    🔁 0    💬 0    📌 0

A solid one. In the first 15 minutes I realized that my anti-AI-hype view was aligned with your position. Furthermore, I like your contrarian view on the standard AI dogma.

05.12.2024 15:56 · 👍 1    🔁 0    💬 0    📌 0

Exactly. That is what I am proposing in the framework I will demo shortly (starting in MiniGrid): the set of rules (the action space) can be modified ad hoc, and the system adapts to the new conditions with robust reasoning

05.12.2024 10:43 · 👍 0    🔁 0    💬 0    📌 0

E.g.: "Find a possible sequence of movements from the start of a game of chess that leads to the white pieces delivering checkmate in four moves. Only knights and pawns can be moved"

- GPT(4o, o1-mini, o1-preview): Impossible
- Gemini-1.5-Pro-002: 1. Nf3 Nf6 2. Ng1 Ng8 3. f4 e5 4. g4 h5# ???
- Claude:

05.12.2024 10:31 · 👍 0    🔁 0    💬 1    📌 0
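A first-pass, automated check of the "only knights and pawns" constraint can be done by inspecting the SAN notation alone (pawn moves start with a file letter, knight moves with `N`). This is only a sketch of the piece-type filter: verifying full legality and the claimed mate would require a real chess engine or a library such as python-chess, which this example deliberately avoids.

```python
import re

# SAN pattern for knight moves (N...) and pawn moves (start with file a-h),
# allowing disambiguation, captures, promotions, checks and mates.
KNIGHT_OR_PAWN = re.compile(
    r"^(N[a-h1-8]?x?[a-h][1-8]"          # knight move
    r"|[a-h](x[a-h])?[1-8](=[NBRQ])?)"   # pawn push/capture, optional promotion
    r"[+#]?$"
)

def only_knights_and_pawns(moves):
    """True if every SAN move uses a knight or a pawn.
    NOTE: this checks the notation only, not legality on the board."""
    return all(KNIGHT_OR_PAWN.match(m) for m in moves)

gemini_line = ["Nf3", "Nf6", "Ng1", "Ng8", "f4", "e5", "g4", "h5#"]
print(only_knights_and_pawns(gemini_line))  # True: piece constraint holds in notation
print(only_knights_and_pawns(["Qh5"]))      # False: queen move
```

Notice that the Gemini line passes this surface check even though its claimed mate is wrong, which is exactly why a full state-tracking verifier is needed.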

In Flexibility, the system needs to adapt online to new conditions rather than rely only on pretraining (e.g. a broken leg in a multi-legged robot, or kids quickly learning to play 2x2 chess with exchanged pieces).

05.12.2024 10:31 · 👍 0    🔁 0    💬 1    📌 0

In Accuracy, we need correct state-to-state transitions; I see that in your work 'hallucinations' are reduced to less than 0.1%.

The challenge is that a single invalid transition (e.g. in theorem proving) renders the whole output invalid

05.12.2024 10:31 · 👍 0    🔁 0    💬 1    📌 0
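The compounding effect of even a 0.1% per-step error rate can be made concrete with a back-of-the-envelope calculation (the step counts are illustrative, and the independence assumption is a simplification):

```python
def chain_validity(p_step_valid, n_steps):
    """Probability that ALL n_steps transitions are valid,
    assuming independent per-step errors."""
    return p_step_valid ** n_steps

# With 99.9% per-step accuracy (0.1% 'hallucination' rate), the chance
# of an error-free chain decays: ~0.99 at 10 steps, ~0.905 at 100,
# ~0.368 at 1000.
for n in (10, 100, 1000):
    print(n, round(chain_validity(0.999, n), 3))
```

This is why "less than 0.1%" per-step hallucination is still far from good enough for proof-length derivations.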

Interesting results on the reasoning potential of LLMs. I regularly use chess to test reasoning abilities, and they usually 'hallucinate' invalid moves and positions.

From my work on general reasoning agents I see two main required properties: accuracy and flexibility.

05.12.2024 10:31 · 👍 0    🔁 0    💬 1    📌 0

Still, if some future architecture requires little to no tuning for a new task, it would seem weird to assign all the credit to the designers of the general architecture.

Highly autonomous, self-learning AIs could be creative and discover new things while still being just tools

04.12.2024 22:45 · 👍 1    🔁 0    💬 0    📌 0

Agreed, I also see AI as a tool. Nowadays we see a lot of results attributed to the AI when they actually required lots of architecture design, data selection and fine-tuning.

04.12.2024 22:45 · 👍 0    🔁 0    💬 1    📌 0

Overall I agree that it is up to us to solve our problems (which are mostly socio-cultural, with economic and environmental consequences).

Still, proving a theorem or finding a new material could be done with AI; isn't that a breakthrough?

04.12.2024 14:57 · 👍 0    🔁 0    💬 1    📌 0

AGI understood as "universal human replacement technology" is technically, socially and epistemologically rubbish

03.12.2024 09:39 · 👍 0    🔁 0    💬 0    📌 0

I still see important limitations in current architectures when doing precise algorithmic reasoning, and I imagine those issues are also present, though harder to spot, in philosophical debates

02.12.2024 14:07 · 👍 0    🔁 0    💬 0    📌 0

Certainly, we can't interpret LLM outputs in terms of meaning, correctness or wisdom the way we usually do when interacting with a person. I think we need more effort spent on educating the general public about these tools, and this article contributes one perspective

02.12.2024 14:07 · 👍 0    🔁 0    💬 1    📌 0

Huge "foundation" models are the antithesis of a general problem-solving intelligence: in their solipsistic thinking only one perspective is pushed, while new discoveries are based on novel approaches to data

01.12.2024 18:17 · 👍 0    🔁 0    💬 0    📌 0
