I know I don't have a literary mind, or the ability to mix colors on a canvas... Equally, there are people, good at many other things, who lack a rigorous scientific mindset and education. These theories of consciousness are full of holes; that ship sinks in the waters of science.
16.04.2025 21:46 · 0 likes · 0 reposts · 0 replies · 0 quotes
Indeed, there is enough knowledge to make significant progress, but it is hampered by:
- Specific people and groups with agendas that benefit from the status quo
- The concentration of most of the talent in a few companies with a restricted research scope
- A lack of scientific rigor and critical thinking
05.01.2025 15:02 · 2 likes · 0 reposts · 0 replies · 0 quotes
You can still use the tool if you carefully review the output (or use a third-party verifier), even if you have no idea how the tool works. In this case it is used mostly as a probabilistic idea generator.
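A minimal sketch of this generate-then-verify pattern (a toy example: a random proposer stands in for the probabilistic tool, and the verifier is an independent, fully trusted check):

```python
import random

def untrusted_generator(n, attempts=1000):
    """Stand-in for a probabilistic tool: proposes candidate
    nontrivial factors of n with no correctness guarantee."""
    for _ in range(attempts):
        yield random.randint(2, n - 1)

def verify(n, candidate):
    """Independent check: a proposed factor is valid iff it divides n."""
    return n % candidate == 0

def factor_with_verification(n):
    """Accept a proposal only after it passes the independent check,
    so the generator's internals never need to be trusted."""
    for candidate in untrusted_generator(n):
        if verify(n, candidate):
            return candidate
    return None  # no proposal survived verification

print(factor_with_verification(91))  # 7 or 13, whichever is hit first
```

Anything the verifier accepts is correct by construction; the quality of the generator only affects how often (and how fast) you get an answer, not whether the answer is valid.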
30.12.2024 14:15 · 3 likes · 0 reposts · 0 replies · 0 quotes
If the question translates into: can we use a tool to derive valid conclusions while ignoring its scope of applicability?
The answer there would be NO.
E.g., current AI offers no guarantees of correctness, so we cannot assume that 20 pages of mathematical manipulations will not include a mistake.
30.12.2024 14:15 · 3 likes · 0 reposts · 1 reply · 0 quotes
We must bear in mind that while we are forecasting superhuman intelligence, current systems have not shown capabilities in asking questions or formulating hypotheses. Somehow, some people think these come along with better problem solving and are not architectural requirements.
18.12.2024 09:46 · 1 like · 0 reposts · 0 replies · 0 quotes
I mean the way the connectivity is configured. E.g., current architectures don't allow for an arbitrary number of reasoning steps (open-endedness).
The same goes for the lack of robust reasoning: it should be part of the architectural design, not something expected to be consistently "discovered" during training.
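As a loose analogy (not a claim about any particular architecture), the difference between a fixed number of processing steps and open-ended computation can be illustrated with iterative refinement; here, Newton's method for square roots:

```python
def fixed_depth_sqrt(x, steps=3):
    """Fixed-architecture analogy: a hard-wired number of refinement
    steps, applied whether or not the answer has converged."""
    guess = x
    for _ in range(steps):
        guess = 0.5 * (guess + x / guess)
    return guess

def open_ended_sqrt(x, tol=1e-12, max_steps=10_000):
    """Open-ended analogy: keep refining until a stopping criterion
    is met, so the number of steps adapts to the input."""
    guess = x
    for _ in range(max_steps):
        nxt = 0.5 * (guess + x / guess)
        if abs(nxt - guess) < tol:
            return nxt
        guess = nxt
    return guess

# A hard input: three fixed steps leave a huge error, while the
# adaptive loop simply runs for as long as the input requires.
print(fixed_depth_sqrt(1e6))   # far from 1000.0
print(open_ended_sqrt(1e6))    # ~1000.0
```

The point of the analogy: when the step count is baked in, inputs that need more computation than the budget allows simply fail, no matter how good each individual step is.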
17.12.2024 20:11 · 1 like · 0 reposts · 1 reply · 0 quotes
We must be clear that current systems represent a very specific architecture of ANNs by design.
Even if we could abstract real neurons with artificial ones, the essence of a system's dynamics relies on its architecture, which in current ANNs is radically different from the brain's.
17.12.2024 07:46 · 3 likes · 0 reposts · 1 reply · 0 quotes
Saying that humans are not a form of general intelligence isn't about putting an isolated human in a test tube; it is asserting that there are subjects in which humanity can't make progress given enough time and technology. Are there such areas? cc @ylecun.bsky.social
9/9
12.12.2024 13:46 · 0 likes · 0 reposts · 0 replies · 0 quotes
Some mistakenly expect that such capability must be encapsulated in *single* intelligent agents, but "general intelligence" always relies on three pillars: it must be social, generational, and technological.
8/
12.12.2024 13:46 · 0 likes · 0 reposts · 1 reply · 0 quotes
- Intelligence is a collection of pattern-manipulation mechanisms, and detecting similar patterns in dissimilar environments is what we call "generalization". Unbounded mechanisms of pattern-finding constitute "general" intelligence.
7/
12.12.2024 13:46 · 0 likes · 0 reposts · 1 reply · 0 quotes
Current systems are no replacement for scientific research, since the topic of problem formulation is not even on the table right now. A theory-less science is as weak as hypothesis-lacking experiments.
6/
12.12.2024 13:46 · 0 likes · 0 reposts · 1 reply · 0 quotes
- I am convinced that efficient intelligent systems (comparable to biological ones) will come from robust models of cognition. Several people (Chollet @fchollet.bsky.social, Y. LeCun, me, others) are working in this direction, and sooner rather than later we'll see some prototypes of these projects.
5/
12.12.2024 13:46 · 0 likes · 0 reposts · 1 reply · 0 quotes
The dystopian perspective is stronger in some places than in others. One thing should be clear: we should not expect technology to come to our rescue if our values are the ones in disarray, misaligned with our own interests.
4/
12.12.2024 13:46 · 0 likes · 0 reposts · 1 reply · 0 quotes
Third, a fundamentally nonaligned rogue ASI that treats humans as we treat other species raises a deep moral and ethical question: what is the relation between economic and technological progress and human values?
3/
12.12.2024 13:46 · 0 likes · 0 reposts · 1 reply · 0 quotes
- Would an ASI decide to kill all humans? Clearly, any advanced AI will, correctly, conclude that many of our critical problems are of our own making, but, as rightly pointed out, this realization is much more complex than noticing that removing humans is not a solution for humans.
2/
12.12.2024 13:46 · 0 likes · 0 reposts · 1 reply · 0 quotes
Important questions on AI are addressed by this series.
My comments, shared with the (somewhat quieter) Bluesky audience:
- Should we teach AI like children? Learning like children requires the proper cognitive architecture, which AIs lack (similar to raising a chimp as a child).
1/
12.12.2024 13:46 · 1 like · 0 reposts · 1 reply · 0 quotes
I would have given it a like, but I was stopped by the perfect number...
10.12.2024 09:03 · 1 like · 0 reposts · 0 replies · 0 quotes
A solid one. In the first 15 minutes I realized that my anti-AI-hype view was aligned with your position. Furthermore, I like your contrarian take on the standard AI dogma.
05.12.2024 15:56 · 1 like · 0 reposts · 0 replies · 0 quotes
Exactly. That is what I am proposing in my framework, which I will demo shortly (starting with MiniGrid): the set of rules (the action space) can be modified ad hoc, and the system adapts with robust reasoning to the new conditions.
05.12.2024 10:43 · 0 likes · 0 reposts · 0 replies · 0 quotes
Eg: "Find a possible sequence of movements from the start of a game of chess that leads to white pieces delivering checkmate in four moves. Only knights and pawns can be moved"
- GPT(4o, o1-mini, o1-preview): Impossible
- Gemini-1.5-Pro-002: 1. Nf3 Nf6 2. Ng1 Ng8 3. f4 e5 4. g4 h5# ???
- Claude:
05.12.2024 10:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
In Flexibility, the system needs to adapt online to new conditions and not rely only on pretraining; e.g., a multilegged robot with a broken leg, or kids quickly learning to play 2x2 chess with exchanged pieces.
05.12.2024 10:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
In Accuracy, we need correct state-to-state transitions; I see that in your work "hallucinations" are reduced to less than 0.1%.
The challenge is that a single invalid transition (e.g., in theorem proving) renders the whole output invalid.
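Assuming independent errors (an illustrative simplification), the point about a single invalid transition can be quantified: even a 0.1% per-step hallucination rate compounds quickly over a long derivation:

```python
def prob_fully_valid(per_step_validity: float, n_steps: int) -> float:
    """Probability that every one of n independent transitions is valid,
    i.e. that the whole derivation survives end to end."""
    return per_step_validity ** n_steps

# 99.9% per-step validity still collapses over long derivations.
for n in (10, 100, 1000, 5000):
    print(f"{n:>5} steps: {prob_fully_valid(0.999, n):.3f}")
# -> roughly 0.990, 0.905, 0.368, 0.007
```

So a derivation a thousand transitions long is more likely invalid than valid, even at a per-step error rate that sounds negligible.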
05.12.2024 10:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
Interesting results on reasoning potential with LLMs. I regularly use chess to test reasoning abilities, and they usually "hallucinate" invalid moves and positions.
From my work on general reasoning agents I see two main required properties: accuracy and flexibility.
05.12.2024 10:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
Still, if some future architecture requires little to no tuning for a new task, it would seem weird to assign all the credit to the designers of the general architecture.
Highly autonomous, self-learning AIs could be creative, discover new things, and still be just tools.
04.12.2024 22:45 · 1 like · 0 reposts · 0 replies · 0 quotes
Agreed, I also see AI as a tool. Nowadays we see a lot of results attributed to the AI when they actually required a lot of architecture design, data selection, and finetuning.
04.12.2024 22:45 · 0 likes · 0 reposts · 1 reply · 0 quotes
Overall, I agree that it is up to us to solve our problems (which are mostly socio-cultural, with economic and environmental consequences).
Still, proving a theorem or finding a new material could be done with AI; isn't that a breakthrough?
04.12.2024 14:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
AGI understood as "universal human replacement technology" is technically, socially, and epistemologically rubbish.
03.12.2024 09:39 · 0 likes · 0 reposts · 0 replies · 0 quotes
I still see important limitations in current architectures when doing precise algorithmic reasoning, and I imagine those issues are also present, though harder to spot, in philosophical debates.
02.12.2024 14:07 · 0 likes · 0 reposts · 0 replies · 0 quotes
Certainly, we can't interpret LLM outputs in terms of meaning, correctness, or wisdom the way we usually do when interacting with a person. I think we need more effort spent on educating the general public about these tools, and this article contributes one perspective.
02.12.2024 14:07 · 0 likes · 0 reposts · 1 reply · 0 quotes
Huge "foundation" models are the antithesis of a general problem-solving intelligence: in their solipsistic thinking only one perspective is pushed, while new discoveries are based on novel approaches to data.
01.12.2024 18:17 · 0 likes · 0 reposts · 0 replies · 0 quotes
Postdoctoral researcher @ KU Leuven
NeuroSymbolic AI & Knowledge Compilation
Assistant Professor at Imperial College London | EEE Department and I-X.
Neuro-symbolic AI, Safe AI, Generative Models
Previously: Post-doc at TU Wien, DPhil at the University of Oxford.
| PhD student @IMPRS-IS & Bosch Center for AI | Knowledge Graph | Large Language Model | Uncertainty |
https://zhuyuqicheng.github.io/
Professor of Computer Science. Author of Neural-Symbolic Cognitive Reasoning. Founder of Cognitive Intelligence, NeSy Association, NeSy conference series. Editor-in-chief Neurosymbolic AI journal.
Post-doc @ University of Trento. I did my PhD @ University of Trento and the University of Pisa. I like #concepts, #symbols, and #representations, but I still don't know what they are.
Trento, Italy
#identifiability, #shortcuts, #interpretability
PhD Student at the University of Amsterdam | Neurosymbolic AI.
https://erkankarabulut.github.io/
PhD student @ KU Leuven, DTAI lab
https://kostis-init.github.io/
19th International conference on Neurosymbolic Learning and Reasoning
UC Santa Cruz, Santa Cruz, California
8 to 10 September 2025
https://nesy-ai.org/
https://2025.nesyconf.org
postdoc @ ai lab, Vrije Universiteit Brussel
working on providing reliable and verifiable ai mechanisms
#RL & formal methods
delgrange.me
PhD researcher @ KU Leuven, member of LEMUR MSCA doctoral network
NeuroSymbolic AI + LLM
AZB -> U.Tokyo -> Neuro-Symbolic AI, RSM @MITIBMLab, @IBMResearch. Cycling/Gymkhana/Autox. My tweets don't represent the view of my organization. https://scholar.google.com/citations?user=b4UzH5 English tweets only. JP Tweets -> twitter.com/guicho271828
Ph.D. student in Artificial Intelligence at the University of Trento.
Postdoc at Amsterdam UMC, working on machine learning and graphs.
PostDoc Researcher. PhD in Computer Science @uniud. #NeuroSymbolicAI, #Data #science, and #AI applied to indoor #positioning and #healthcare.
Educator, Researcher, Entrepreneur.
https://www.linkedin.com/in/amitsheth/
http://amit.aiisc.ai
ML Scientist @CuspAI | Innovating for a greener future
Postdoc @rug.nl with Arianna Bisazza.
Interested in NLP, interpretability, syntax, language acquisition and typology.