Just under 10 days left to submit your latest endeavours in #tractable probabilistic models!
Join us at TPM @auai.org #UAI2025 and show how to build #neurosymbolic / #probabilistic AI that is both fast and trustworthy!
We developed a library to make logical reasoning embarrassingly parallel on the GPU.
For those at ICLR: you can get the juicy details tomorrow (poster #414 at 15:00). Hope to see you there!
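The juicy details are in the paper and poster; purely as an illustration of what "embarrassingly parallel" logical reasoning can mean (this is not the library's actual API), here is a PyTorch sketch that evaluates one propositional formula over a large batch of truth assignments using nothing but batched tensor operations:

```python
import torch

# Toy illustration: evaluate the formula (a AND b) OR (NOT c) for a million
# truth assignments in one shot. Every assignment is independent, so the whole
# batch maps onto elementwise tensor ops that run in parallel on the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
assignments = torch.randint(0, 2, (1_000_000, 3), device=device).bool()

a, b, c = assignments[:, 0], assignments[:, 1], assignments[:, 2]
result = (a & b) | ~c                 # one batched pass over all assignments

print(result.float().mean().item())   # fraction of satisfying assignments (~0.625)
```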
If you're at #AAAI2025, come check out our demo on neurosymbolic reinforcement learning with probabilistic logic shields! Tomorrow (Sat, March 1) from 12:30-2:30 PM during the poster session.
We all know backpropagation can calculate gradients, but it can do much more than that!
Come to my #AAAI2025 oral tomorrow (11:45, Room 119B) to learn more.
Can AI reason over time while following logical rules in relational domains? We will present Relational Neurosymbolic Markov Models (NeSy-MMs) next week at #AAAI2025!
Paper: arxiv.org/pdf/2412.13023
Code: github.com/ML-KULeuven/...
See you at #AAAI2025!
Site: dtai.cs.kuleuven.be/projects/nes...
Video: youtu.be/3uLVxwlcSQc?...
@daviddebot.bsky.social, @gabventurato.bsky.social, @giuseppemarra.bsky.social, @lucderaedt.bsky.social
#ReinforcementLearning #AI #MachineLearning #NeurosymbolicAI
(8/8)
Open-source & easy to use!
• Code: github.com/ML-KULeuven/...
• Based on MiniHack & Stable Baselines3
• Define new shields in just a few lines of code (illustrative sketch below)!
Let's make RL safer & smarter, together!
(7/8)
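To give a flavour of what a shield definition might look like, here is a hypothetical Python sketch; the function and observation names are made up for illustration and do not come from the demo repository:

```python
# Hypothetical sketch of "a new shield in a few lines"; the names below are
# illustrative and may differ from the actual demo code (which builds on
# MiniHack and Stable Baselines3).

def lava_shield(obs: dict, action: str) -> float:
    """Return the probability that `action` is safe given symbolic observations.

    The logic here is crisp (0 or 1); a probabilistic shield can also return
    intermediate values, e.g. when the sensors themselves are noisy.
    """
    moves_into_lava = (
        (action == "move_north" and obs["lava_north"]) or
        (action == "move_south" and obs["lava_south"])
    )
    return 0.0 if moves_into_lava else 1.0

# Toy check: with lava to the north, moving north is flagged as unsafe.
print(lava_shield({"lava_north": True, "lava_south": False}, "move_north"))  # 0.0
```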
Want to try it yourself?
Use our interactive web demo!
• Modify environments (add lava, monsters!)
• Test shielded vs. non-shielded agents
Play with it here: dtai.cs.kuleuven.be/projects/nes...
(6/8)
Why does this matter?
• Faster training
• Safer exploration
• Better generalization
(5/8)
How does it work?
The shield:
• Exploits symbolic data from sensors
• Uses logical rules
• Prevents unsafe actions
• Still allows flexible learning
A perfect blend of symbolic reasoning & deep learning! (See the sketch below.)
(4/8)
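In code, the core idea (a simplified sketch of Yang et al.'s probabilistic logic shielding, not the demo's exact implementation) is to rescale the policy by per-action safety probabilities derived from the rules and the symbolic sensor readings:

```python
import torch

def apply_shield(policy_probs: torch.Tensor, safety_probs: torch.Tensor) -> torch.Tensor:
    """Simplified probabilistic shielding.

    policy_probs : (num_actions,) distribution pi(a | s) from the neural policy
    safety_probs : (num_actions,) P(safe | s, a) derived from logic rules + sensor readings
    Returns a shielded distribution proportional to pi(a | s) * P(safe | s, a).
    """
    shielded = policy_probs * safety_probs
    return shielded / shielded.sum()

# Example: action 2 (e.g. stepping onto lava) is almost certainly unsafe,
# so its probability mass is redistributed to the safer actions.
pi = torch.tensor([0.5, 0.3, 0.2])
safe = torch.tensor([1.0, 0.9, 0.01])
print(apply_shield(pi, safe))   # ~[0.648, 0.350, 0.003]
```

Actions the rules deem unsafe get (near-)zero probability, while the remaining probability mass is left to the neural policy, so learning stays flexible.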
Enter MiniHack, our demo's testing ground!
There, RL agents face:
• Lava cliffs & slippery floors
• Chasing monsters
• Locked doors needing keys
Findings:
• Standard RL struggles to find an optimal, safe policy.
• Shielded RL agents stay safe & learn faster!
(3/8)
Deep RL is powerful, but...
• It can take dangerous actions
• It lacks safety guarantees
• It struggles with real-world constraints
Yang et al.'s probabilistic logic shields fix this, enforcing safety without breaking learning efficiency!
(2/8)
Do you care about safe AI? Do you want RL agents that are both smart & trustworthy?
At #AAAI2025, we present our demo for neurosymbolic RL, combining deep learning with probabilistic logic shields for safer, interpretable AI in complex environments.
(1/8)
A short overview video can be found on YouTube: youtu.be/CgSDhQKESD0?...
#NeurIPS2024
Or check out our Medium post: medium.com/@pyc.devteam... (7/7)
With CMR, we're reaching the sweet spot of accuracy and interpretability. Check it out at our poster at #NeurIPS2024! neurips.cc/virtual/2024... (6/7)
During training, CMR learns embeddings as latent representations of logic rules, and a neural rule selector identifies the most relevant rule for each instance. Due to a clever factorization and the rule selector, inference is linear in the number of concepts and rules. (5/7)
CMR makes a prediction in 3 steps:
1) Predict concepts from the input
2) Neurally select a rule from a memory of learned logic rules (accuracy)
3) Evaluate the selected rule with the concepts to make a final prediction (interpretability; see the sketch below) (4/7)
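To make the three steps concrete, here is a minimal, illustrative PyTorch sketch in the spirit of CMR; the module names, the three-way concept roles, and the soft rule evaluation are simplifications rather than the actual implementation:

```python
import torch
import torch.nn as nn

class TinyCMRSketch(nn.Module):
    """Illustrative 3-step forward pass in the spirit of CMR (not the real implementation)."""

    def __init__(self, in_dim: int, n_concepts: int, n_rules: int):
        super().__init__()
        self.concept_encoder = nn.Linear(in_dim, n_concepts)   # step 1: concepts
        self.rule_selector = nn.Linear(in_dim, n_rules)        # step 2: rule selection
        # Memory of soft rules: per rule and concept, a role in
        # {irrelevant, positive literal, negative literal}, stored as logits.
        self.rule_memory = nn.Parameter(torch.randn(n_rules, n_concepts, 3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        concepts = torch.sigmoid(self.concept_encoder(x))           # 1) predict concepts
        rule_probs = torch.softmax(self.rule_selector(x), dim=-1)   # 2) pick a rule
        roles = torch.softmax(self.rule_memory, dim=-1)             # (rules, concepts, 3)
        irrelevant, positive, negative = roles.unbind(dim=-1)
        # 3) evaluate each rule as a soft conjunction over concept literals
        literals = (irrelevant
                    + positive * concepts.unsqueeze(1)
                    + negative * (1 - concepts.unsqueeze(1)))        # (batch, rules, concepts)
        rule_values = literals.prod(dim=-1)                          # (batch, rules)
        # Expected prediction under the rule selector
        return (rule_probs * rule_values).sum(dim=-1)

model = TinyCMRSketch(in_dim=16, n_concepts=4, n_rules=8)
print(model(torch.randn(2, 16)).shape)   # torch.Size([2])
```

Because the mixture over rules is a plain weighted sum, the cost of a prediction grows linearly with the number of rules and concepts, matching the linear-time inference mentioned above.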
CMR has:
• State-of-the-art accuracy that rivals black-box models
• Pure probabilistic semantics with linear-time exact inference
• Transparent decision-making so human users can interpret model behavior
• Pre-deployment verifiability of model properties (3/7)
CMR is our latest neurosymbolic concept-based model. Provably a universal binary classifier irrespective of the concept set, CMR achieves near-black-box accuracy by combining rule learning and neural rule selection! (2/7)
Interpretable AI often means sacrificing accuracy, but what if we could have both? Most interpretable AI models, like Concept Bottleneck Models, force us to trade accuracy for interpretability.
But not anymore, thanks to the Concept-Based Memory Reasoner (CMR)! #NeurIPS2024 (1/7)