The human ability to abstract applies not just to language but across all the subjects we reason about.
AI won't reach its potential until we learn to blend symbolic and causal capabilities with the statistical pattern matching that powers today's LLMs.
#AI #NeurosymbolicAI #CausalAI
22.06.2025 22:41 – 👍 0 🔁 0 💬 0 📌 0
Instead of relying only on patterns in input, humans:
+ Form internal, rule-based models of language structure (e.g. grammar, syntax).
+ Infer underlying rules even when they're not explicitly taught.
+ Use these abstractions to generalize beyond what they've directly heard.
2/n
22.06.2025 22:40 – 👍 0 🔁 0 💬 1 📌 0
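The bullet points above can be sketched as a toy contrast between pure pattern lookup and an induced rule (illustrative code only; the tiny vocabulary and the Noun + Verb "grammar" are invented for the example, not taken from the thread):

```python
# Toy contrast: memorized patterns vs. a rule-based model.
# The rule learner induces an abstract Noun + Verb structure from
# what it has "heard" and generalizes to unseen sentences;
# the lookup table cannot.
heard = {"dogs bark", "cats purr"}

nouns = {s.split()[0] for s in heard}   # {"dogs", "cats"}
verbs = {s.split()[1] for s in heard}   # {"bark", "purr"}

def pattern_match(sentence: str) -> bool:
    """Accept only sentences seen verbatim."""
    return sentence in heard

def rule_based(sentence: str) -> bool:
    """Accept any sentence fitting the induced Noun + Verb rule."""
    n, v = sentence.split()
    return n in nouns and v in verbs

print(pattern_match("dogs purr"))  # False: never heard verbatim
print(rule_based("dogs purr"))     # True: follows the abstract rule
```

The point of the sketch: the rule-based learner accepts novel combinations it was never exposed to, which is the "generalize beyond what they've directly heard" step.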
Table comparing human and LLM language learning patterns
LLMs lack the "theoretical abstraction" capability we see in children.
Multiple folks have pointed this out, for example @teppofelin.bsky.social and Holweg in "Theory Is All You Need: AI, Human Cognition, and Causal Reasoning". papers.ssrn.com/sol3/papers....
#AI #CausalAI #SymbolicAI
1/n
22.06.2025 22:39 – 👍 2 🔁 0 💬 1 📌 0
Cleaning Up Policy Sludge: An AI Statutory Research System | Stanford HAI
This brief introduces a novel AI tool that performs statutory surveys to help governments, such as the San Francisco City Attorney's Office, identify policy sludge and accelerate legal reform.
Legal reform can get bogged down by outdated or cumbersome regulations. Our latest brief with Stanford RegLab scholars presents an AI tool that helps governments, such as the San Francisco City Attorney's Office, identify and eliminate such "policy sludge." hai.stanford.edu/policy/clean...
19.06.2025 16:50 – 👍 5 🔁 1 💬 0 📌 0
Enjoying the graphic.
On your list of ways to address these concerns, where would you put implementing neurosymbolic AI?
Seems to me that combining deep learning (LLMs) with symbolic/causal models could go a long way toward creating more reliable, auditable, and aligned AI.
#AI
19.06.2025 15:12 – 👍 0 🔁 0 💬 1 📌 0
@jwmason.bsky.social The other thing worth knowing is that bigger LLMs are not the only path forward for AI. Combining LLMs with symbolic/causal models has the promise of creating hybrid AI systems that are much more reliable in reflecting the world as it is.
#AI #SymbolicAI #CausalAI
19.06.2025 15:06 – 👍 1 🔁 0 💬 0 📌 0
LLMs form a semi-accurate representation of the world as it is reflected in the writing they train on. A next step would be to create hybrid AIs that combine LLMs with symbolic and causal models that have explicit (and more accurate/auditable) representations of the world. #AI #SymbolicAI #CausalAI
19.06.2025 14:57 – 👍 1 🔁 0 💬 0 📌 0
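A minimal sketch of what such a hybrid could look like, assuming the simplest possible setup: `llm_answer` is an invented stand-in for a real model call, and the "explicit world model" is just exact arithmetic evaluated by auditable rules.

```python
# Hybrid "LLM + symbolic model" loop (illustrative sketch).
# The statistical component proposes an answer; the symbolic
# component re-derives it with explicit rules and acts as arbiter.
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def symbolic_eval(expr: str):
    """Evaluate arithmetic with explicit, auditable rules."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported construct")
    return walk(ast.parse(expr, mode="eval").body)

def llm_answer(expr: str) -> float:
    """Stand-in for a statistical model: plausible but unverified."""
    return 56.0  # imagine the model guessed wrong for "6 * 9 + 3"

def hybrid_answer(expr: str):
    guess = llm_answer(expr)
    truth = symbolic_eval(expr)   # explicit world model as the arbiter
    return guess if guess == truth else truth

print(hybrid_answer("6 * 9 + 3"))  # prints 57
```

The design point is that the symbolic layer's representation is inspectable: every accepted answer has a derivation you can audit, which is exactly what a pattern-matching component lacks on its own.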
The opportunity here is for us to perfect hybrid systems that integrate deep learning with symbolic reasoning and causal understanding. This will reduce our dependence on filtering out bad consequences by making models inherently more reliable.
#AI #CausalAI #SymbolicAI
18.06.2025 19:49 – 👍 1 🔁 0 💬 1 📌 0
Progress on the "control layer" feels far behind our breakthroughs with the "genie".
Having the control layer be a smart filter on the input and output is helpful, but in the end it seems fundamentally wrongheaded.
2/n
18.06.2025 19:48 – 👍 0 🔁 0 💬 1 📌 0
"Wishes have consequences. Especially when they run in production." - ain't that a fact!
Cassie Kozyrkov's genie metaphor rings true:
1/n
18.06.2025 19:48 – 👍 0 🔁 0 💬 1 📌 0
First, through a think-aloud study (N=16) in which participants use ChatGPT to answer objective questions, we identify 3 features of LLM responses that shape users' reliance: #explanations (supporting details for answers), #inconsistencies in explanations, and #sources.
2/7
28.02.2025 15:21 – 👍 3 🔁 1 💬 1 📌 0
@rohanpaul.bsky.social this post is feeling lonely ;-)
Why not cross post on both X and Bluesky?
18.06.2025 00:14 – 👍 0 🔁 0 💬 0 📌 0
Table contrasting symbolic reasoning and causal models as next steps in AI evolution.
#SymbolicAI and #CausalAI companions in search for next #AI breakthrough
17.06.2025 23:51 – 👍 1 🔁 0 💬 0 📌 0
When are AI/ML models unlikely to help with decision-making? | Statistical Modeling, Causal Inference, and Social Science
@jessicahullman.bsky.social persuasively argues that current AI is a poor tool for decisions that fit the FIRE profile (forward-looking, individual/idiosyncratic, requiring reasoning or experimentation/intervention)
Hmm … does this call for #CausalAI?
statmodeling.stat.columbia.edu/2025/06/05/w...
17.06.2025 22:35 – 👍 1 🔁 0 💬 0 📌 0
For example, Apple's approach of having the model call back into app code as it reasons, and its support for multi-layered guardrails, illustrate what the company has learned about needing components with use-case-specific checks and balances.
#PervasiveAI #AgenticAI #AppleAI #AISafety
2/n
17.06.2025 22:22 – 👍 0 🔁 0 💬 0 📌 0
Takeaways from Apple opening up its on-device LLM:
1) Medium term, "Pervasive AI" will have more reach than "Agentic AI"
2) AI is best implemented through systems of components, not a single black-box neural net
3) Use-case-specific adjustment is needed to balance latency, cost, reliability, and safety
1/n
17.06.2025 22:21 – 👍 1 🔁 0 💬 1 📌 0
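Takeaways 2 and 3 can be sketched as a tiny component pipeline (purely illustrative: every function name here is invented for the sketch, and none of this is Apple's actual API):

```python
# Sketch of "AI as a system of components" with multi-layered,
# use-case-specific guardrails around a model that calls back
# into app code. All names are hypothetical.

def input_guardrail(prompt: str) -> str:
    """Layer 1: use-case-specific input check."""
    banned = {"ssn", "password"}
    if any(tok in prompt.lower() for tok in banned):
        raise ValueError("blocked at input layer")
    return prompt

def model(prompt: str, tools: dict) -> str:
    """Stand-in for the on-device LLM: as it 'reasons' it calls
    back into app code instead of acting as one black box."""
    free = tools["calendar_free_slots"]("2025-06-18")
    return f"You have {free} free slots on 2025-06-18."

def output_guardrail(text: str) -> str:
    """Layer 2: use-case-specific output check."""
    assert len(text) < 500, "blocked at output layer"
    return text

def run(prompt: str) -> str:
    tools = {"calendar_free_slots": lambda day: 3}  # app callback
    return output_guardrail(model(input_guardrail(prompt), tools))

print(run("When am I free tomorrow?"))
# prints: You have 3 free slots on 2025-06-18.
```

The structure, not the toy logic, is the point: checks live at each boundary, and the model is one component among several rather than the whole system.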
Director of Research, DAIR
Roller derby athlete
https://alex-hanna.com
Book: thecon.ai
Pod+newsletter: https://dair-institute.org/maiht3k
🇪🇬 🏳️‍⚧️ She/هي
๐ธ @willtoft.bsky.social
Rep๐ @ianbonaparte.bsky.social
Book: https://thecon.ai
Web: https://faculty.washington.edu/ebender
ion foundation endowed prof | university of utah | cognition, AI, generative rationality, theory-based view, causal reasoning, economics, strategy
Associate Professor of Philosophy: ethics, social-political philosophy, cognitive science, philosophy of AI, mind, and metaphysics
Computer Scientist, SAP Expert, IT Manager, Consultant, Independent AI Researcher, AI Engineer | AI = Deep Learning + Causal Inference + Symbol Manipulation
Junior Fellow in AI @wimmics, @univcotedazur.bsky.social. Knowledge Graphs, Semantic Web, Neuro-Symbolic AI. Spokesperson @afiainfo.bsky.social
@piermonn@sigmoid.social
Nucleoid is open source #NeuroSymbolic #AI with Knowledge Graph | Reasoning Engine 🌿🌱 Star us ⭐ https://github.com/NucleoidAI/Nucleoid
I believe in self sovereign data #ssi , data economy and decentralized tech. I am democratizing personal and privacy-first #ai at mykin.ai
Assistant Professor at Imperial College London | EEE Department and I-X.
Neuro-symbolic AI, Safe AI, Generative Models
Previously: Post-doc at TU Wien, DPhil at the University of Oxford.
human being | assoc prof in #ML #AI #Edinburgh | PI of #APRIL | #reliable #probabilistic #models #tractable #generative #neuro #symbolic | heretical empiricist | he/him
https://april-tools.github.io
Science, AI, Tech. #Neurosymbolic #AI #innovation PhD from Imperial College; MBA from MIT. Professor, researcher, manager, dad. WA, MA, UK, BR.
Associate professor of economics, John Jay College-CUNY, senior fellow at the Groundwork Collaborative. Blog and other writing: jwmason.org. Study economics with me: https://johnjayeconomics.org. Anti-war Keynesian, liberal socialist, Brooklyn dad.
Visual Investigations at The New York Times
CEO, AI Advisor, Keynote Speaker, fmr Chief Decision Scientist at Google
Newsletter: decision.substack.com
Assistant Professor of Computer Science at CU Boulder
👩‍💻 NLP, cultural analytics
https://maria-antoniak.github.io
Previously: Pioneer Centre for AI in Copenhagen, Ai2, Microsoft Research, Twitter, Facebook, Cornell, UW
Assistant Professor of Computer Science at Princeton | #HCI #AR #VR #SpatialComputing
parastooabtahi.com
hci.social/@parastoo
Workshop on Visualization for AI Explainability at IEEE VIS.
visxai.io
Assistant Professor in CS: researching ML/AI in sociotechnical systems & teaching Data Science and Dev tools with an emphasis on responsible computing
New Englander, NSBE lifetime member
profile pic: me in a purplish sweater with math vaguely on the w
Associate Professor at the UW iSchool, Co-Founder UW Center for an Informed Public | PhD in Sociology from UC Irvine | Research: social networks, sociology, information integrity & computational social science.
Asst Prof at Cornell Info Sci and Cornell Tech. Responsible AI
https://angelina-wang.github.io/