Fascinating take on the current AI hyperbole… ashvardanian.com/posts/llm-ep...
10.10.2025 07:08 — 👍 8 🔁 4 💬 0 📌 1
@jake-browning.bsky.social
Philosophy of AI and Mind, but with a historical bent. Baruch College. My dog is better than your dog. https://www.jacob-browning.com/
Happy to share that our BBS target article has been accepted: “Core Perception”: Re-imagining Precocious Reasoning as Sophisticated Perceiving
With Alon Hafri, @veroniqueizard.bsky.social, @chazfirestone.bsky.social & Brent Strickland
Read it here: doi.org/10.1017/S014...
A short thread [1/5]👇
This is a big one! A 4-year writing project over many timezones, arguing for a reimagining of the influential "core knowledge" thesis.
Led by @daweibai.bsky.social, we argue that much of our innate knowledge of the world is not "conceptual" in nature, but rather wired into perceptual processing. 👇
Do AI reasoning models abstract and reason like humans?
New paper on this from my group:
arxiv.org/abs/2510.02125
🧵 1/10
Had missed this absolutely brilliant paper. They take a widely used social media addiction scale & replace 'social media' with 'friends'. The resulting scale has great psychometric properties & 69% of people have friend addictions.
link.springer.com/article/10.3...
Imagine an apple 🍎. Is your mental image more like a picture or more like a thought? In a new preprint led by Morgan McCarty—our lab's wonderful RA—we develop a new approach to this old cognitive science question and find that LLMs excel at tasks thought to be solvable only via visual imagery. 🧵
01.10.2025 01:26 — 👍 112 🔁 36 💬 5 📌 8
Is there a book review of the recent Laurence and Margolis "Building Blocks of Thought"? Follow-up: is anybody publishing phil of mind reviews these days? I don't recall seeing any for Buckner, Burge, or Shea, either.
27.09.2025 12:39 — 👍 0 🔁 0 💬 0 📌 0
ChatGPT is surprisingly bad at generating / explaining garden path sentences, and my students, who had a garden path question on their homework, will soon find that out 😅
24.09.2025 21:52 — 👍 26 🔁 3 💬 1 📌 1
Neurocognitive Foundations of Mind is out. Check it out.
19.09.2025 12:56 — 👍 7 🔁 2 💬 0 📌 0
Dear universe,
I've got great karma as a reviewer. I'd appreciate it if you'd reward me in this life.
Thanks.
I wrote a response to Thomas Friedman's "magical thinking" on AI here: aiguide.substack.com/p/magical-th...
15.09.2025 16:27 — 👍 97 🔁 52 💬 9 📌 9
A life well-lived--except for a foolish, multi-decade neck-beard that just didn't work.
13.09.2025 00:45 — 👍 0 🔁 0 💬 0 📌 0
Our entry (with @ericman.bsky.social ) for the Open Encyclopedia of Cognitive Science, “The Language of Thought Hypothesis”, is now out.
doi.org/10.21428/e27...
Just reading the Friedman AI articles in the NYT. There is *a lot* of magical thinking in them. E.g.:
"We discovered in the early 2020s that if you built a neural network big enough, combined it with strong enough A.I. software and enough electricity, A.I. would just emerge." (1/2)
It was both broader and weirder in those first two decades, too. Lots of pissing contests between behaviorists and the rest of psychology, plus nutty stuff about emergence, qualia, and the contested meaning of both "naive" and "realism." We've lost a lot since then.
08.09.2025 20:31 — 👍 2 🔁 0 💬 0 📌 0
The real philosophy journal scandal goes back to 1921, when The Journal of Philosophy, Psychology, and Scientific Methods shortened its name to The Journal of Philosophy so it was "more convenient for citation," which of course made it easier for Mind to drop psychology from its subtitle in 1974.
08.09.2025 20:13 — 👍 25 🔁 5 💬 1 📌 0
They’re even censoring Banksy’s image of a protester!
www.theguardian.com/artanddesign...
Our new lab for Human & Machine Intelligence is officially open at Princeton University!
Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)
Here's my syllabus and reading list for an introductory, up-to-date course covering the philosophy, ethics, and politics of artificial intelligence: www.conspicuouscognition.com/p/philosophy....
08.09.2025 10:17 — 👍 42 🔁 9 💬 3 📌 0
I was profoundly saddened to hear that my friend Brian Cantwell Smith has passed away after a long illness. Brian's work on the foundations of computing and intelligence was deep and insightful. He was a remarkable person & overall mensch. May his memory be a blessing.
tinyurl.com/2u3fh42m
Dan Williams' 10-week course on the philosophy of artificial intelligence looks like one I'd love to take
www.conspicuouscognition.com/p/philosophy...
Logo of the new Journal "Experimental Philosophy".
The new journal "Experimental Philosophy" is now on BlueSky.
@xphijournal.bsky.social
bsky.app/profile/xphi...
Please share!
Robert is a good guy, but he's a philosopher, not a neuroscientist. And more cog psych than neuro.
07.09.2025 01:00 — 👍 2 🔁 0 💬 1 📌 0
When reading AI reasoning text (aka CoT), we (humans) form a narrative about the underlying computation process, which we take as a transparent explanation of model behavior. But what if our narratives are wrong? We measure that and find that they usually are.
Now on arXiv: arxiv.org/abs/2508.16599
The FT has been consistently good on this stuff, the only outlet to really take CoreWeave seriously, with great reporting; probably my fav business outlet. The Altman oil piece by Bryce Elder is one of the most devastating things I've ever read.
www.ft.com/content/b180...
The top shows the title and authors of the paper: "Whither symbols in the era of advanced neural networks?" by Tom Griffiths, Brenden Lake, Tom McCoy, Ellie Pavlick, and Taylor Webb. At the bottom is text saying "Modern neural networks display capacities traditionally believed to require symbolic systems. This motivates a re-assessment of the role of symbols in cognitive theories." In the middle is a graphic illustrating this text by showing three capacities: compositionality, productivity, and inductive biases. For each one, there is an illustration of a neural network displaying it. For compositionality, the illustration is DALL-E 3 creating an image of a teddy bear skateboarding in Times Square. For productivity, the illustration is novel words produced by GPT-2: "IKEA-ness", "nonneotropical", "Brazilianisms", "quackdom", "Smurfverse". For inductive biases, the illustration is a graph showing that a meta-learned neural network can learn formal languages from a small number of examples.
🤖 🧠 NEW PAPER ON COGSCI & AI 🧠 🤖
Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning
So what role should symbols play in theories of the mind? For our answer...read on!
Paper: arxiv.org/abs/2508.05776
1/n
“The AI in the study probably prompted doctors to become over-reliant on its recommendations, ‘leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,’ the scientists said in the paper.”
12.08.2025 23:41 — 👍 5268 🔁 2587 💬 116 📌 543
While GPT-5 may make for a better experience than the previous versions, it isn’t something revolutionary. trib.al/bTzEh5r
11.08.2025 19:29 — 👍 17 🔁 4 💬 0 📌 1