
Jake Browning

@jake-browning.bsky.social

Philosophy of AI and Mind, but with a historical bent. Baruch College. My dog is better than your dog. https://www.jacob-browning.com/

1,507 Followers  |  3,155 Following  |  24 Posts  |  Joined: 21.08.2023

Latest posts by jake-browning.bsky.social on Bluesky

Are LLMs the Epicycles of Intelligence? I’ve always been fascinated by AI and mega-projects — and as I work on AI infrastructure, you might assume I’m equally fascinated by the current LLM race. In reality, I’m far more skeptical than most....

Fascinating take on the current AI hyperbole… ashvardanian.com/posts/llm-ep...

10.10.2025 07:08 — 👍 8    🔁 4    💬 0    📌 1

Happy to share that our BBS target article has been accepted: “Core Perception”: Re-imagining Precocious Reasoning as Sophisticated Perceiving
With Alon Hafri, @veroniqueizard.bsky.social, @chazfirestone.bsky.social & Brent Strickland
Read it here: doi.org/10.1017/S014...
A short thread [1/5]👇

09.10.2025 15:51 — 👍 82    🔁 34    💬 3    📌 2

This is a big one! A 4-year writing project across many time zones, arguing for a reimagining of the influential "core knowledge" thesis.

Led by @daweibai.bsky.social, we argue that much of our innate knowledge of the world is not "conceptual" in nature, but rather wired into perceptual processing. 👇

09.10.2025 16:31 — 👍 116    🔁 46    💬 7    📌 6
Do AI Models Perform Human-like Abstract Reasoning Across Modalities? OpenAI's o3-preview reasoning model exceeded human accuracy on the ARC-AGI benchmark, but does that mean state-of-the-art models recognize and reason with the abstractions that the task creators inten...

Do AI reasoning models abstract and reason like humans?

New paper on this from my group:

arxiv.org/abs/2510.02125

🧵 1/10

06.10.2025 21:27 — 👍 84    🔁 24    💬 3    📌 1
Development of an Offline-Friend Addiction Questionnaire (O-FAQ): Are most people really social addicts? - Behavior Research Methods

A growing number of self-report measures aim to define interactions with social media in a pathological behavior framework, often using terminology focused on identifying those who are ‘addicted’ to engaging with others online. Specifically, measures of ‘social media addiction’ focus on motivations for online social information seeking, which could relate to motivations for offline social information seeking. However, it could be the case that these same measures could reveal a pattern of friend addiction in general. This study develops the Offline-Friend Addiction Questionnaire (O-FAQ) by re-wording items from highly cited pathological social media use scales to reflect “spending time with friends”. Our methodology for validation follows the current literature precedent in the development of social media ‘addiction’ scales. The O-FAQ had a three-factor solution in an exploratory sample of N = 807 and these factors were stable in a 4-week retest (r = .72 to .86) and was validated against personality traits, and risk-taking behavior, in conceptually plausible directions. Using the same polythetic classification techniques as pathological social media use studies, we were able to classify 69% of our sample as addicted to spending time with their friends. The discussion of our satirical research is a critical reflection on the role of measurement and human sociality in social media research. We question the extent to which connecting with others can be considered an ‘addiction’ and discuss issues concerning the validation of new ‘addiction’ measures without relevant medical constructs. Readers should approach our measure with a level of skepticism that should be afforded to current social media addiction measures.

Had missed this absolutely brilliant paper. They take a widely used social media addiction scale & replace 'social media' with 'friends'. The resulting scale has great psychometric properties & 69% of people have friend addictions.

link.springer.com/article/10.3...

01.10.2025 11:32 — 👍 135    🔁 43    💬 7    📌 3
Artificial Phantasia: Evidence for Propositional Reasoning-Based Mental Imagery in Large Language Models This study offers a novel approach for benchmarking complex cognitive behavior in artificial systems. Almost universally, Large Language Models (LLMs) perform best on tasks which may be included in th...

Imagine an apple 🍎. Is your mental image more like a picture or more like a thought? In a new preprint led by Morgan McCarty—our lab's wonderful RA—we develop a new approach to this old cognitive science question and find that LLMs excel at tasks thought to be solvable only via visual imagery. 🧵

01.10.2025 01:26 — 👍 112    🔁 36    💬 5    📌 8

Is there a book review of the recent Laurence and Margolis "Building Blocks of Thought"? Follow up: is anybody publishing Phil of mind reviews these days? I don't recall seeing any for Buckner, Burge or Shea, either.

27.09.2025 12:39 — 👍 0    🔁 0    💬 0    📌 0

ChatGPT is surprisingly bad at generating / explaining garden path sentences, and my students, who had a garden path question on their homework, will soon find that out 😅

24.09.2025 21:52 — 👍 26    🔁 3    💬 1    📌 1

Neurocognitive Foundations of Mind is out. Check it out.

19.09.2025 12:56 — 👍 7    🔁 2    💬 0    📌 0

Dear universe,

I've got great karma as a reviewer. I'd appreciate it if you'd reward me in this life.

Thanks.

18.09.2025 21:22 — 👍 3    🔁 0    💬 0    📌 0

I wrote a response to Thomas Friedman's "magical thinking" on AI here: aiguide.substack.com/p/magical-th...

15.09.2025 16:27 — 👍 97    🔁 52    💬 9    📌 9

A life well-lived--except for a foolish, multi-decade neck-beard that just didn't work.

13.09.2025 00:45 — 👍 0    🔁 0    💬 0    📌 0
The Language of Thought Hypothesis

Our entry (with @ericman.bsky.social ) for the Open Encyclopedia of Cognitive Science, “The Language of Thought Hypothesis”, is now out.

doi.org/10.21428/e27...

12.09.2025 19:06 — 👍 42    🔁 11    💬 1    📌 0

Just reading the Friedman AI articles in the NYT. There is *a lot* of magical thinking in them. E.g.:

"We discovered in the early 2020s that if you built a neural network big enough, combined it with strong enough A.I. software and enough electricity, A.I. would just emerge." (1/2)

08.09.2025 23:52 — 👍 73    🔁 11    💬 17    📌 3

It was both broader and weirder in those first two decades, too. Lots of pissing contests between behaviorists and the rest of psychology, plus nutty stuff about emergence, qualia, and the contested meaning of both "naive" and "realism." We've lost a lot since then.

08.09.2025 20:31 — 👍 2    🔁 0    💬 0    📌 0

The real philosophy journal scandal goes back to 1921 when The Journal of Philosophy, Psychology, and Scientific Methods shortened its name to The Journal of Philosophy so it was "more convenient for citation," which of course made it easier for Mind to drop psychology from its subtitle in 1974.

08.09.2025 20:13 — 👍 25    🔁 5    💬 1    📌 0
Court staff cover up Banksy image of judge beating a protester Artist’s latest work at Royal Courts of Justice in London is thought to refer to pro-Palestine demonstrations

They’re even censoring Banksy’s image of a protester!
www.theguardian.com/artanddesign...

08.09.2025 14:47 — 👍 19    🔁 10    💬 0    📌 0

Our new lab for Human & Machine Intelligence is officially open at Princeton University!

Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)

08.09.2025 13:59 — 👍 51    🔁 15    💬 2    📌 0
Google quietly removes net-zero carbon goal from website amid rapid power-hungry AI data center buildout — industry-first sustainability pledge moved to background amidst AI energy crisis Google's goal to be net-zero in carbon emissions by 2030 is still apparently company policy, it's just not broadcasting it anymore

ICYMI: Google removes net zero goal from website.

08.09.2025 08:53 — 👍 234    🔁 126    💬 7    📌 33

Here's my syllabus and reading list for an introductory, up-to-date course covering the philosophy, ethics, and politics of artificial intelligence: www.conspicuouscognition.com/p/philosophy....

08.09.2025 10:17 — 👍 42    🔁 9    💬 3    📌 0

I was profoundly saddened to hear that my friend Brian Cantwell Smith has passed away after a long illness. Brian's work on the foundations of computing and intelligence was deep and insightful. He was a remarkable person & overall mensch. May his memory be a blessing.

tinyurl.com/2u3fh42m

07.09.2025 19:17 — 👍 73    🔁 4    💬 4    📌 3
Philosophy of Artificial Intelligence: 10-Week Syllabus & Readings Can computers think and feel? Will "super-intelligent" machines cause human extinction? How will advances in AI transform democracy, society, the information environment, and human relationships?

Dan Williams' 10-week course on the philosophy of artificial intelligence looks like one I'd love to take
www.conspicuouscognition.com/p/philosophy...

07.09.2025 16:57 — 👍 6    🔁 3    💬 0    📌 0
Logo of the new Journal "Experimental Philosophy".

The new journal "Experimental Philosophy" is now on BlueSky.

@xphijournal.bsky.social

bsky.app/profile/xphi...

Please share!

06.09.2025 14:26 — 👍 19    🔁 16    💬 0    📌 1

Robert is a good guy, but he's a philosopher, not a neuroscientist. And more cog psych than neuro.

07.09.2025 01:00 — 👍 2    🔁 0    💬 1    📌 0
Humans Perceive Wrong Narratives from AI Reasoning Texts A new generation of AI models generates step-by-step reasoning text before producing an answer. This text appears to offer a human-readable window into their computation process, and is increasingly r...

When reading AI reasoning text (aka CoT), we (humans) form a narrative about the underlying computation process, which we take as a transparent explanation of model behavior. But what if our narratives are wrong? We measure this and find that they usually are.

Now on arXiv: arxiv.org/abs/2508.16599

27.08.2025 21:30 — 👍 85    🔁 22    💬 4    📌 2
Three things we learned about Sam Altman by scoping his kitchen All Drizzle, no sizzle

The FT has been consistently good on this stuff: the only outlet to really take CoreWeave seriously, with great reporting; probably my fav business outlet. The Altman oil piece by Bryce Elder is one of the most devastating things I've ever read.

www.ft.com/content/b180...

18.08.2025 06:19 — 👍 425    🔁 69    💬 20    📌 24
The top shows the title and authors of the paper: "Whither symbols in the era of advanced neural networks?" by Tom Griffiths, Brenden Lake, Tom McCoy, Ellie Pavlick, and Taylor Webb.

At the bottom is text saying "Modern neural networks display capacities traditionally believed to require symbolic systems. This motivates a re-assessment of the role of symbols in cognitive theories."

In the middle is a graphic illustrating this text by showing three capacities: compositionality, productivity, and inductive biases. For each one, there is an illustration of a neural network displaying it. For compositionality, the illustration is DALL-E 3 creating an image of a teddy bear skateboarding in Times Square. For productivity, the illustration is novel words produced by GPT-2: "IKEA-ness", "nonneotropical", "Brazilianisms", "quackdom", "Smurfverse". For inductive biases, the illustration is a graph showing that a meta-learned neural network can learn formal languages from a small number of examples.


🤖 🧠 NEW PAPER ON COGSCI & AI 🧠 🤖

Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning

So what role should symbols play in theories of the mind? For our answer...read on!

Paper: arxiv.org/abs/2508.05776

1/n

15.08.2025 16:27 — 👍 98    🔁 16    💬 8    📌 3
AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.

“The AI in the study probably prompted doctors to become over-reliant on its recommendations, ‘leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,’ the scientists said in the paper.”

12.08.2025 23:41 — 👍 5268    🔁 2587    💬 116    📌 543
Sam Altman and the whale The most interesting things happening right now in AI aren’t happening in chatbots.

While GPT-5 may make for a better experience than the previous versions, it isn’t something revolutionary. trib.al/bTzEh5r

11.08.2025 19:29 — 👍 17    🔁 4    💬 0    📌 1
