
Computational Cognitive Science

@compcogsci.bsky.social

Account of the Computational Cognitive Science Lab at Donders Institute, Radboud University

250 Followers  |  148 Following  |  9 Posts  |  Joined: 24.12.2024

Latest posts by compcogsci.bsky.social on Bluesky

Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia

Heartening to see that every day more people are signing our Open Letter "Stop the Uncritical Adoption of AI Technologies in Academia". We are now at ~900 signatories. Consider reading and signing it as well, and/or sharing. Help us get to 1,000! openletter.earth/open-letter-...

31.07.2025 19:47 – 👍 16    🔁 7    💬 0    📌 0

Who could have foreseen that the deskilling machine would lead to ... deskilling.

31.07.2025 07:36 – 👍 83    🔁 18    💬 2    📌 0

The reaction to this being almost entirely positive has me pleasantly surprised. I sometimes get so worried that such analyses aren't wanted anymore and that people just want to be left alone to gloat while cheating on coursework and rotting their brains with vibe coding... Thanks, everyone. 💖

31.07.2025 14:52 – 👍 26    🔁 4    💬 3    📌 0

Correlation is not cognition.[1] Stop with the nonsense.

Every day we slip further into the abyss. I often regret reading emails from other academics.

[1] Guest & @andreaeyleen.bsky.social (2023). On Logical Inference over Brains, Behaviour, and Artificial Neural Networks. doi.org/10.1007/s421...

31.07.2025 07:34 – 👍 71    🔁 15    💬 3    📌 1

Incredibly important paper that interrogates "human-centered AI" from a cogsci perspective.

I'm also super interested in reading AI critiques from a symbolic systems or org studies sensemaking framework. Anybody got reading recs?

30.07.2025 15:33 – 👍 8    🔁 6    💬 1    📌 0

My goodness, the bibliography from Olivia Guest's paper is straight 🔥

I will be reading for months!

30.07.2025 15:57 – 👍 9    🔁 2    💬 1    📌 0

I'm in Vienna for ACL 2025 right now, but I think I'll spend today reading this fantastic-looking paper:

30.07.2025 09:33 – 👍 9    🔁 3    💬 1    📌 0

A common gotcha is "but Olivia, proofs don't capture reality", which I find honestly so beautiful because that's THE point. A formal system, a proof, maths, code, 💯 CANNOT solve reality, the frame problem, human cognition – so their gut tells them exactly the answer. Lean into it! That's exactly it.

30.07.2025 09:01 – 👍 26    🔁 8    💬 2    📌 0

We (@irisvanrooij.bsky.social and co-authors), as many, many others have, also show it in: Modern Alchemy: Neurocognitive Reverse Engineering (philsci-archive.pitt.edu/25289) and Reclaiming AI as a theoretical tool for cognitive science (doi.org/10.1007/s421...). Also see: bsky.app/profile/oliv...
12/n

30.07.2025 09:01 – 👍 14    🔁 3    💬 1    📌 1

that claims that machines can think are nonsense ALSO can formally & otherwise be shown to be nonsense IF you take those fields seriously. Gödel proved it, Whitehead & Russell proved it, the frame problem captures it, and my current favourite, Prigogine & Stengers, show it in Order out of Chaos, and 11/n

30.07.2025 09:01 – 👍 21    🔁 4    💬 2    📌 0

I feel I repeat myself, but sadly it bears repeating: it's not that the critique of machines is somehow only possible with computer & cognitive science degrees (they can help, though they can also make you impervious to further analyses) BUT 10/n

30.07.2025 09:01 – 👍 15    🔁 4    💬 1    📌 0

allowing such views to pass uncritically. The core traps here with AI are what @irisvanrooij.bsky.social and I outline (based on our previous work) in a forthcoming short publication, distilled into the table below: bsky.app/profile/iris...

30.07.2025 06:28 – 👍 24    🔁 6    💬 1    📌 0

are what my field is uniquely poised to critique even if it also creates/condones (sadly) these very same positions. Notwithstanding this, many fall prey to this form of deeply problematic thinking inside my own field, people who should not only know better, but also be at the forefront of not 8/n

30.07.2025 06:26 – 👍 14    🔁 2    💬 1    📌 0

What I mean by this is NOT that cognitive scientists are uniquely relevant, but that statements consistent with correlationism (arxiv.org/pdf/2507.19960 & philsci-archive.pitt.edu/25289), naive computationalism (see: philsci-archive.pitt.edu/24834), and modern connectionism (doi.org/10.31234/osf...)

30.07.2025 06:23 – 👍 13    🔁 2    💬 1    📌 2

I also realised that in many of these discussions, the cognitive is disregarded (accidentally or otherwise) because, e.g., the people stating their otherwise expert opinions aren't cognitive computational scientists (which is highly relevant if we discuss human-like machines as thinking). 6/n

30.07.2025 06:21 – 👍 15    🔁 2    💬 1    📌 0
extract from page 12 of https://arxiv.org/pdf/2507.19960

Something I hinged on to get to what I describe here: the Marxian fetishisation of artefacts is so complete in the case of AI that not only do we somehow conclude machines think, but we accept them thinking, speaking, drawing instead of us, while also believing these are (expressions of) our thoughts.

30.07.2025 06:18 – 👍 39    🔁 7    💬 1    📌 0
Table 1. The two steps required for the proposed redefinition of AI. At the top is step 1, where we decide whether a relationship exists between a technology and human cognition. This relationship, represented by the blue-green column between Machine and Human on row 1a, is AI. In 1b are terminological examples, both non-diagnostic on their own and incomplete as a list, that can aid in the diagnosis of a sociotechnical relationship as one of AI. The three columns below in step 2 represent three, not mutually exclusive, types of sociotechnical relationship between humans and artifacts. At this step, we sketch out if AI replaces, enhances, or displaces cognition (row 2a) – with relevant properties and their typical values, non-exhaustively specified, listed on rows 2b–h.

In this paper, @olivia.science radically redefines AI as any relationship between humans and artifacts "where it appears as if cognitive labour is offloaded onto such artifacts". She distinguishes three types of relationship: AI that "replaces, enhances, or displaces cognition".

See Table 1 below.

2/n

29.07.2025 18:44 – 👍 30    🔁 9    💬 2    📌 0
Abstract:

While it seems sensible that human-centred artificial intelligence (AI) means centring "human behaviour and experience," it cannot be any other way. AI, I argue, is usefully seen as a relationship between technology and humans where it appears that artifacts can perform, to a greater or lesser extent, human cognitive labour. This is evinced using examples that juxtapose technology with cognition, inter alia: abacus versus mental arithmetic; alarm clock versus knocker-upper; camera versus vision; and sweatshop versus tailor. Using novel definitions and analyses, sociotechnical relationships can be analysed into varying types of: displacement (harmful), enhancement (beneficial), and/or replacement (neutral) of human cognitive labour. Ultimately, all AI implicates human cognition; no matter what. Obfuscation of cognition in the AI context – from clocks to artificial neural networks – results in distortion, in slowing critical engagement, perverting cognitive science, and indeed in limiting our ability to truly centre humans and humanity in the engineering of AI systems. To even begin to de-fetishise AI, we must look the human-in-the-loop in the eyes.

Keywords: artificial intelligence; cognitive science; sociotechnical relationship; cognitive labour; artificial neural network; technology; cognition; human-centred AI

💫 Just out! A tour de force by my colleague @olivia.science, new paper 📝:

What Does 'Human-Centred AI' Mean? 🧮 ⏰ 🧠

Keywords: AI; cognitive science; sociotechnical relationship; cognitive labour; ANN; technology; cognition; human-centred AI

Link to the paper on arXiv: lnkd.in/e9nHGkMK 1/n

29.07.2025 18:31 – 👍 72    🔁 33    💬 3    📌 3
Why AI Shouldn't Be the Future of Academia. Personal Perspective: Artificial intelligence in academia appears to be inevitable – but it isn't, and it's worth deep consideration of what its use means for research, teaching, and scholarship.

Why AI Shouldn't Be the Future of Academia | Psychology Today -- by @shawpsych.bsky.social

www.psychologytoday.com/us/blog/how-...

29.07.2025 19:08 – 👍 22    🔁 9    💬 0    📌 1
Professor Margaret Boden - Human-level AI: Is it Looming or Illusory? (YouTube video by CSER Cambridge)

@shaneir.bsky.social I have only just noticed that Margaret Boden died a few days ago at age 88.

In the mid-70s, her AI books were required reading for those of us working on various UK computing research projects.

Ten years ago she gave this presentation.

www.youtube.com/watch?v=wPRA...

29.07.2025 10:01 – 👍 22    🔁 8    💬 2    📌 2

"To even begin to de-fetishise AI, we must look the human-in-the-loop in the eyes."

🔥🔥🔥

29.07.2025 20:05 – 👍 47    🔁 13    💬 2    📌 0
title and abstract from https://arxiv.org/pdf/2507.19960

table 1 from https://arxiv.org/pdf/2507.19960

Boiling here at home in Cyprus but I put the finishing touches a couple of days ago on this preprint: What Does 'Human-Centred AI' Mean? doi.org/10.48550/arX...

Wherein I analyse HCAI & demonstrate through 3 triplets my new tripartite definition of AI (Table 1) that properly centres the human. 1/n

29.07.2025 11:52 – 👍 138    🔁 46    💬 6    📌 9
table 2 from https://arxiv.org/pdf/2507.19960

table 3 from https://arxiv.org/pdf/2507.19960

table 4 from https://arxiv.org/pdf/2507.19960

I split AI into 3 non-mutually-exclusive types (see Table 1 above): displacement (harmful), enhancement (beneficial), and/or replacement (neutral) of human cognitive labour. More later possibly, but see Tables 2 to 4 (attached, or here: arxiv.org/pdf/2507.19960) for the worked-through examples. 2/n

29.07.2025 11:52 – 👍 47    🔁 8    💬 4    📌 5

Hopefully this is a useful way of discussing AI. Currently we're mired in terminological disarray – terms like agentic, generative, and so on fail to capture what we want to say about AI & in fact subserve industry hype. Hence I propose this analytical tool for discerning AI's properties. TTFN! 3/n

29.07.2025 11:52 – 👍 37    🔁 2    💬 1    📌 1

"These 'AI' products are materially and psychologically detrimental to our students' ability to write and think for themselves, existing instead for the benefit of investors and multinational companies."

27.07.2025 06:41 – 👍 12    🔁 5    💬 0    📌 0

Excellent statement. Hope you sign.

27.07.2025 14:24 – 👍 11    🔁 6    💬 1    📌 0

She was a legend. I compulsively, compulsorily, and committedly cite her.

28.07.2025 20:06 – 👍 30    🔁 8    💬 1    📌 1

Sad to hear that Margaret Boden, pioneer in cognitive science and artificial intelligence, has passed away.

Just a few days ago, someone reactivated the post below. I warmly recommend watching the video.

Thank you Margaret for founding and shaping our field.

www.sussex.ac.uk/broadcast/re...

28.07.2025 16:28 – 👍 169    🔁 55    💬 5    📌 4
Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia

"The signatories of the open letter believe that the use of AI in the classroom or lecture hall should be banned for assignments given to pupils and students."

Read the open letter in full here. You can still sign!

openletter.earth/open-letter-...

23.07.2025 13:56 – 👍 10    🔁 4    💬 1    📌 0
Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia

Uncritical adoption of AI "undermines our basic pedagogical values and principles of scientific integrity. It prevents us from maintaining our standards of independence & transparency. And most concerning, AI use … hinder[s] learning and deskill critical thought." openletter.earth/open-letter-...

26.07.2025 19:08 – 👍 125    🔁 58    💬 2    📌 10
