Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia
Heartening to see that every day more people are signing our Open Letter "Stop the Uncritical Adoption of AI Technologies in Academia". We are now at ~900 signatories. Consider reading and signing as well, and/or sharing. Help us get to 1,000! openletter.earth/open-letter-...
31.07.2025 19:47 · 16 likes · 7 reposts · 0 replies · 0 quotes
Who could have foreseen that the deskilling machine would lead to ... deskilling.
31.07.2025 07:36 · 83 likes · 18 reposts · 2 replies · 0 quotes
The reaction to this being almost entirely positive has me pleasantly surprised. I sometimes get so worried that such analyses aren't wanted anymore and that people just want to be left alone to gloat while cheating on coursework and rotting their brains with vibe coding... Thanks everyone.
31.07.2025 14:52 · 26 likes · 4 reposts · 3 replies · 0 quotes
Correlation is not cognition.[1] Stop with the nonsense.
Every day we slip further into the abyss. I often regret reading emails from other academics.
[1] Guest & @andreaeyleen.bsky.social (2023). On Logical Inference over Brains, Behaviour, and Artificial Neural Networks. doi.org/10.1007/s421...
31.07.2025 07:34 · 71 likes · 15 reposts · 3 replies · 1 quote
Incredibly important paper that interrogates "human-centered AI" from a cogsci perspective.
I'm also super interested in reading AI critiques from a symbolic systems or org studies sensemaking framework. Anybody got reading recs?
30.07.2025 15:33 · 8 likes · 6 reposts · 1 reply · 0 quotes
My goodness, the bibliography from Olivia Guest's paper is straight 🔥
I will be reading for months!
30.07.2025 15:57 · 9 likes · 2 reposts · 1 reply · 0 quotes
I'm in Vienna for ACL 2025 right now, but I think I'll spend today reading this fantastic-looking paper:
30.07.2025 09:33 · 9 likes · 3 reposts · 1 reply · 0 quotes
A common gotcha is "but Olivia, proofs don't capture reality", which I find honestly so beautiful because that's THE point. A formal system, a proof, maths, code, 💯 CANNOT solve reality, the frame problem, human cognition. So their gut tells them exactly the answer. Lean into it! That's exactly it.
30.07.2025 09:01 · 26 likes · 8 reposts · 2 replies · 0 quotes
we (@irisvanrooij.bsky.social and co-authors), as do many, many others, also show it in: Modern Alchemy: Neurocognitive Reverse Engineering (philsci-archive.pitt.edu/25289) and Reclaiming AI as a theoretical tool for cognitive science (doi.org/10.1007/s421...). Also see: bsky.app/profile/oliv...
12/n
30.07.2025 09:01 · 14 likes · 3 reposts · 1 reply · 1 quote
that claims that machines can think are nonsense AND can also, formally & otherwise, be shown to be nonsense IF you take those fields seriously. Gödel proved it, Whitehead & Russell proved it, the frame problem captures it, and my current favourite, Prigogine & Stengers, show it in Order out of Chaos, and 11/n
30.07.2025 09:01 · 21 likes · 4 reposts · 2 replies · 0 quotes
I feel I repeat myself, but sadly it bears repeating that it's not that the critique of machines is somehow only possible with computer & cognitive science degrees (they can help, tho they can also make you impervious to further analyses) BUT 10/n
30.07.2025 09:01 · 15 likes · 4 reposts · 1 reply · 0 quotes
allowing such views to pass uncritically. The core traps here with AI are what @irisvanrooij.bsky.social and I outline (based on our previous work) in a forthcoming short publication, distilled into the table below: bsky.app/profile/iris...
30.07.2025 06:28 · 24 likes · 6 reposts · 1 reply · 0 quotes
are what my field is uniquely poised to critique even if it also creates/condones (sadly) these very same positions. Notwithstanding this, many fall prey to this form of deeply problematic thinking inside my own field, people who should not only know better, but also be at the forefront of not 8/n
30.07.2025 06:26 · 14 likes · 2 reposts · 1 reply · 0 quotes
What I mean by this is NOT that cognitive scientists are uniquely relevant, but that statements consistent with correlationism (arxiv.org/pdf/2507.19960 & philsci-archive.pitt.edu/25289), naive computationalism (see: philsci-archive.pitt.edu/24834), and modern connectionism (doi.org/10.31234/osf...)
30.07.2025 06:23 · 13 likes · 2 reposts · 1 reply · 2 quotes
I also realised that in many of these discussions, the cognitive is disregarded (accidentally or otherwise) because, e.g., the people stating their otherwise expert opinions aren't cognitive computational scientists (which is highly relevant if we discuss human-like machines as thinking). 6/n
30.07.2025 06:21 · 15 likes · 2 reposts · 1 reply · 0 quotes
extract from page 12 https://arxiv.org/pdf/2507.19960
Something I hinged on to get to what I describe: the Marxian fetishisation of artefacts is so complete in the case of AI that not only do we somehow conclude that machines think, but we accept that they think, speak, and draw instead of us, while also thinking these are (expressions of) our thoughts.
30.07.2025 06:18 · 39 likes · 7 reposts · 1 reply · 0 quotes
Table 1. The two steps required for the proposed redefinition of AI. At the top is step 1, where we decide whether a relationship exists between a technology and human cognition. This relationship, represented by the blue-green column between Machine and Human on row 1a, is AI. In 1b are terminological examples, both non-diagnostic on their own and incomplete as a list, that can aid in the diagnosis of a sociotechnical relationship as one of AI. The three columns below in step 2 represent three, not mutually exclusive, types of sociotechnical relationship between humans and artifacts. At this step, we sketch out if AI replaces, enhances, or displaces cognition (row 2a), with relevant properties and their typical values, non-exhaustively specified, listed on rows 2b–h.
In this paper, @olivia.science Radically Redefines AI as any relationship between humans and artifacts "where it appears as if cognitive labour is offloaded onto such artifacts". She distinguishes 3 types of relationship: AI that "replaces, enhances, or displaces cognition".
See Table 1 below.
2/n
29.07.2025 18:44 · 30 likes · 9 reposts · 2 replies · 0 quotes
Abstract:
While it seems sensible that human-centred artificial intelligence (AI) means centring "human behaviour and experience," it cannot be any other way. AI, I argue, is usefully seen as a relationship between technology and humans where it appears that artifacts can perform, to a greater or lesser extent, human cognitive labour. This is evinced using examples that juxtapose technology with cognition, inter alia: abacus versus mental arithmetic; alarm clock versus knocker-upper; camera versus vision; and sweatshop versus tailor. Using novel definitions and analyses, sociotechnical relationships can be analysed into varying types of: displacement (harmful), enhancement (beneficial), and/or replacement (neutral) of human cognitive labour. Ultimately, all AI implicates human cognition; no matter what. Obfuscation of cognition in the AI context, from clocks to artificial neural networks, results in distortion, in slowing critical engagement, perverting cognitive science, and indeed in limiting our ability to truly centre humans and humanity in the engineering of AI systems. To even begin to de-fetishise AI, we must look the human-in-the-loop in the eyes.
Keywords: artificial intelligence; cognitive science; sociotechnical relationship; cognitive labour; artificial neural network; technology; cognition; human-centred AI
Just out! A tour de force by my colleague @olivia.science, new paper:
What Does 'Human-Centred AI' Mean? 🧮 ⏰ 🧠
Keywords: AI; cognitive science; sociotechnical relationship; cognitive labour; ANN; technology; cognition; human-centred AI
Link to the paper on arXiv: lnkd.in/e9nHGkMK 1/n
29.07.2025 18:31 · 72 likes · 33 reposts · 3 replies · 3 quotes
YouTube video by CSER Cambridge
Professor Margaret Boden - Human-level AI: Is it Looming or Illusory?
@shaneir.bsky.social I have only just noticed that Margaret Boden died a few days ago at age 88.
In the mid-70s, her AI books were required reading for those of us working on various UK computing research projects.
Ten years ago she gave this presentation.
www.youtube.com/watch?v=wPRA...
29.07.2025 10:01 · 22 likes · 8 reposts · 2 replies · 2 quotes
"To even begin to de-fetishise AI, we must look the human-in-the-loop in the eyes."
🔥🔥🔥
29.07.2025 20:05 · 47 likes · 13 reposts · 2 replies · 0 quotes
title and abstract from https://arxiv.org/pdf/2507.19960
table 1 from https://arxiv.org/pdf/2507.19960
Boiling here at home in Cyprus but I put the finishing touches a couple of days ago on this preprint: What Does 'Human-Centred AI' Mean? doi.org/10.48550/arX...
Wherein I analyse HCAI & demonstrate through 3 triplets my new tripartite definition of AI (Table 1) that properly centres the human. 1/n
29.07.2025 11:52 · 138 likes · 46 reposts · 6 replies · 9 quotes
table 2 from https://arxiv.org/pdf/2507.19960
table 3 from https://arxiv.org/pdf/2507.19960
table 4 from https://arxiv.org/pdf/2507.19960
I split AI into 3 non-mutually exclusive types (see Table 1 above): displacement (harmful), enhancement (beneficial), and/or replacement (neutral) of human cognitive labour. More later possibly, but see Tables 2 to 4 (attached or here: arxiv.org/pdf/2507.19960) for the worked-through examples. 2/n
29.07.2025 11:52 · 47 likes · 8 reposts · 4 replies · 5 quotes
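As a rough illustration of this tripartite split, here is a minimal sketch in Python; the class and field names, and the tags in the toy example, are illustrative assumptions rather than anything specified in the paper. Step 1 asks whether cognitive labour appears to be offloaded onto an artefact at all (if so, the relationship is AI); step 2 tags that relationship with any combination of displacement (harmful), enhancement (beneficial), and replacement (neutral).

from dataclasses import dataclass
from enum import Flag, auto


class CognitiveRelation(Flag):
    # Step 2: the three non-mutually exclusive types of sociotechnical relationship.
    DISPLACEMENT = auto()  # harmful: cognitive labour is pushed out
    ENHANCEMENT = auto()   # beneficial: cognitive labour is supported
    REPLACEMENT = auto()   # neutral: cognitive labour is substituted


@dataclass
class Artefact:
    name: str
    # Step 1: does it appear that cognitive labour is offloaded onto this artefact?
    offloads_cognitive_labour: bool
    # Step 2: zero or more relation types, combinable because they are not mutually exclusive.
    relations: CognitiveRelation = CognitiveRelation(0)

    def is_ai(self) -> bool:
        # Under the proposed redefinition, the relationship counts as AI whenever
        # cognitive labour appears to be offloaded onto the artefact.
        return self.offloads_cognitive_labour


# Toy example loosely echoing the abacus-versus-mental-arithmetic juxtaposition;
# the tags chosen here are guesses, not the paper's worked-through analysis.
abacus = Artefact(
    name="abacus",
    offloads_cognitive_labour=True,
    relations=CognitiveRelation.ENHANCEMENT | CognitiveRelation.REPLACEMENT,
)
print(abacus.is_ai(), abacus.relations)

Using Flag keeps the three types combinable, matching the non-mutually exclusive framing; what tags actually apply in a given case still has to come from the kind of analysis worked through in Tables 2 to 4.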
Hopefully this is a useful way of discussing AI. Currently we're mired in terminological disarray: terms like agentic, generative, and so on fail to capture what we want to say about AI & in fact subserve industry hype. Hence I propose this analytical tool for discerning AI's properties. TTFN! 3/n
29.07.2025 11:52 · 37 likes · 2 reposts · 1 reply · 1 quote
"These 'AI' products are materially and psychologically detrimental to our students' ability to write and think for themselves, existing instead for the benefit of investors and multinational companies."
27.07.2025 06:41 · 12 likes · 5 reposts · 0 replies · 0 quotes
Excellent statement. Hope you sign.
27.07.2025 14:24 · 11 likes · 6 reposts · 1 reply · 0 quotes
She was a legend. I compulsively, compulsorily, and committedly cite her.
28.07.2025 20:06 · 30 likes · 8 reposts · 1 reply · 1 quote
Sad to hear that Margaret Boden, pioneer in cognitive science and artificial intelligence, has passed away.
Just a few days ago, someone reactivated the post below. I warmly recommend watching the video.
Thank you Margaret for founding and shaping our field.
www.sussex.ac.uk/broadcast/re...
28.07.2025 16:28 · 169 likes · 55 reposts · 5 replies · 4 quotes
Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia
"The signatories of the open letter believe that AI use in the classroom or lecture hall should be banned for assignments given to pupils and students."
Read the open letter in full here. You can still sign!
openletter.earth/open-letter-...
23.07.2025 13:56 · 10 likes · 4 reposts · 1 reply · 0 quotes
Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia
Uncritical adoption of AI "undermines our basic pedagogical values and principles of scientific integrity. It prevents us from maintaining our standards of independence & transparency. And most concerning, AI use … hinder[s] learning and deskill[s] critical thought." openletter.earth/open-letter-...
26.07.2025 19:08 · 125 likes · 58 reposts · 2 replies · 10 quotes
Recovering behavioral scientist, posing as a statistical consultant. Applied stats #RStats, neurodiversity, learning Ukrainian, writing things
Self-funded PhD student (reading history in XVIII New Spain) & paid researcher (open science, research integrity, academic health systems) at Leiden University. Baking & reading aficionada. Managed by a Timneh. ORCID 0000-0002-5676-2122
Computer Scientist @utwente.bsky.social #LogicBasedAI #MultiAgentSystems | Intimate Computing #values #vulnerability https://intimate-computing.net | Disability | Towards a caring and inspiring digital society
Philosopher of science (neuroscience). Experiment, science & values, neuroethics, and feminist philosophy of science. University of Central Oklahoma
Statistics, cognitive modelling, and other sundry things. Mastodon: @richarddmorey@tech.lgbt
[I deleted my twitter account]
also @mattsiegel@mastodon.social | programmer | autistic | he/they | interests include arts, science, culture, "old fashioned" a.i., ecology
Principal Researcher @ Microsoft Research.
Cognitive computational neuroscience & AI.
Writer. Nature wanderer.
www.momen-nejad.org
PI + parent = professional cat-herder • inclusiveness • he/him • studying the neuroscience of language at Northeastern University
Textbook: The Neuroscience of Language (Cambridge University Press)
http://jonathanpeelle.net/the-neuroscience-of-language
Former Academic. Neuroscience/Psychology. Retired.
Interests: Recording and mixing music; The Beatles; currently learning to play drums.
tomhartley.me.uk
Professor, Political Science | Syracuse University, Maxwell School | American politics, political psychology | Co-author of Anxious Politics and Pandemic Politics
At the CMDN Lab (@uni-hamburg.de), we study decision-making and how it is influenced by attention, learning and memory.
PI: @sgluth.bsky.social
Account managed by: @jennamarch.bsky.social
MSc Cognitive Sciences: Cognition and the Mind
Faculty of Humanities and Social Sciences, University of Rijeka
Integrate | Together | Openly
More details at https://cogsci.uniri.hr/
Leader of the Communication in Social Interaction (CoSI) research lab (Donders Institute for Brain, Cognition and Behaviour & Max Planck Institute for Psycholinguistics) https://www.cosilab.nl/ https://www.mpi.nl/people/holler-judith https://www.ru.nl/en/p
Philosophy and Psychology professor in Cincinnati. Embodied cognition, AI, social cognition, phenomenology, critical theory. (he/him)
Our mission is to raise awareness of queer issues in AI, foster a community of queer researchers and celebrate the work of queer scientists. More about us: queerinai.com
Rubicon research fellow at the University of Cambridge. Drinking massive amounts of tea and doing some research in between. Learning, information-seeking, cognitive and brain development. Comp modelling enthusiast. He/him.
https://francescpoli.github.io/
Assoc. Prof. Learning Sciences, Harvard GSE. Study learning in early childhood using computational modeling & empirical studies. Speaking for self only. She/her
Developmental cognitive scientist. Assistant Professor at Vanderbilt University. Co-host of The It's Innate! Podcast. PI of the Computational Cognitive Development Lab. Dad. Husband. Human. (he/him/his)
Scientist, Inventor, author of the NTQR Python package for AI safety through formal verification of unsupervised evaluations. On a mission to eliminate Majority Voting from AI systems. E Pluribus Unum.
Personal Account
Founder: The Distributed AI Research Institute @dairinstitute.bsky.social.
Author: The View from Somewhere, a memoir & manifesto arguing for a technological future that serves our communities (to be published by One Signal / Atria