
Computational Cognitive Science

@compcogsci.bsky.social

Account of the Computational Cognitive Science Lab at Donders Institute, Radboud University

308 Followers  |  160 Following  |  13 Posts  |  Joined: 24.12.2024

Latest posts by compcogsci.bsky.social on Bluesky

Critical AI: On this page are some resources for Critical AI Literacy (CAIL) from my perspective.

more here olivia.science/ai#activism

09.02.2026 09:44 — 👍 3    🔁 2    💬 0    📌 0

weekly reminder; happy Monday

Have you considered
NOT using
AI?

09.02.2026 09:44 — 👍 10    🔁 4    💬 2    📌 0

AI "impedes [theory because we're] interested in human-understandable theory and theory-based models, not statistical models which provide only a representation of the data. Scientific theories and models are only useful if [we understand them and] they connect transparently to research questions."

08.02.2026 07:33 — 👍 62    🔁 18    💬 5    📌 4

Statement from the organizers:

“We are a group of scholars committed to diversity, critical thinking, decoloniality, respect of expertise, slow science, and conceptual clarity values in academia. We strongly encourage people from underrepresented groups to apply”.

05.02.2026 23:12 — 👍 23    🔁 13    💬 2    📌 0
Critical AI Literacies for Resisting and Reclaiming | Radboud University: This course is designed to foster critical AI literacies in participants to empower them to develop ways of resisting or reclaiming AI in their own practices and social context.

☀️ Summer School 📚

“Critical AI Literacies for Resisting and Reclaiming”

Organisers and teachers:
👉 @marentierra.bsky.social
👉 @olivia.science
👉 myself

Deadline for application:
🐦 31 March 2026 (early bird fee)

1/🧵

www.ru.nl/en/education...

16.01.2026 20:36 — 👍 99    🔁 57    💬 2    📌 3

Looking forward to a relevant discussion with the @compcogsci.bsky.social lab ☺️

08.02.2026 21:05 — 👍 7    🔁 2    💬 0    📌 0
Figure 1: The important dimensions of CAILs across research and education; clockwise from 12 o’clock: Conceptual Clarity is the idea that terms should refer. Critical Thinking is deep engagement with the relationships between statements about the world. Decoloniality is the process of de-centring and addressing dominant harmful views and practices. Respecting Expertise is the epistemic compact between professionals and society. Slow Science is a disposition towards preferring psychologically, techno-socially, and epistemically healthy practices. The lines between dimensions represent how they are interwoven both directly and indirectly.

📝 Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. doi.org/10.5281/zeno... ✨

Thread with my favourite quotes 👇

1/🧵

07.12.2025 17:41 — 👍 113    🔁 36    💬 5    📌 5
Alt: a square with a small square in the middle and the words “critical thinking” written above it

Dimension 2:

CRITICAL THINKING 🤔

“Critical Thinking is deep engagement with the relationships between statements about the world.”

Guest, Suarez, & van Rooij (2025). Towards Critical Artificial Intelligence Literacies. doi.org/10.5281/zeno...

12/🧵

07.12.2025 19:17 — 👍 14    🔁 5    💬 1    📌 2
Cover of textbook: Blokpoel and van Rooij (2021-2025) Theoretical modeling for cognitive science and psychology.

Mark Blokpoel & I maintain a living, open, interactive textbook, “Theoretical Modeling for Cognitive Science and Psychology.”

Recently, Mark updated Ch 9 & 10 so they have embedded, running, and editable code again.

Check it out! ✨

computationalcognitivescience.github.io/lovelace/

27.12.2025 20:42 — 👍 68    🔁 24    💬 3    📌 2

"Great man theorising requires the (re)orientation of the theory to direct all credit to one person (or a biased subset of a select few) reminiscent of monarchy—far from the pluralistic or meritocratic facade science often hides behind"

doi.org/10.1007/s421...

30.12.2025 20:46 — 👍 36    🔁 12    💬 1    📌 1

But I'll end the thread with something positive.

This preprint discusses how AI can support work in the field of psychology:

📄 van Rooij, I., & Guest, O. (2025). Combining Psychology with Artificial Intelligence: What could possibly go wrong?

🔗 philpapers.org/rec/VANCPW

10.01.2026 11:56 — 👍 12    🔁 2    💬 0    📌 0
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
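To make the set relations in this caption concrete, here is a minimal, illustrative sketch (not from the paper) that models the caption's taxonomy with plain Python sets; it uses only systems named in the caption, and any placement the caption does not state explicitly (e.g. BERT as an LLM) is an assumption.

```python
# Illustrative sketch only: the caption's set-theoretic view of 'AI' terms,
# modelled with plain Python sets. Placements not stated in the caption are
# assumptions, not a definitive taxonomy.

anns = {"BERT", "AlexNet", "GAN", "BM"}            # artificial neural networks (magenta)
generative_models = {"GAN", "BM"}                  # generative models (blue)
chatbots = {"ELIZA", "A.L.I.C.E.", "Jabberwacky"}  # chatbots (green)
llms = {"BERT"}                                    # large language models (orange); assumed placement

# The superset 'AI' (black outline, hatched background) contains all of the above.
ai = llms | anns | generative_models | chatbots

# Overlaps correspond to the mixed colours in the figure: GAN and BM fall in
# the purple region because they are both generative models and ANNs.
assert generative_models & anns == {"GAN", "BM"}

# Proprietary, closed-source systems (e.g. ChatGPT, Siri) cannot be verified,
# so per the caption their placement inside 'AI' is only an educated guess.
print(sorted(ai))
```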

Relevant reading 1 📖 ✨

Guest, O., Suarez, M., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. doi.org/10.5281/zeno...

By @olivia.science @marentierra.bsky.social @altibel.bsky.social @lucyavraamidou.bsky.social @jedbrown.org @felienne.bsky.social, me & others

4/🧵

16.01.2026 20:58 — 👍 29    🔁 7    💬 2    📌 0
Against the Uncritical Adoption of 'AI' Technologies in Academia: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these col...

In our position paper we looked at how the core principles of the Netherlands Code of Conduct for #ResearchIntegrity apply to AI usage. @olivia.science @irisvanrooij.bsky.social @jedbrown.org @marentierra.bsky.social @lucyavraamidou.bsky.social @felienne.bsky.social
doi.org/10.5281/zeno...

22.01.2026 20:56 — 👍 24    🔁 8    💬 1    📌 0
Critical AI: On this page are some resources for Critical AI Literacy (CAIL) from my perspective.

✨ Summer School: Critical AI Literacies for Resisting and Reclaiming 📚

👉 For many more relevant resources for Critical AI literacy, check out this website maintained by @olivia.science with videos, news, opinion pieces, blogs, articles, posters, and more. 👇

olivia.science/ai

6/🧵

16.01.2026 22:13 — 👍 38    🔁 12    💬 1    📌 1
Figure 1: The important dimensions of CAILs across research and education; clockwise from 12 o’clock: Conceptual Clarity is the idea that terms should refer. Critical Thinking is deep engagement with the relationships between statements about the world. Decoloniality is the process of de-centring and addressing dominant harmful views and practices. Respecting Expertise is the epistemic compact between professionals and society. Slow Science is a disposition towards preferring psychologically, techno-socially, and epistemically healthy practices. The lines between dimensions represent how they are interwoven both directly and indirectly.

Relevant reading 2 📖 ✨

Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. doi.org/10.5281/zeno...

By @olivia.science @marentierra.bsky.social and me.

5/🧵

16.01.2026 21:00 — 👍 16    🔁 3    💬 1    📌 0
Poster showing a robot, saying:

Does AI threaten research integrity?

30th January 2026 10:30-16:30 (Eleanor Rathbone Building 2.62 & online)

Session 1 (11:00-12:00): The Metascience of Peer Review - Guest speaker: Tom Stafford

Session 2 (13:00-14:30): Risks of Chatbots in Online Surveys - Guest speakers: Phoebe Wallman, Alexandra Lautarescu

Session 3 (15:00-16:30): The Uncritical Adoption of AI Technologies in Academia - Guest speakers: Olivia Guest, Iris van Rooij

Sign up here: https://www.ticketsource.co.uk/ukrn-liverpool/does-ai-threaten-research-integrity/2026-01-30/10:00/t-yzqrxyr


Interested in discussing how Artificial Intelligence can be helpful or hurtful to academia? 🤖

As a newly minted UKRN (@ukrepro.bsky.social) Local Network Lead, I'm co-organizing a hybrid event about this next Friday (30th January)

Join us at @livunipsyc.bsky.social or online via Teams!

22.01.2026 17:39 — 👍 51    🔁 25    💬 2    📌 4

From the moment we know a technology steals/obfuscates people's labour to perform task X, it matters 0 how effective the people's labour you steal to do X is. They will likely get better at X over time and you will get better at stealing. Nothing of this justifies the theft. bsky.app/profile/oliv...

19.01.2026 07:04 — 👍 76    🔁 23    💬 2    📌 3

I spent the whole day reading the paper "What Does 'Human-Centred AI' Mean?" by @olivia.science. I sincerely think it is the best paper I've ever read. Worth the time and highly recommended!

26.01.2026 19:35 — 👍 13    🔁 4    💬 1    📌 0
Lucy Avraamidou: The big AI companies are a modern example of technofascism and centralisation. How does social media "rot" the human brain? What are the effects on mental health? What can artificial...

Great interview with @lucyavraamidou.bsky.social in a Greek Cypriot newspaper, titled:

Οι μεγάλες εταιρείες ΑΙ αποτελούν ένα σύγχρονο παράδειγμα τεχνοφασισμού και συγκεντρωτισμού

The big AI companies are a modern example of technofascism and centralisation

www.philenews.com/politismos/p...

26.01.2026 09:11 — 👍 37    🔁 14    💬 2    📌 0

Monthly reminder

26.01.2026 03:25 — 👍 33    🔁 11    💬 0    📌 1

PhD here answering the call. Resist the slop!!

eternalscientistmusings.wordpress.com/2025/12/03/r...

22.01.2026 12:52 — 👍 42    🔁 11    💬 2    📌 1
Critical AI Literacies for Resisting and Reclaiming | Radboud University: This course is designed to foster critical AI literacies in participants to empower them to develop ways of resisting or reclaiming AI in their own practices and social context.

Do you want to be able to demystify AI and recognize hyped claims? Do you worry about the social and environmental harms of AI? Do you struggle with how to protect and foster your students’ skills in the age of AI? Do you want to develop your own skills in resisting AI and critical AI literacy?

2/🧵

16.01.2026 20:38 — 👍 48    🔁 17    💬 1    📌 0
Learning objectives

1 Gain conceptual clarity and deeper understanding of ‘AI’, its various forms and meanings

2 Understand how the science of human cognition can help resist AI hype

3 Critically uncover pseudoscientific claims about AI

4 Analyse critical intersectional theories (racism, sexism, ableism and more) to analyse social harms such as the exploitation of labor behind AI technologies

5 Critically grasp the relationship between AI technologies, sustainability, and environmental devastation

6 Perform a critical AI literacy project to start ways of resisting AI in a specific context


☀️ 📚

Summer School open for applications! www.ru.nl/en/education...

"This course is designed to foster critical AI literacies in participants to empower them to develop ways of resisting or reclaiming AI in their own practices & social context”

Learning goals 👇

3/🧵

16.01.2026 20:49 — 👍 36    🔁 15    💬 2    📌 0

Read this article! Truly important considerations!!

18.01.2026 20:31 — 👍 22    🔁 7    💬 0    📌 0

When the rejection of AI includes AI researchers, you know that phrases like "adapt or die, Luddite" don't work.

18.01.2026 15:46 — 👍 59    🔁 21    💬 0    📌 0

excellent stuff, also if you need more or at a uni-level bsky.app/profile/oliv...

15.01.2026 07:34 — 👍 7    🔁 3    💬 0    📌 0

Enjoyed writing this short but sweet [altho not necessarily in message] piece for @projectsyndicate.bsky.social w @irisvanrooij.bsky.social:
> While the AI industry claims its models can “think,” “reason,” and “learn,” their supposed achievements rest on marketing hype & stolen intellectual labor

🧵

17.10.2025 15:08 — 👍 59    🔁 29    💬 1    📌 6
AI Is Hollowing Out Higher Education: Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.

Tech companies want us to outsource all cognitive labor to their models. Instead, academics must defend universities by barring toxic, addictive AI technologies from classrooms, argue @olivia.science and @irisvanrooij.bsky.social .
bit.ly/48FNcJj

17.10.2025 14:38 — 👍 104    🔁 44    💬 0    📌 12

(frankly speaking, any European company deepening its dependency on US tech, be it cloud or confabulation machinery, is acting unwisely.

we need to divest of the technology provided by a country that very obviously wants to be our enemy.)

12.01.2026 10:41 — 👍 14    🔁 3    💬 0    📌 0

Weekend reading!

10.01.2026 10:56 — 👍 16    🔁 6    💬 1    📌 0
