more here olivia.science/ai#activism
09.02.2026 09:44 — 👍 3 🔁 2 💬 0 📌 0
@compcogsci.bsky.social
Account of the Computational Cognitive Science Lab at Donders Institute, Radboud University
weekly reminder; happy Monday
Have you considered
NOT using
AI?
AI "impedes [theory because we're] interested in human-understandable theory and theory-based models, not statistical models which provide only a representation of the data. Scientific theories and models are only useful if [we understand them and] they connect transparently to research questions."
08.02.2026 07:33 — 👍 62 🔁 18 💬 5 📌 4
Statement from the organizers:
“We are a group of scholars committed to diversity, critical thinking, decoloniality, respect of expertise, slow science, and conceptual clarity values in academia. We strongly encourage people from underrepresented groups to apply”.
☀️ Summer School 📚
“Critical AI Literacies for Resisting and Reclaiming”
Organisers and teachers:
👉 @marentierra.bsky.social
👉 @olivia.science
👉 myself
Deadline for application:
🐦 31 March 2026 (early bird fee)
1/🧵
www.ru.nl/en/education...
Looking forward to a relevant discussion with the @compcogsci.bsky.social lab ☺️
08.02.2026 21:05 — 👍 7 🔁 2 💬 0 📌 0
Figure 1: The important dimensions of CAILs across research and education; clockwise from 12 o’clock: Conceptual Clarity is the idea that terms should refer. Critical Thinking is deep engagement with the relationships between statements about the world. Decoloniality is the process of de-centring and addressing dominant harmful views and practices. Respecting Expertise is the epistemic compact between professionals and society. Slow Science is a disposition towards preferring psychologically, techno-socially, and epistemically healthy practices. The lines between dimensions represent how they are interwoven both directly and indirectly.
📝 Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. doi.org/10.5281/zeno... ✨
Thread with my favourite quotes 👇
1/🧵
Dimension 2:
CRITICAL THINKING 🤔
“Critical Thinking is deep engagement with the relationships between statements about the world.”
Guest, Suarez, & van Rooij (2025). Towards Critical Artificial Intelligence Literacies. doi.org/10.5281/zeno...
12/🧵
Cover of textbook: Blokpoel and van Rooij (2021-2025), Theoretical Modeling for Cognitive Science and Psychology.
Mark Blokpoel and I maintain a living, open, interactive textbook, “Theoretical Modeling for Cognitive Science and Psychology.”
Recently, Mark updated Chapters 9 & 10 so they again have embedded, runnable, and editable code.
Check it out! ✨
computationalcognitivescience.github.io/lovelace/
"Great man theorising requires the (re)orientation of the theory to direct all credit to one person (or a biased subset of a select few) reminiscent of monarchy—far from the pluralistic or meritocratic facade science often hides behind"
doi.org/10.1007/s421...
But I'll end the thread with something positive.
This preprint discusses how AI can support work in psychology:
📄 van Rooij, I., & Guest, O. (2025). Combining Psychology with Artificial Intelligence: What could possibly go wrong?
🔗 philpapers.org/rec/VANCPW
Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Relevant reading 1 📖 ✨
Guest, O., Suarez, M., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. doi.org/10.5281/zeno...
By @olivia.science @marentierra.bsky.social @altibel.bsky.social @lucyavraamidou.bsky.social @jedbrown.org @felienne.bsky.social, me & others
4/🧵
In our position paper we looked at how the core principles of the Netherlands Code of Conduct for #ResearchIntegrity apply to AI usage. @olivia.science @irisvanrooij.bsky.social @jedbrown.org @marentierra.bsky.social @lucyavraamidou.bsky.social @felienne.bsky.social
doi.org/10.5281/zeno...
✨ Summer School: Critical AI Literacies for Resisting and Reclaiming 📚
👉 For many more relevant resources for Critical AI literacy, check out this website maintained by @olivia.science with videos, news, opinion pieces, blogs, articles, posters, and more. 👇
olivia.science/ai
6/🧵
Figure 1: The important dimensions of CAILs across research and education; clockwise from 12 o’clock: Conceptual Clarity is the idea that terms should refer. Critical Thinking is deep engagement with the relationships between statements about the world. Decoloniality is the process of de-centring and addressing dominant harmful views and practices. Respecting Expertise is the epistemic compact between professionals and society. Slow Science is a disposition towards preferring psychologically, techno-socially, and epistemically healthy practices. The lines between dimensions represent how they are interwoven both directly and indirectly.
Relevant reading 2 📖 ✨
Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. doi.org/10.5281/zeno...
By @olivia.science @marentierra.bsky.social and me.
5/🧵
Poster showing a robot, saying: Does AI threaten research integrity?
30th January 2026, 10:30-16:30 (Eleanor Rathbone Building 2.62 & online)
Session 1 (11:00-12:00): The Metascience of Peer Review. Guest speaker: Tom Stafford
Session 2 (13:00-14:30): Risks of Chatbots in Online Surveys. Guest speakers: Phoebe Wallman, Alexandra Lautarescu
Session 3 (15:00-16:30): The Uncritical Adoption of AI Technologies in Academia. Guest speakers: Olivia Guest, Iris van Rooij
Sign up here: https://www.ticketsource.co.uk/ukrn-liverpool/does-ai-threaten-research-integrity/2026-01-30/10:00/t-yzqrxyr
Interested in discussing how Artificial Intelligence can be helpful or hurtful to academia? 🤖
As a newly minted UKRN (@ukrepro.bsky.social) Local Network Lead, I'm co-organizing a hybrid event about this next Friday (30th January)
Join us at @livunipsyc.bsky.social or online via Teams!
From the moment we know a technology steals/obfuscates people's labour to perform task X, it matters 0 how effective the people's labour you steal to do X is. They will likely get better at X over time and you will get better at stealing. Nothing of this justifies the theft. bsky.app/profile/oliv...
19.01.2026 07:04 — 👍 76 🔁 23 💬 2 📌 3
I spent the whole day reading the paper 'What Does "Human-Centred AI" Mean?' by @olivia.science. I sincerely think it is the best paper I've ever read. Worth the time and highly recommended!
26.01.2026 19:35 — 👍 13 🔁 4 💬 1 📌 0
Great interview with @lucyavraamidou.bsky.social in a Greek Cypriot newspaper, titled:
Οι μεγάλες εταιρείες ΑΙ αποτελούν ένα σύγχρονο παράδειγμα τεχνοφασισμού και συγκεντρωτισμού
“The big AI companies are a modern example of technofascism and centralisation”
www.philenews.com/politismos/p...
Monthly reminder
26.01.2026 03:25 — 👍 33 🔁 11 💬 0 📌 1
PhD here answering the call. Resist the slop!!
eternalscientistmusings.wordpress.com/2025/12/03/r...
Do you want to be able to demystify AI and recognize hyped claims? Do you worry about the social and environmental harms of AI? Do you struggle with how to protect and foster your students’ skills in the age of AI? Do you want to develop your own skills in resisting AI and critical AI literacy?
2/🧵
Learning objectives
1. Gain conceptual clarity and a deeper understanding of ‘AI’, its various forms and meanings
2. Understand how the science of human cognition can help resist AI hype
3. Critically uncover pseudoscientific claims about AI
4. Apply critical intersectional theories (racism, sexism, ableism, and more) to analyse social harms such as the exploitation of labor behind AI technologies
5. Critically grasp the relationship between AI technologies, sustainability, and environmental devastation
6. Perform a critical AI literacy project to start developing ways of resisting AI in a specific context
☀️ 📚
Summer School open for applications! www.ru.nl/en/education...
"This course is designed to foster critical AI literacies in participants to empower them to develop ways of resisting or reclaiming AI in their own practices & social context”
Learning goals 👇
3/🧵
Read this article! Truly important considerations!!
18.01.2026 20:31 — 👍 22 🔁 7 💬 0 📌 0
When the rejection of AI includes AI researchers, you know that phrases like "adapt or die, Luddite" don't work.
18.01.2026 15:46 — 👍 59 🔁 21 💬 0 📌 0
Excellent stuff; also, if you need more, or something at a university level: bsky.app/profile/oliv...
15.01.2026 07:34 — 👍 7 🔁 3 💬 0 📌 0
Enjoyed writing this short but sweet [although not necessarily in message] piece for @projectsyndicate.bsky.social with @irisvanrooij.bsky.social:
> While the AI industry claims its models can “think,” “reason,” and “learn,” their supposed achievements rest on marketing hype & stolen intellectual labor
🧵
Tech companies want us to outsource all cognitive labor to their models. Instead, academics must defend universities by barring toxic, addictive AI technologies from classrooms, argue @olivia.science and @irisvanrooij.bsky.social .
bit.ly/48FNcJj
(Frankly speaking, any European company deepening its dependency on US tech, be it cloud or confabulation machinery, is acting unwisely.
We need to divest from the technology provided by a country that very obviously wants to be our enemy.)
Weekend reading!
10.01.2026 10:56 — 👍 16 🔁 6 💬 1 📌 0