
Caspar van Lissa ๐ŸŸฅ

@cjvanlissa.bsky.social

Associate professor of social data science at Tilburg University, chair of the Open Science Community Tilburg, board member of Tilburg Young Academy

1,133 Followers  |  225 Following  |  29 Posts  |  Joined: 18.10.2023

Latest posts by cjvanlissa.bsky.social on Bluesky

Yes, please! I'm fluent :)

22.09.2025 18:41 โ€” ๐Ÿ‘ 2    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0

Is anyone willing to share their slides on philosophy of science for undergraduate (statistics) students, especially as it pertains to the hypothetico-deductive framework? Here are my current notes on the topic: lnkd.in/eB-TZGDK

22.09.2025 17:48 โ€” ๐Ÿ‘ 4    ๐Ÿ” 1    ๐Ÿ’ฌ 2    ๐Ÿ“Œ 0

Any experiences with "Journal of Computational Social Science"? I reviewed a manuscript that should have been desk-rejected (grammatically incorrect, not embedded in relevant literature, ad-hoc and nonsensical analysis decisions); the decision was "minor revision" based on ONLY my review. How!?

22.09.2025 05:21 โ€” ๐Ÿ‘ 6    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms
are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! ๐Ÿคฉ Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 โ€” ๐Ÿ‘ 3071    ๐Ÿ” 1559    ๐Ÿ’ฌ 96    ๐Ÿ“Œ 241
Welcome to Hotel Elsevier: you can check-out any time you like โ€ฆ not ยป Eiko Fried A journey by Robin Kok and Eiko Fried trying to understand what private data Elsevier collects; what private data Elsevier sells; and what to do about it.

I was reminded today of the heroic work done by @eikofried.bsky.social and
@robinnkok.bsky.social to see what information Elsevier collects on academics and was re-horrified. ๐Ÿงต (1/5)

eiko-fried.com/welcome-to-h...

#academicsky

23.07.2025 14:57 โ€” ๐Ÿ‘ 29    ๐Ÿ” 17    ๐Ÿ’ฌ 4    ๐Ÿ“Œ 1

Day 2 Keynotes kicked off with Laura Nelson's inspiring presentation, โ€œWhy Qualitative Research Needs Computational Social Scienceโ€. What is the state of this maturing field called qualitative computational methods? What are the ongoing debates and futures? #ic2s2

23.07.2025 07:39 โ€” ๐Ÿ‘ 24    ๐Ÿ” 6    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 2

Loving @ic2s2.bsky.social, but I wonder if we're asking the right questions. Is "How can I outsource this task to a proprietary, black-box, resource-intensive, often un-validated LLM?", when fit-for-purpose classifiers abound, a computational research question? Or even scientific?

23.07.2025 15:54 โ€” ๐Ÿ‘ 9    ๐Ÿ” 3    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0

At my preconference workshop @ic2s2.bsky.social participants will conduct a *fully reproducible* and *FAIR theory based* simulation study.
The full interactive tutorial is in the theorytools R-package docs: cjvanlissa.github.io/theorytools/...
And slides: cjvanlissa.github.io/worcshop/ic2...

21.07.2025 06:00 โ€” ๐Ÿ‘ 9    ๐Ÿ” 1    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0

To be FAIR: Theory specification needs an update! Theories that others can reuse and update help navigate the "theory crisis". The theorytools R-package now has tutorials showing how to make theories FAIR and use them to select covariates, simulate data, etc

osf.io/preprints/ps...

24.06.2025 08:51 โ€” ๐Ÿ‘ 10    ๐Ÿ” 3    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 1
Keynote speaker
Caspar J. van Lissa
To be FAIR: Theory specification needs an update


๐Ÿšจ Program Highlight!

We're thrilled to welcome @cjvanlissa.bsky.social as our keynote speaker on June 5th! Heโ€™ll present his work on FAIR theoryโ€”a framework to make theories Findable, Accessible, Interoperable & Reusableโ€”and share his path as a meta-scientist. #openscience #FAIRtheory #metascience

21.05.2025 14:44 โ€” ๐Ÿ‘ 6    ๐Ÿ” 3    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0

The late Caryl Rusbult told me: in academia, celebrate all the small successes.

29.04.2025 18:14 โ€” ๐Ÿ‘ 1    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Job opening: PhD candidate in Machine Learning-Informed Formal Theory Construction (22752)

I'm hiring a PhD candidate in Machine Learning-Informed Formal Theory Construction. Please encourage talented students to apply, or reach out if you want to collaborate. Looking for machine learning, theory development, and programming skills, plus interdisciplinary interests! See tiu.nu/22752

28.04.2025 08:54 โ€” ๐Ÿ‘ 11    ๐Ÿ” 8    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 1
The AI bubble: Why scientific models remain essential | Tilburg University From ChatGPT to self-driving cars, AI promises a revolutionary future. But is this technology truly as groundbreaking as often claimed? A conversation with Caspar van Lissa, Associate Professor of Soc...

For @tilburg-university.bsky.social 's #sciencequest podcast, I discussed the question: Do we still need scientific models? Can we not just use AI to answer all of our questions? The interview is in Dutch, accompanying text in English: www.tilburguniversity.edu/magazine/ove...

11.04.2025 10:12 โ€” ๐Ÿ‘ 2    ๐Ÿ” 1    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Under Pressure, Psychology Accreditation Board Suspends Diversity Standards As the Trump administration threatens to strip accrediting bodies of their power, many are scrambling to purge diversity requirements.

Extremely disappointing & cowardly. The American Psychological Association is rescinding diversity requirements. Not due to any actual mandate from the federal government, but only because it may *someday* face pressure from the government www.nytimes.com/2025/03/27/h...

28.03.2025 01:16 โ€” ๐Ÿ‘ 207    ๐Ÿ” 103    ๐Ÿ’ฌ 20    ๐Ÿ“Œ 35

They're leggings, best kept secret for climbing hard!

27.03.2025 15:44 โ€” ๐Ÿ‘ 1    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0

Went bouldering with Noah van Dongen and @cjvanlissa.bsky.social. Noah got a Veni grant this year, Caspar a Vidi, and I received a Vici, so we had to make this picture ๐Ÿ˜Š We will also work together in our projects on improving theorising in psychology!

27.03.2025 15:14 โ€” ๐Ÿ‘ 30    ๐Ÿ” 3    ๐Ÿ’ฌ 3    ๐Ÿ“Œ 0
Cabinet wants no action plan to attract American scientists - ScienceGuide Create a special action plan to bring American scientists and experts to the Netherlands, say D66 and Volt. Prime Minister Dick Schoof, however, sees little merit in a specific focus on American knowledge migr...

All European countries are making serious work of attracting American scientists. Except the Dutch coalition of "bookkeepers and quarrelers," says Rob Jetten of D66. The cabinet has no such plans either, says Dick Schoof.

19.03.2025 10:27 โ€” ๐Ÿ‘ 3    ๐Ÿ” 4    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
Post image

Bridging the gap between #R package documentation and #openscience Open Educational Materials, I'm embedding formative quizzes in the package vignettes (based on @debruine.bsky.social 's webexercises)! Check the development version of github.com/cjvanlissa/t... to see how it's done.

18.03.2025 06:52 โ€” ๐Ÿ‘ 9    ๐Ÿ” 2    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0
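As a rough illustration of the quiz-embedding idea from the post above, here is a minimal sketch of what a formative question inside an R package vignette might look like. It assumes webexercises' `torf()`, `mcq()`, and `fitb()` helpers; the exact setup used in theorytools may differ, so treat this as a hypothetical fragment rather than the package's actual code.

```r
# Inside an Rmd vignette chunk, after enabling webexercises for the
# document (the package provides setup helpers for this; assumed here).
library(webexercises)

# True/false question rendered as an interactive widget:
torf(TRUE)

# Multiple-choice question; the element named "answer" is the correct one:
mcq(c(answer = "Findable", "Fast", "Free"))

# Fill-in-the-blank with the expected answer string:
fitb("FAIR")
```

When the vignette is knitted to HTML, these calls render as interactive widgets that give readers immediate feedback, turning package documentation into self-check teaching material.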

To be FAIR: Theory needs an update! FAIR theory can be shared, reused in analysis software, and updated based on new findings. Using existing #openscience infrastructure, it streamlines collaboration, reduces research waste, and accelerates cumulative knowledge development osf.io/preprints/ps...

10.03.2025 08:05 โ€” ๐Ÿ‘ 8    ๐Ÿ” 2    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 2
revise: Dynamic Revision Letters for 'Rmarkdown' Manuscripts Extracts tagged text from markdown manuscripts for inclusion in dynamically generated revision letters. Provides an R markdown template based on papaja::revision_letter_pdf() with comment cross-refere...

Revising papers became easy when I discovered James Conigrave's "revise" package, which inserts tagged snippets from a manuscript into a revision letter! No more back-and-forth, no more consistency checking. Now on CRAN, may it save you as much time as it did me! cran.r-project.org/web/packages...

06.03.2025 07:42 โ€” ๐Ÿ‘ 18    ๐Ÿ” 7    ๐Ÿ’ฌ 4    ๐Ÿ“Œ 1
Mini-symposium: The Researcher of Tomorrow - Studium Generale - Tilburg University
YouTube video by TilburgUniversity

Who is "the researcher of tomorrow"? Will we be replaced by AI? I really enjoyed discussing these issues in front of a live audience with Studium Generale Tilburg

The video starts at the panel discussion, but make sure to check out Nathan Wildman's inspiring lecture too

www.youtube.com/watch?v=0oRx...

04.03.2025 13:27 โ€” ๐Ÿ‘ 1    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
In order to understand cognition, we often recruit analogies as building blocks of theories to aid us in this quest. One such attempt, originating in folklore and alchemy, is the homunculus: a miniature human who resides in the skull and performs cognition. Perhaps surprisingly, this appears indistinguishable from the implicit proposal of many neurocognitive theories, including that of the 'cognitive map,' which proposes a representational substrate for episodic memories and navigational capacities. In such 'small cakes' cases, neurocognitive representations are assumed to be meaningful and about the world, though it is wholly unclear who is reading them, how they are interpreted, and how they come to mean what they do. We analyze the 'small cakes' problem in neurocognitive theories (including, but not limited to, the cognitive map) and find that such an approach a) causes infinite regress in the explanatory chain, requiring a human-in-the-loop to resolve, and b) results in a computationally inert account of representation, providing neither a function nor a mechanism. We caution against a 'small cakes' theoretical practice across computational cognitive modelling, neuroscience, and artificial intelligence, wherein the scientist inserts their (or other humans') cognition into models because otherwise the models neither perform as advertised, nor mean what they are purported to, without said 'cake insertion.' We argue that the solution is to tease apart explanandum and explanans for a given scientific investigation, with an eye towards avoiding van Rooij's (formal) or Ryle's (informal) infinite regresses.


Figure 1 in https://philsci-archive.pitt.edu/24834/

Box 1 in https://philsci-archive.pitt.edu/24834/

Box 2 in https://philsci-archive.pitt.edu/24834/

Tired but happy to say this is out w @andreaeyleen.bsky.social: Are Neurocognitive Representations 'Small Cakes'? philsci-archive.pitt.edu/24834/

We analyse cog neuro theories showing how vicious regress, e.g. the homunculus fallacy, is (sadly) alive and well โ€” and importantly how to avoid it. 1/

01.03.2025 14:16 โ€” ๐Ÿ‘ 238    ๐Ÿ” 74    ๐Ÿ’ฌ 24    ๐Ÿ“Œ 19
‘Science can do what AI cannot’ - ScienceGuide Now that AI models are proving unreliable, scientists must bring theirs into practice, says Caspar van Lissa.

Is the AI bubble bursting? Maybe for ChatGPT, but there is now widespread support for automating everyday tasks. Academics build small specialized models, trained on high quality data, informed by domain knowledge. It's time to incorporate these into everyday workflows instead of "asking ChatGPT".

28.02.2025 08:30 โ€” ๐Ÿ‘ 1    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Open Science and Open Source only with Diversity, Equity, Inclusion, and Accessibility Including all of humanity is and always will be at the heart of open science.

Open Science and Open Source only with Diversity, Equity, Inclusion and Accessibility.

Inclusion is essential to science, and science is only worthwhile if it lifts everyone up together.

ropensci.org/blog/2025/02... #OpenSource #OpenScience

05.02.2025 15:40 โ€” ๐Ÿ‘ 93    ๐Ÿ” 37    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 1
The Career Path of: Caspar van Lissa
YouTube video by TilburgUniversity

Did this interview (Dutch) for @tilburg-university.bsky.social series "The Career Path", about interdisciplinarity, open science, and collaboration. Hopefully, this will encourage young scholars to pursue their curiosity and contribute to research that matters. www.youtube.com/watch?v=9z5h...

05.02.2025 13:09 โ€” ๐Ÿ‘ 3    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
OpenAI's nightmare: Deepseek R1 on a Raspberry Pi
YouTube video by Jeff Geerling

DeepSeek powerfully demonstrates the value of open source and the importance of optimization. I hope it will also help democratize LLMs. Watch it run on a Raspberry Pi, accelerated by an AMD graphics card!
youtu.be/o1sN1lB76EA?...

29.01.2025 06:06 โ€” ๐Ÿ‘ 10    ๐Ÿ” 1    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
An AIC-type information criterion evaluating theory-based hypotheses for contingency tables

Not every study needs super-complicated statistics; sometimes all you need is a contingency table. Now, thanks to Yasin Altinisik, Rebecca Kuiper et al., you can test informative hypotheses for contingency tables with the gorica R-package! link.springer.com/epdf/10.3758...

23.01.2025 08:38 โ€” ๐Ÿ‘ 4    ๐Ÿ” 2    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Do not cut back on open science The proposed budget cuts in higher education will directly impact funding for open science. But if the goal is to work more efficiently, now is precisely the ti

Open science is efficient: reusable code, data, results, and educational materials save unnecessary work. When facing cutbacks, it's a bad idea to cut back on practices that increase efficiency!

universonline.nl/nieuws/2025/...

15.01.2025 11:23 โ€” ๐Ÿ‘ 29    ๐Ÿ” 13    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Do not cut back on open science The plans for budget cuts in higher education directly affect the budget for open science. But if the goal is to work more efficiently, this is precisely

๐Ÿ™๐Ÿฝ โœจ

@cjvanlissa.bsky.social and @michelenuijten.bsky.social !
#openscience
universonline.nl/nieuws/2025/...

14.01.2025 18:02 โ€” ๐Ÿ‘ 9    ๐Ÿ” 6    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 1
Use evidence-based methods for decision-making on complex policy issues - De Jonge Akademie News

It was a pleasure to contribute to this Participatory Value Evaluation on Research Funding by the Dutch Young Academy; one main conclusion - that dependence on grant funding threatens academic freedom - is more relevant than ever in the context of cutbacks. dejongeakademie.nl/en/news/2937...

13.12.2024 08:23 โ€” ๐Ÿ‘ 1    ๐Ÿ” 0    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
