@psforscher.bsky.social
Director of the CREME developmental meta-research team at Busara, a non-profit that does behavioral science in service of poverty alleviation. https://patrickforscher.com/
a table about lemurs
a table about students and schools
a table about wines
{tinytable} 0.14.0 for #RStats makes it super easy to draw tables in html, tex, docx, typ, md & png.
There are only a few functions to learn, but don't be fooled! Small tables can still be powerful.
Check out the new gallery page for fun case studies.
vincentarelbundock.github.io/tinytable/vi...
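For readers who want to try it, the workflow announced above can be sketched in a few lines of R. This is a minimal sketch: `tt()`, `style_tt()`, and `save_tt()` are the package's core functions, but the exact arguments shown here are assumptions worth checking against the package documentation.

```r
library(tinytable)

# Build a table from any data frame.
x <- mtcars[1:4, 1:4]
tab <- tt(x, caption = "A tiny table")

# Style it: bold the first row (i selects rows).
tab <- style_tt(tab, i = 1, bold = TRUE)

# Export; the output format is inferred from the file extension.
save_tt(tab, "table.html", overwrite = TRUE)
```

The same `save_tt()` call with a `.tex`, `.docx`, `.md`, or `.png` filename produces the other formats mentioned in the post.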
Intervening on a central node in a network likely does little given that its connected neighbors will "flip it back" immediately. Happy to see this position supported now.
"Change is most likely [..] if it spreads first among relatively poorly connected nodes."
www.nature.com/articles/s41...
The deadline to provide inputs into this piece of work is swiftly approaching -- 8 October.
Please consider filling out the survey we're using to structure people's input!
www.who.int/news-room/ar...
This study of intelligence in the UK Biobank is typical of a lot of current social science genomics. Impressive technically, and not over-interpreted. But still, a main result gets lost in the sauce. Within families, the direct-effect polygenic score explains no more than 1-3% of the variance. /1
22.09.2025 12:27
Historical and experimental evidence that inherent properties are overweighted in early scientific explanation
This paper has been ~11 years in the making - and probably my favorite project of all time. Thrilled to see it in @pnas.org! I'm so lucky that Zach decided to do a second PhD and join my lab @psychillinois.bsky.social back in 2014 - a fabulous scientist & human being! www.pnas.org/doi/10.1073/...
22.09.2025 14:27
The net result is the worst of both worlds. Universities invoke the rhetoric of business discipline, but they lack the governance structures that give that discipline bite. They operate without the checks that private ownership provides, yet subject staff and students to the cost-cutting and efficiency drives that profit-maximising firms pursue. The result is waste at the top and insecurity at the bottom.
this is a very sharp piece on why it makes no sense to run universities as if they are businesses. They're not businesses.
www.afr.com/work-and-car...
Is analytical flexibility really the biggest problem while you're confusing the ephemeral statistical effects of psychological processes with the ephemeral statistical effects of language prediction trained on massive data sets? Hah.
18.09.2025 12:28
A graph of what people use ChatGPT for
Nothing. I use it for nothing at all because AI is good at zero of the tasks I do regularly
Honestly I don't even know what its web address is, is it like a 2000s style ChatGPT.com or something funkier like chat.g.pt
Waiting for my preprint to be accepted, so in the meantime a teaser: here's what happens when you try to estimate a between-scale correlation based on LLM-generated datasets of participants, while varying 4 different analytic decisions (blue is the true correlation from human data):
17.09.2025 12:53
McSweeney (2002) is required reading for anyone who studies or references Hofstede's individualism-collectivism -- or Markus & Kitayama's independence-interdependence, for that matter. Just shoddy work up and down. doi.org/10.1177/0018...
21.05.2025 17:53
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users – in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
In a new paper, my colleagues and I set out to demonstrate how method biases can create spurious findings in relationship science, by using a seemingly meaningless scale (e.g., "My relationship has very good Saturn") to predict relationship outcomes. journals.sagepub.com/doi/10.1177/...
10.09.2025 18:18
ICYMI: Google removes net zero goal from website.
08.09.2025 08:53
An arrow with a LaTeX equation
Trigonometric functions and a unit circle
A bivariate change model with structured residuals
A hierarchical model of cognitive abilities
Now on CRAN, ggdiagram is a #ggplot2 extension that draws diagrams programmatically in #Rstats. Allows precise control over how objects, labels, and equations are placed in relation to each other.
wjschne.github.io/ggdiagram/ar...
How should the behavioral sciences be mainstreamed into public health? How would we know if this goal is achieved?
With the WHO Behavioural Insights Unit, my team has been working on these questions.
Curious what we came up with? Check out the public consultation below
www.who.int/news-room/ar...
Not for nothing, whenever someone conflates search algorithms, LLMs, and whatever the fuck AI is, I send them this article.
02.09.2025 06:47
This is what I've been saying since 2023 (image below)
"prediction: use of "AI" [...] will come to be broadly associated with cheating, deception, lack of respect for other people, and low quality work that cannot be trusted in important settings"
Utterly horrifying
28.08.2025 19:01
AI-adjacent people working in cognitive science should pay attention to this horrifying lawsuit about ChatGPT helping (encouraging!) a child to commit suicide
26.08.2025 20:48 โ ๐ 26 ๐ 10 ๐ฌ 1 ๐ 0A comic on the bridge from Star Trek the Next Generation. Picard: COMMANDER DATA, PLEASE IDENTIFY THAT ROMULAN VESSEL. Data: THAT'S A GREAT IDEA CAPTAIN! IDENTIFYING A VESSEL IS A GREAT PLACE TO START - IN ANY TACTICAL OR STRATEGIC OUTER SPACE SITUATION. THIS VESSEL APPEARS TO BE A 23rd CENTURY KLINGON BIRD OF PREY! ๐๐ฆ โจ Picard: ARE YOU SURE? LIKE I SAID WE'RE... PRETTY SURE IT'S ROMULAN. Data: ... Data: OF COURSE! SO SORRY ABOUT THAT, YOU'RE RIGHT! ON CLOSER EXAMINATION IT'S A ROMULAN VESSEL! CAN I RECOMMEND SOME SOONGโข BRAND PRODUCTS THAT CAN HELP YOU WITH THAT? Picard cradles his face in his hand in a gesture of frustration. Data: DID I MENTION THE PLIGHT OF OPRESSED WHITES IN SOUTH AFRICA?
realistic Star Trek
26.08.2025 17:23
"The study authors asked GPT 4o-mini to evaluate the quality of 217 papers. The tool didn't mention in any of the reports that the papers being analyzed had been retracted or had validity issues.
In 190 cases, GPT described the papers as world leading, internationally excellent, or close to that"
Just sharing a public thank you to the SIPS + PsyArXiv volunteers for maintaining such a successful resource
I understand the frustration of those with preprints that were temporarily removed but I think this is a good chance to reflect on the volunteer work of others that we often take for granted
AI stocks drop sharply after an MIT report says 95% of AI investments generate zero (0) return. Even AI prophet Sam Altman acknowledges there's a bubble:
20.08.2025 13:56
Large Language Models Do Not Simulate Human Psychology
arxiv.org/pdf/2508.06950
If and when our in-house ethics committee at Busara gets off the ground it will have this policy
18.08.2025 23:12
Racial diversity in healthcare providers reduces disparities in healthcare outcomes.
14.08.2025 14:00
Check out our new pre-print in which we argue something that might be obvious but doesn't seem to be obvious to everyone: Large Language Models (such as ChatGPT) do not simulate human psychology.
13.08.2025 12:37