Clintin Davis-Stober

@clintin.bsky.social

Professor, quantitative psychology, decision theory, data science, mathematics, statistics, open science, modeling, weight lifting, photography, enjoyer of poetry www.davis-stober.com

2,507 Followers 2,071 Following 51 Posts Joined Sep 2023
1 month ago
  The decline effect (Protzko & Schooler, 2017) is the observed phenomenon in which effect sizes apparently diminish from the first paper demonstrating an effect to later replications. This has been taken as a symptom of an unhealthy scientific ecosystem, possibly caused by the "winner's curse" (selection on significance plus regression to the mean), publication bias, or opportunistic analyses. I show that decline effects can arise as an artifact of a much simpler source: the original article determining the sign of the effect in a meta-analysis. Moreover, such artifactual decline effects will correlate with some of the same experimental properties that one would expect from biases due to poor scientific behavior, such as the sample size of the original study.

New draft: "Decline effects, statistical artifacts, and a meta-analytic paradox". In this manuscript I show how a common practice in meta-analysis (e.g., the 2015 Open Science Collaboration replication project) creates artifactual signatures of poor scientific behavior. PDF: raw.githubusercontent.com/richarddmore... 1/x

77 29 7 4
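The sign-convention artifact described above can be seen in a minimal simulation. This is my own sketch, not code from the draft: every true effect is exactly zero and both estimates are unbiased, yet coding each original/replication pair so the original effect is positive produces an apparent decline.

```python
import random

random.seed(0)
n = 100_000
sigma = 0.3                              # sampling SD of each study's estimate
sum_orig, sum_rep = 0.0, 0.0

for _ in range(n):
    # True effect is zero for every pair, so any systematic
    # "decline" from original to replication must be artifactual.
    d_orig = random.gauss(0.0, sigma)    # original study estimate
    d_rep = random.gauss(0.0, sigma)     # independent replication estimate

    # Meta-analytic sign convention: flip each pair so the ORIGINAL is positive.
    s = 1.0 if d_orig >= 0 else -1.0
    sum_orig += s * d_orig               # = |d_orig|, necessarily positive
    sum_rep += s * d_rep                 # sign borrowed from the original

mean_orig = sum_orig / n                 # ≈ sigma * sqrt(2/pi) ≈ 0.24
mean_rep = sum_rep / n                   # ≈ 0: the "effect" appears to decline
print(mean_orig, mean_rep)
```

The "decline" here is pure selection: the sign flip is correlated with the original study's sampling error but, by independence, not with the replication's.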
3 months ago
a man with a mustache is standing in front of a sign that says farm
3 0 0 0
4 months ago

My front yard :)

1 0 2 0
5 months ago

Simonsohn has now posted a blog response to our recent paper about the poor statistical properties of the P curve. @clintin.bsky.social and I are finishing up a less-technical paper that will serve as a response. But I wanted to address a meta-issue *around* this that may clarify some things. 1/x

77 31 2 8
6 months ago

I love this post about science and metascience. A lot of quotables but I’ll lead with this:

“Those seeking a scientific method – one that can be written down and followed mechanically […] – betray a kind of childish impatience with a process they clearly don’t understand.”

37 8 2 2
6 months ago
AI slop and the destruction of knowledge This week I was looking for info on what cognitive scientists mean when they speak of ‘domain-general’ cognition. I was curious, because the nuances are relevant for something I am researching at t…

AI slop and the destruction of knowledge irisvanrooijcogsci.com/2025/08/12/a...

524 266 22 50
7 months ago
RFK Jr. in interview with Scripps News: ‘Trusting the experts is not science’ HHS Secretary RFK Jr. sat down with Scripps News for a wide-ranging interview, discussing mRNA vaccine funding policy changes and a recent shooting at the Centers for Disease Control and Prevention.

1. "Trusting the experts is not a feature of either a science or democracy," Kennedy said.

It's literally a vital feature of both science and of representative democracy.

I've written a fair bit about trust in expertise as a vital mechanism in the collective epistemology of science.

9,976 2,847 534 476
7 months ago

We proved they could not scale link.springer.com/article/10.1...

75 23 4 3
7 months ago

Definitely something worth digging into. I’ll give it some thought

1 0 0 0
7 months ago
The case for formal methodology in scientific reform | Royal Society Open Science Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, most methodological ref...

here was our call for methodological standards in metaresearch four years ago. instead of getting fixated on a particular inference we'd like to make, we need to maintain scientific standards, do the hard work, respect the evidence. we can't keep jumping at self-serving solutions without question.

21 4 1 0
7 months ago

some of the discussion around the p-curve paper is depressing. i still see many missing the point clearly stated in the conclusion, and instead of demanding strong standards or questioning whether they're even asking good questions, they've immediately started asking for replacement methods.

30 7 3 0
7 months ago

I would add that while p-curve is composed of tests that do indeed correspond to error rates, the actual hypotheses being tested have little to do with "evidential value"

1 0 1 0
7 months ago

The test statistics being used are just simple sums; no third-moment (skew) information enters the test.

3 0 1 0
7 months ago

Would this be grounds for dismissing the remaining 54 studies as lacking value? It makes no sense. Part of the problem is that the original p-curve papers aren't clear on what exactly is being tested. The authors claim they are tests of skew, but this is incorrect, as the

3 0 1 0
7 months ago

Happy to clarify. P-curve is used to test whether a set of studies has (or lacks) "evidential value" (which is never really defined). But the actual hypotheses being tested by p-curve don't permit this, as Richard and I show. Suppose one study WAS underpowered in a set of 55 studies -

1 0 1 0
7 months ago

All p-curve tests are just simple sums of transformed p-values. There is a fundamental disconnect between the null hypotheses being tested by p-curve and the claims being made.

15 2 1 1
7 months ago

What this means is that a significant result for either test only allows one to claim that "at least one" study (out of the set) doesn't have the property being considered. Why does this happen? Because p-curve completely ignores the configuration of the p-values being considered.

6 0 1 0
7 months ago

The test for evidential value simply examines whether the effect size is zero for all studies. The test for lack of evidential value tests whether all studies are "underpowered", i.e., have small non-centrality parameters.

5 0 1 0
7 months ago

The developers of p-curve claim that p-curve can be used to make claims about the evidential value (or lack thereof) of whole sets of studies. We show that the actual hypotheses being tested do not allow for such strong conclusions.

4 0 1 0
7 months ago

The basic idea of p-curve rests on the claim that the skew of a set of p-values is informative about whether QRPs are occurring. As we show, the p-curve tests have nothing to do with skew: it is trivial to create left-skewed p-values that p-curve would confidently label right-skewed.

11 2 1 0
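A rough sketch of the point, with made-up numbers of my own choosing, using the Stouffer-style sum that the p-curve "full" test is based on: eight of ten p-values pile up just below .05 (a shape p-curve's own logic would call left-skewed), yet two extreme outliers drive the plain sum to a "significant right skew" verdict.

```python
from statistics import NormalDist

def pcurve_full_test_z(pvals, alpha=0.05):
    """Stouffer-style sum of transformed significant p-values.

    Each p is rescaled to a "pp-value" p/alpha, which is uniform on (0, 1)
    under the null of zero effects; a strongly negative z is read as
    "right skew", i.e., "the set contains evidential value".
    """
    inv = NormalDist().inv_cdf
    pp = [p / alpha for p in pvals]
    return sum(inv(x) for x in pp) / len(pp) ** 0.5

# Eight p-values piled up just under .05 (a left-skewed shape) plus two
# extreme outliers; hypothetical values chosen purely for illustration.
pvals = [0.045] * 8 + [5e-32] * 2
z = pcurve_full_test_z(pvals)
print(round(z, 2))  # ≈ -4.0, well past the one-sided cutoff of -1.645
```

Because the statistic is just a sum, the two outlying studies dominate the eight that point the other way; the shape (third moment) of the p-value distribution never enters.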
7 months ago

New paper with @richarddmorey.bsky.social now out in JASA, where we critically examine p-curve. Below is Richard’s excellent summary of the many poor statistical properties of p-curve (with link to paper). I wanted to add some conceptual issues that we also tackle in the paper.

52 20 2 2
8 months ago

New paper by my PhD student @semihaktepe.bsky.social now published 🚀

"Revisiting the effect of discrepant perceptual fluency on truth judgments" tinyurl.com/2eepue5y

--> Two experiments & a meta-analysis indicate that high visual contrast does not lead to higher truth judgments.

14 3 0 1
9 months ago

I'm so sorry this happened to you. There is no excuse for such bs.

4 0 1 0
9 months ago
Why Trump’s push for ‘gold-standard science’ has researchers alarmed Many scientists fear the Trump administration’s new standard means putting political appointees in charge, which could undercut independent research.

wherein I'm quoted with "Science is essentially effective to the extent that it can remain independent, decentralized, and democratic." (gift link, email required)

87 22 3 2
9 months ago
Postdoc

🚀Postdoc position @unimarburg.bsky.social in the project:

"Bridging the Gap Between Verbal Psychological Theories & Formal Statistical Modeling with Large Language Models"
(funded by @volkswagenstiftung.de)

📅Start: 01.10.2025 | ⏳4 years
🔗 Apply now: uni-marburg.de/jhbCen
🔄 Thanks for sharing!

28 23 1 1
9 months ago
Table 1
Typology of traps, how they can be avoided, and what goes wrong if not avoided. Note that all traps in a sense constitute category errors (Ryle & Tanney, 2009) and the success-to-truth inference (Guest & Martin, 2023) is an important driver in most, if not all, of the traps.

NEW paper! 💭🖥️

“Combining Psychology with Artificial Intelligence: What could possibly go wrong?”

— Brief review paper by @olivia.science & myself, highlighting traps to avoid when combining Psych with AI, and why this is so important. Check out our proposed way forward! 🌟💡

osf.io/preprints/ps...

349 105 15 25
9 months ago

Great pic

2 0 0 0
9 months ago
Screenshot of the APS program for "Current Issues in Meta-Science", Saturday, May 24, 3:00pm Titles and authors in the session:

* Statistical Power in the Light of Methodological Reform (Jolynn Pek)

* The Poor Statistical Properties of the P-Curve Procedures (Richard Morey)

* Consistent Methods Protect Against False Findings Produced By p Hacking (Duane Wegener)

* Accumulating Evidence across Studies (Blakeley McShane)

If you'll be at APS2025 in DC next week, I'll be talking about the terrible statistical properties of the p-curve procedure in the "Current Issues in Meta-Science" session Saturday, May 24th at 3pm. This will likely be my last US conference in a very long time. A brief summary follows. 1/

47 18 2 2
10 months ago

I am looking forward to expanding the scope of my professorship by combining cognitive and statistical modeling with LLMs😊

There will be two job openings for postdoc positions soon - one starting in September 2025 and another one a year later.

26 7 0 0
10 months ago

Another reminder that this online event will take place this Friday. I might be alone representing the West Coast (it's at 7am PT) but hope colleagues from other time zones consider stopping by and joining the discussion. My talk will be not-so-subtly titled "Replication Is (Not)".

38 14 3 1