
James E. Pustejovsky

@jepusto.bsky.social

Statistician interested in meta-analysis, data science, R, special education. Associate Professor at UW Madison. Also @jepusto@fediscience.org https://jepusto.com

1,478 Followers  |  789 Following  |  285 Posts  |  Joined: 18.08.2023

Latest posts by jepusto.bsky.social on Bluesky

Coming up in 1 hour (11 am CST)...

14.11.2025 16:08 — 👍 2    🔁 0    💬 0    📌 0

Anyone on here have experience with Springer's ResearchSquare preprint service? I have two different papers under review where the preprints are shared through this. There are some nice features to it, but I'm starting to discover some other quite questionable aspects of its design.

02.11.2025 18:21 — 👍 1    🔁 0    💬 1    📌 0

If this is happening with other pre-prints, it is also effectively unmasking the authors on papers that might be under review with a double-blind process.

(This particular journal does single-blind review so it was inconsequential for me, but seems like it could well be a concern for others.)

30.10.2025 20:08 — 👍 1    🔁 0    💬 0    📌 0

It seems that Web of Science picked up that a pre-print of the paper appeared on OSF. Last I checked, that's not the same thing as being published in a journal.

Web of Science is sending me hallucinatory emails. Presumably there's some AI involved?

30.10.2025 14:06 — 👍 0    🔁 0    💬 1    📌 0

Just got an email from Web of Science that an article I reviewed had been published in the journal. That's funny, I thought, I still owe the journal my review of the revised version. Did the editor accept the paper without my review? (And can I scratch it off my to-do list?)

Turns out, no...

30.10.2025 14:04 — 👍 2    🔁 1    💬 1    📌 0

Thank you!

02.10.2025 19:58 — 👍 0    🔁 0    💬 0    📌 0

Here's your utterly inconsequential statistics trivia question for the day. (I ask because I don't know but would like to find appropriate keywords.) Complete the analogies:

L1 loss : Laplace distribution
as
L2 loss : Gaussian distribution
as
L3 loss : ???
as
L4 loss : ???

02.10.2025 19:37 — 👍 0    🔁 0    💬 1    📌 0
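One way to complete the pattern, sketched below: minimizing an Lp loss corresponds to maximum likelihood estimation of the location parameter of a generalized normal (also called exponential power or Subbotin) distribution with shape parameter p, so p = 3 and p = 4 fill the remaining slots. "Generalized normal" and "exponential power distribution" are likely the keywords being asked for.

```latex
% Sketch of the general pattern: the L_p-loss location estimate is the MLE
% for the location of a generalized normal (exponential power / Subbotin)
% density with shape parameter p; p = 1 recovers the Laplace case and
% p = 2 the Gaussian case.
\[
  \hat\mu_p \;=\; \arg\min_{\mu} \sum_{i=1}^{n} | y_i - \mu |^{p}
  \qquad\Longleftrightarrow\qquad
  f(y;\mu,\alpha,p) \;=\; \frac{p}{2\alpha\,\Gamma(1/p)}
  \exp\!\left\{ -\left( \frac{|y-\mu|}{\alpha} \right)^{p} \right\}.
\]
```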

As co-chair of AERA's @srma-sig.bsky.social, I am pleased to announce our Fall 2025 webinar series focused on meta-analysis and systematic reviews!

On Friday (Oct 3), our first webinar will be given by James Pustejovsky @jepusto.bsky.social! 🎉

Register here: us06web.zoom.us/meeting/regi...

01.10.2025 16:49 — 👍 15    🔁 7    💬 0    📌 0

Two open positions on COS's research team!

Project Coordinator: Undergraduate degree in research, or equivalent experience ats.rippling.com/cos-careers/...

Program Manager: 10 yrs of project management experience or 2+ yrs of program management experience ats.rippling.com/cos-careers/...

Please share w/colleagues

01.10.2025 13:22 — 👍 28    🔁 21    💬 2    📌 1
ALT: a woman in a catsuit says thank you

Without maintaining the constant shift assumption, I understand MW’s U as a test of stochastic dominance, which is something that some statisticians get into on the weekends with other consenting statisticians.

25.09.2025 00:43 — 👍 3    🔁 0    💬 1    📌 0

As far as I understand, inference for the median requires assuming a constant shift in the distribution (e.g., a unit-constant treatment effect), in which case you can also interpret MW’s U as a test of mean differences, or differences in 3rd quartiles, or 42nd percentiles.

25.09.2025 00:36 — 👍 0    🔁 0    💬 1    📌 0
ALT: patrick star from spongebob squarepants is standing in front of a building and says fite me

Mann-Whitney U is not a test of medians.

24.09.2025 23:02 — 👍 2    🔁 0    💬 1    📌 0
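A minimal simulation sketch of the point made in this thread, assuming numpy and scipy are available; the two distributions (standard normal vs. a recentered exponential) are illustrative choices, not taken from the posts. Both samples have population median zero, yet the Mann-Whitney U test rejects, because the test tracks P(X > Y) rather than a contrast of medians.

```python
# Illustration: Mann-Whitney U is not a test of medians.
# Both samples below have population median 0, but they differ in shape,
# so P(X > Y) != 0.5 and the test rejects with large samples.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
n = 2000

x = rng.normal(loc=0.0, scale=1.0, size=n)             # symmetric, median 0
y = rng.exponential(scale=1.0, size=n) - np.log(2.0)   # right-skewed, median 0

print("sample medians:", np.median(x), np.median(y))

# U / (n_x * n_y) estimates P(X > Y) (plus half the probability of ties),
# which is the stochastic-dominance quantity the test is really about.
u, p = mannwhitneyu(x, y, alternative="two-sided")
print(f"estimated P(X > Y) = {u / (n * n):.3f}, p-value = {p:.2g}")
```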

Psychological Methods invites early-career psychologists to apply to be a 2026 Editorial Fellow. This year kicked off our EF program -- it was enriching for the EFs and rewarding for everyone involved. Let's do it again!

For details:
www.apa.org/pubs/journal...

@apajournals.bsky.social

14.09.2025 22:42 — 👍 17    🔁 14    💬 0    📌 1

Everyone should really listen to this episode. One of the things I appreciate most about The War on Cars is that so much of what they discuss is applicable not just to urbanism/livable streets issues, but to broader progressive causes. A good reminder of how important it is to dismantle car culture.

10.09.2025 13:05 — 👍 122    🔁 18    💬 3    📌 0

And @jrzhang.bsky.social and I have been thinking about PPCs for evaluating meta-analytic models, especially around choices of effect metric.

21.08.2025 02:14 — 👍 0    🔁 0    💬 1    📌 0
A gentle introduction to Bayesian posterior predictive checking for single-case researchers – James E. Pustejovsky Education Statistics and Meta-Analysis

For sure this is one direction I have in mind. @paulinagrekov.bsky.social and I have been writing about PPCs as a bridge for helping applied researchers (w/o much statistical training, from Special Education) interpret and evaluate Bayesian GLMMs. jepusto.com/publications...

21.08.2025 02:13 — 👍 1    🔁 0    💬 1    📌 0
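For readers unfamiliar with the workflow being referenced, here is a stripped-down sketch of a posterior predictive check, assuming numpy is available; the toy normal model and the discrepancy statistic are illustrative choices, not the GLMMs or checks from the linked paper.

```python
# Generic posterior predictive check (PPC): simulate replicated data sets from
# the posterior, then compare a discrepancy statistic T(y_rep) to T(y_observed).
import numpy as np

rng = np.random.default_rng(1)

# Toy observed data.
y = rng.normal(loc=0.3, scale=1.0, size=40)

# Stand-in posterior draws for (mu, sigma); in practice these come from a
# fitted Bayesian model.
n_draws = 4000
mu_draws = rng.normal(y.mean(), y.std() / np.sqrt(len(y)), size=n_draws)
sigma_draws = np.full(n_draws, y.std())

# Discrepancy statistic: proportion of positive observations.
T_obs = np.mean(y > 0)
T_rep = np.empty(n_draws)
for s in range(n_draws):
    y_rep = rng.normal(mu_draws[s], sigma_draws[s], size=len(y))
    T_rep[s] = np.mean(y_rep > 0)

# A posterior predictive p-value near 0 or 1 signals model-data conflict.
ppp = np.mean(T_rep >= T_obs)
print(f"T(y) = {T_obs:.2f}, posterior predictive p-value = {ppp:.2f}")
```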

For sure. Psychometricians will talk about predictive validity (or predictive validation evidence), but that doesn't quite capture the vibe of what I mean.

21.08.2025 01:08 — 👍 1    🔁 0    💬 0    📌 0

I’m not familiar. Will look it up!

21.08.2025 01:07 — 👍 0    🔁 0    💬 0    📌 0

(As opposed to using a model merely as a conventional summary of a dataset, or worse, a traditional incantation one recites over one’s data before offering it up for publication.)

21.08.2025 00:42 — 👍 1    🔁 0    💬 1    📌 0

Stats Q: Is there a word or succinct phrase for the discipline or humility that comes from having to use a statistical model to actually make predictions about future data?

21.08.2025 00:41 — 👍 3    🔁 0    💬 4    📌 0
MIT Backs Away From Paper Claiming Scientists Make More Discoveries with AI. The retracted paper had impressed a Nobel Prize winner in economics.

This should get WIDE circulation:
MIT stating that it “has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper.”

gizmodo.com/mit-backs-aw...

16.08.2025 16:42 — 👍 3548    🔁 1691    💬 45    📌 71

I've been playing with it too, with pretty good results, but I'm still struggling with LaTeX table layout and placement. Have you had success using it with flextable or any other tools for programmatically generating TeX tables?

12.08.2025 22:00 — 👍 0    🔁 0    💬 0    📌 0
Adjusting for Publication Bias in Meta-Analysis - Blakeley B. McShane, Ulf Böckenholt, Karsten T. Hansen, 2016. We review and evaluate selection methods, a prominent class of techniques first proposed by Hedges (1984) that assess and adjust for publication bias in meta-an...

It’s not a surprise. I’d say methodology folks were well aware that p-curve had problems since McShane, Bockenholt, & Hansen 2016 doi.org/10.1177/1745.... The contribution of the new paper is to build up theory for *why* and what specific features of the method create problems.

09.08.2025 13:06 — 👍 4    🔁 0    💬 1    📌 0

T&F is a zombie method. I still see it used regularly in meta-analysis, sometimes without any other publication bias analysis methods. (Though perhaps less so in mainstream psych, and not really ever for forensic meta-science.)

09.08.2025 12:46 — 👍 2    🔁 0    💬 0    📌 0

@richarddmorey.bsky.social @clintin.bsky.social would you consider doing trim-and-fill next pleeezzzz?

08.08.2025 22:44 — 👍 3    🔁 0    💬 1    📌 0
Given what is needed to improve the P-curve tests, we do not recommend their use in their current form. Their statistical properties are problematic and it is not clear what substantive conclusions they afford. Given the stated purpose of the P-curve—evaluating the trustworthiness of scientific literatures—the stakes are too high to use tests with such poor, or poorly-understood, properties.

Also this...

08.08.2025 22:41 — 👍 4    🔁 0    💬 1    📌 0

This is an insightful and timely analysis. This point in the conclusions resonated

08.08.2025 22:40 — 👍 12    🔁 1    💬 2    📌 0
Cover page for the manuscript: Morey, R. D., & Davis-Stober, C. P. (2025). On the poor statistical properties of the P-curve meta-analytic procedure. Journal of the American Statistical Association, 1–19. https://doi.org/10.1080/01621459.2025.2544397

Abstract for the paper: The P-curve (Simonsohn, Nelson, & Simmons, 2014; Simonsohn, Simmons, & Nelson, 2015) is a widely-used suite of meta-analytic tests advertised for detecting problems in sets of studies. They are based on nonparametric combinations of p values (e.g., Marden, 1985) across significant (p < .05) studies and are variously claimed to detect “evidential value”, “lack of evidential value”, and “left skew” in p values. We show that these tests do not have the properties ascribed to them. Moreover, they fail basic desiderata for tests, including admissibility and monotonicity. In light of these serious problems, we recommend against the use of the P-curve tests.

Paper drop, for anyone interested in #metascience, #statistics, or #metaanalysis! @clintin.bsky.social and I show in a new paper in JASA that the P-curve, a popular forensic meta-analysis method, has deeply undesirable statistical properties. www.tandfonline.com/doi/full/10.... 1/?

08.08.2025 18:55 — 👍 286    🔁 122    💬 17    📌 27
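To give a concrete sense of the kind of procedure the abstract describes (nonparametric combination of p values from significant studies), here is a generic Fisher-style sketch, assuming numpy and scipy are available; it illustrates only the general form of such tests and is not the authors' P-curve implementation nor the tests evaluated in the paper.

```python
# Generic "combine the significant p values" sketch. Under a null in which
# each p value is uniform on (0, 1), a significant p divided by alpha is again
# uniform, so Fisher's combination -2 * sum(log(p / alpha)) is chi-square with
# 2k degrees of freedom. This only shows the shape of such tests.
import numpy as np
from scipy.stats import chi2

def combine_significant(p_values, alpha=0.05):
    p = np.asarray(p_values, dtype=float)
    sig = p[p < alpha]                   # keep only the significant studies
    if sig.size == 0:
        raise ValueError("No significant p values to combine.")
    pp = sig / alpha                     # rescaled p values, uniform under H0
    stat = -2.0 * np.sum(np.log(pp))     # Fisher's combination statistic
    return stat, chi2.sf(stat, df=2 * sig.size)

# A handful of just-significant p values yields only weak combined evidence.
print(combine_significant([0.03, 0.041, 0.049, 0.02, 0.011]))
```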
