@drewhalbailey.bsky.social
education, developmental psychology, research methods at UC Irvine
Random Intercepts and Slopes in Longitudinal Models: When Are They "Good" and "Bad" Controls?
or
Illusory Traits 2: Revenge of the Slopes
Led by Siling Guo, with Nicolas Hübner, Steffen Zitzmann, Martin Hecht, and Kou Murayama.
Comments welcome!
osf.io/preprints/ps...
New blog post! Let's say you've measured two variables repeatedly and want to investigate how one affects the other over time. Here are some recommendations for how to do that well.
www.the100.ci/2025/06/25/r...
Although field-specific authorship norms probably mostly just reflect the values of people in the field, I also think they can affect those values too. This seems like a good example! (I have some guesses about unintended consequences of tiny authorship teams too, btw.)
23.06.2025 14:14

6) LCGAs never replicate across datasets or even within the same dataset. They usually just produce the salsa pattern (high/medium/low) or the cat's cradle (high/low/increasing/decreasing).
This has misled entire fields (see all of George Bonanno's work on resilience, for example).
psycnet.apa.org/fulltext/201...
Treemap showing measurement fragmentation across subfields in psychology. Hill–Shannon diversity D = 1626.05
How often measures in the APA PsycTESTS database are (re)used according to the APA PsycInfo database: rarely, the majority are never reused.
Our fragmentation index (Hill-Shannon diversity) over time across subdisciplines shows fragmentation rising.
Our paper "A fragmented field" has just been accepted at AMPPS. We find it's not just you, psychology is really getting more confusing (construct and measure fragmentation is rising).
We updated the preprint with the (substantial) revision, please check it out.
osf.io/preprints/ps...
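For readers who haven't met the index: Hill–Shannon diversity is the exponential of Shannon entropy, i.e. the "effective number" of equally common categories. A minimal sketch of the computation (the measure names and use counts here are invented, not from the paper):

```python
import math
from collections import Counter

def hill_shannon(counts):
    """Hill-Shannon diversity (Hill number of order q=1):
    exp of the Shannon entropy of the relative frequencies.
    For n equally common categories it equals n."""
    total = sum(counts)
    entropy = -sum((c / total) * math.log(c / total) for c in counts if c > 0)
    return math.exp(entropy)

# Toy example: how often each of five hypothetical measures is (re)used.
uses = Counter({"measure_A": 50, "measure_B": 30, "measure_C": 10,
                "measure_D": 5, "measure_E": 5})
print(round(hill_shannon(uses.values()), 2))
```

Rising values of this index mean measure use is spread across ever more, ever less dominant instruments.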
But I really hope we get 10 more years of strong studies now on the effects of large increases in access on outcomes for "always takers" and especially for elite students. There are lots of good reasons to expect these effects should differ. (2/2)
11.06.2025 20:35

I have seen lots of higher-ed talks and papers in the last 10 years convincingly demonstrating that just making some cutoff (getting into a more selective college or major, not taking remedial classes) helps the marginal student. Great to see an emerging consensus. (1/2)
11.06.2025 20:35

For every cause, x, there is some group of people (often disproportionately people who study x) who think the effects of x are way bigger than they are. Therefore, I think we are doomed to read (or worse, make) "Yeah, but the effect of x is small" takes forever.
11.06.2025 20:24

Mix of Figures 2 and 4 from the paper
I investigated how often papers' significant (p < .05) results rest on fragile (.01 ≤ p < .05) p-values. An excess of such p-values suggests low odds of replicability.
From 2004-2024, the rates of fragile p-values have gone down precipitously across every psychology discipline (!)
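As a concrete sketch of the metric: among significant results (p < .05), what fraction fall in the fragile band .01 ≤ p < .05? The p-values below are invented for illustration:

```python
def fragile_rate(p_values):
    """Fraction of significant (p < .05) results that are
    fragile (.01 <= p < .05)."""
    significant = [p for p in p_values if p < 0.05]
    if not significant:
        return float("nan")
    fragile = [p for p in significant if p >= 0.01]
    return len(fragile) / len(significant)

pvals = [0.001, 0.004, 0.012, 0.03, 0.049, 0.2, 0.6]
print(fragile_rate(pvals))  # 3 of the 5 significant results are fragile -> 0.6
```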
Hope to see at least one of these in each APS policy brief from now on!
15.05.2025 19:57

IN MEMORY OF LYNN FUCHS
The field of special education lost a visionary and beloved leader with the passing of Lynn Fuchs on May 7, 2025. Her absence leaves a profound void, not only in our scholarly community, but in the hearts of all who had the privilege of knowing her.
Really like it!
12.05.2025 03:43

Thanks to everybody who chimed in!
I arrived at the conclusion that (1) there's a lot of interesting stuff about interactions and (2) the figure I was looking for does not exist.
So, I made it myself! Here's a simple illustration of how to control for confounding in interactions:
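The figure itself isn't in this feed, but one standard version of the point can be sketched in a regression (all variable names and data below are invented): if a confounder c drives x, then when estimating the x-by-z interaction it generally isn't enough to control for c alone; you also need c's interaction with the moderator z, or the c*z term can masquerade as x*z.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented data: c confounds x, and the true model has a c*z interaction
# but NO x*z interaction.
rng = np.random.default_rng(0)
n = 5000
c = rng.normal(size=n)                      # confounder
z = rng.normal(size=n)                      # moderator
x = 0.5 * c + rng.normal(size=n)            # exposure, driven partly by c
y = 0.3 * x + 0.4 * c + 0.5 * c * z + rng.normal(size=n)

df = pd.DataFrame(dict(y=y, x=x, z=z, c=c))
naive = smf.ols("y ~ x * z", data=df).fit()             # omits the confounder
adjusted = smf.ols("y ~ x * z + c * z", data=df).fit()  # c AND c:z controlled
print(naive.params["x:z"], adjusted.params["x:z"])      # first is biased; second near 0
```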
(Not saying the public is necessarily right; you can get programs that pass a cost-benefit test with much smaller effects on test scores than laypeople want. But it is a problem for policymakers that the public wants policy to deliver unrealistically sized effects.)
08.05.2025 18:36

If you ask people what kinds of effects they'd need to decide to implement something new, they're much bigger than realistically sized effects in ed policy. We've decided collectively to pretend this isn't a problem and then get surprised at the backlash when it comes.
08.05.2025 18:34

Is there a name for the fallacy that, because things are different from each other, one cannot compare them? (If not, I propose the "apples and oranges fallacy")
@stefanschubert.bsky.social
Starting to feel like "don't look at the coefficients, just calculate whatever metric is relevant to your research question" is a highly underappreciated stats hack and also I may have to get myself a marginaleffects T-shirt.
29.04.2025 12:28

And you can think of the RI-CLPM as doing something like this too, using repeated measures of the same x over time.
22.04.2025 17:57

Not eloquently. But in the appendix of this paper, we show that a "multivariate intercept" model that does this (constraining all loadings to equality) reproduces patterns of causal impacts of some RCTs better than OLS (see Table S4 + Fig S1):
pmc.ncbi.nlm.nih.gov/articles/PMC...
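One way to write the idea in a single equation (my notation, not necessarily the paper's): each repeated measure splits into a stable person-level component with equal loadings and an occasion-specific residual,

```latex
x_{it} = \mu_t + \lambda_t \, \eta_i + w_{it}, \qquad \lambda_1 = \lambda_2 = \dots = \lambda_T = 1 ,
```

so the random intercept \(\eta_i\) absorbs stable between-person differences (including stable confounding), and the within-person deviations \(w_{it}\) carry the dynamics of interest.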
Do one for when people realize the extracted factor might be more useful as a *control* for estimating the effects of interest than as the key predictor of interest.
22.04.2025 17:39

You like good music and are in North Carolina: are you into Wednesday?
18.04.2025 05:24

The Paul Meehl Graduate School! Very cool.
03.04.2025 15:42

Ah got it, thanks. In this case, I guess I agree the link between theory and these statistics is often squishy!
03.04.2025 12:33

I think that's the way some people talk about types of validity and reliability. But I view (threats to) validity typologies as compatible with estimands: threats to validity are ways that mapping between estimates and estimands can go wrong!
02.04.2025 20:27

Incredibly excited to have this finally come out! Model evaluation should be about comparisons, so we have a metric that puts comparisons in predictive performance on a common scale. I can't make a thread about this better than @crahal.com, so I'll let him take it away.
28.03.2025 03:43

Surreal read of the day: a paper using USAID-funded and now terminated Demographic & Health Surveys to count the huge number of lives saved by the now frozen US PEPFAR program to fight HIV, co-authored by the current US admin's nominee to lead cuts in health research
jamanetwork.com/journals/jam...
After a long wait, the working paper for the Many-Economists Project: The Sources of Researcher Variation in Economics. We had 146 teams perform the same research three times, each time with less freedom. What source of freedom leads to different choices and results? papers.ssrn.com/sol3/papers....
25.02.2025 19:17

A clear and compelling read on IES. I hope policymakers pay attention to this. There is a very strong bipartisan case to be made for continuing to fund the development, evaluation, and syntheses of evaluations of educational programs.
12.02.2025 22:27

Check out my amazing colleague, collaborator, and leader of the Playful Learning Landscapes work in Orange County: news.uci.edu/2025/02/07/u...
10.02.2025 22:18