I've started puttering with shinylive/webr: github.com/coatless-tut...
05.03.2026 02:03
‼️ Postdoc recruitment
Want to help build and understand the future of scientific collaboration? We are seeking a postdoc in computational meta-science.
📍 UF (Gainesville, FL)
💰 $55–60k (1–3 years)
🧠 High intellectual agency
📅 Deadline March 10
Send us your idea. Details attached!
I bet that many faculty and/or alumni from institutions that had training grants would chip in to get something like this together. (I'm a Northwestern alum and faculty at UW–Madison.)
01.03.2026 20:35

Also this is basically the same thing as the CR3 cluster-robust standard error, implemented in clubSandwich: jepusto.github.io/clubSandwich/
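For intuition: CR3 is closely related to the leave-one-cluster-out jackknife. Below is a minimal numpy sketch of that jackknife variance for an OLS slope, with made-up data and model; it illustrates the idea only and does not reproduce clubSandwich's exact small-sample adjustments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative clustered data: 10 clusters of 20 observations (assumed numbers)
G, n = 10, 20
cluster = np.repeat(np.arange(G), n)
x = rng.normal(size=G * n)
y = 0.5 * x + rng.normal(size=G)[cluster] + rng.normal(size=G * n)
X = np.column_stack([np.ones(G * n), x])

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta = ols(X, y)

# Leave-one-cluster-out jackknife variance (the idea behind CR3-type estimators):
# re-fit G times, each time dropping one whole cluster
leave_out = np.array([ols(X[cluster != g], y[cluster != g]) for g in range(G)])
dev = leave_out - leave_out.mean(axis=0)
V = (G - 1) / G * dev.T @ dev
se_slope = np.sqrt(V[1, 1])
```

Because whole clusters are deleted, the variance estimate is robust to within-cluster dependence, which is why this family of estimators suits the small-sample cluster settings clubSandwich targets.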
26.02.2026 00:27

Visiting Poverty Scholars Program, 2026-2027

The Institute for Research on Poverty is calling for applications for its Visiting Poverty Scholars Program. The program funds up to four poverty scholars per year to visit IRP or any one of its U.S. Collaborative of Poverty Centers (CPC) partners for five days in order to interact with its resident faculty, present a poverty-related seminar, and become acquainted with staff and resources. Visiting scholars will confer with a faculty host, who will arrange for interactions with others on campus.

The application deadline is 11:59 p.m. Central on Friday, April 3, 2026.

Eligibility: Applicants must be PhD-holding, U.S.-based poverty scholars at any career level who are from economically disadvantaged backgrounds.
#FundSocSci
www.irp.wisc.edu/visiting-pov...
Map showing "One-year change in ZIP Code home prices between January 2025 and January 2026" with Wisconsin seeing some of the highest increases

it's almost like Wisconsin needs a statewide housing strategy…
21.02.2026 14:11
Thinking of running an RCT in postsecondary education?
MDRC has created a fantastic set of resources to help you project minimum detectable effect sizes, randomize, and process data
Proud to have helped advise this project!
www.mdrc.org/the-rct
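The core power calculation behind tools like these reduces to a short formula. A minimal sketch for an individually randomized two-arm trial, ignoring covariates and clustering (the MDRC resources handle far more realistic designs); all parameter defaults here are conventional assumptions, not taken from the MDRC materials:

```python
from statistics import NormalDist

def mdes(n_total, p_treat=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size, in standard deviation units, for a
    two-arm individually randomized trial with no covariate adjustment.
    p_treat is the share of the sample assigned to treatment."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)  # ~2.80 at defaults
    return multiplier / (p_treat * (1 - p_treat) * n_total) ** 0.5

mdes(400)  # about 0.28 SD with 200 per arm
```

Doubling precision requires quadrupling the sample: the MDES shrinks with the square root of N, which is why projecting it before recruitment matters so much.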
Aerial photo of Madison's state capitol building with both sides of the isthmus visible
Madison, Wisconsin β 2026
17.02.2026 02:22

Katie Fitzgerald and Beth Tipton (@statstipton.bsky.social) make a similar argument here: doi.org/10.3102/1076...
13.02.2026 22:50

(This is not solely about meta-analysis, either. I would argue the same if a field relied on narrative / interpretive review methods.)
13.02.2026 21:33

But I think it is critical that journals very carefully consider how their selection criteria might distort the published record in a way that hinders the systematic accumulation of evidence.
13.02.2026 21:32

I think we could agree that there's no need for journals to publish poorly conducted studies, e.g., where assignment to condition was haphazard, where implementation of an intervention was compromised, where there were major confounds, where instrumentation was bad, etc.
13.02.2026 21:28
Things that I did not assert and that I would not argue for:
1) that journals should publish all studies ever done
2) that journals should be indifferent to the nature of the evidence.
My argument was that the point of journals should be to curate the scientific record, and that this requires using systems of evaluation that allow for accumulation of evidence across individual studies.
13.02.2026 21:10

Relevant for both, yes, but I worry about the cure being worse than the disease. Sample reliability coefficients are noisy, so I think it's not obvious that one should routinely use them for artifact correction (for r or for d).
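To make the worry concrete: Spearman's disattenuation formula divides the observed correlation by the square root of the reliability estimates, so sampling noise in a reliability coefficient propagates directly into the corrected value. A sketch with illustrative numbers (not taken from the thread):

```python
from math import sqrt

def disattenuate(r_obs, rxx, ryy=1.0):
    """Spearman's correction for attenuation: estimated true-score
    correlation, given an observed r and reliability estimates for
    each variable (ryy defaults to 1 for a perfectly measured outcome)."""
    return r_obs / sqrt(rxx * ryy)

# Same observed r = .30, but sample reliability estimates wobbling
# around .70 across replications:
corrected = [disattenuate(0.30, rxx) for rxx in (0.60, 0.70, 0.80)]
# the corrected r ranges from about 0.34 to about 0.39
```

The observed r is identical in all three cases; only the noisy reliability estimate moves, yet the "corrected" correlation shifts by about 0.05, which is exactly the cure-worse-than-disease concern.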
13.02.2026 21:06

Hunter & Schmidt (2007, methods.sagepub.com/book/mono/me...) describe this as the artifact of direct range restriction. Much better known for correlations, but your example is a great illustration that the issue is relevant for SMDs too.
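The standard correction for direct range restriction (Thorndike's Case II, which Hunter & Schmidt also present) can be sketched in a few lines. The numbers below are illustrative assumptions, not examples from their text:

```python
from math import sqrt

def correct_direct_range_restriction(r_restricted, big_u):
    """Thorndike Case II correction for direct range restriction.
    big_u = SD of the selection variable in the unrestricted population
    divided by its SD in the restricted sample (> 1 under restriction)."""
    r = r_restricted
    return r * big_u / sqrt(1 + r * r * (big_u * big_u - 1))

correct_direct_range_restriction(0.30, 2.0)  # about 0.53
```

Note the correction is only as good as the SD ratio plugged into it; when that ratio is itself estimated from small samples, the same noise concern raised above for reliability coefficients applies here.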
13.02.2026 19:56

What about Hedges (2007, doi.org/10.3102/1076...)? He describes several different ways of defining SMDs for cluster-randomized experiments, though in practice I've only ever standardized by total variance.
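The three standardizations Hedges (2007) distinguishes are linked through the intraclass correlation, and comparing them shows how the choice of denominator drives the size of the SMD. A sketch with assumed numbers (the relations between the three definitions are standard; the inputs are made up):

```python
from math import sqrt

def smd_variants(delta, sd_total, icc):
    """Three SMD definitions for a cluster-randomized design, after
    Hedges (2007): standardize the raw mean difference `delta` by the
    total, within-cluster, or between-cluster SD. `icc` is the
    intraclass correlation (share of total variance between clusters)."""
    d_total = delta / sd_total
    d_within = d_total / sqrt(1 - icc)   # sd_within = sd_total * sqrt(1 - icc)
    d_between = d_total / sqrt(icc)      # sd_between = sd_total * sqrt(icc)
    return d_total, d_within, d_between

# A modest 0.4-SD effect with ICC = .05 (illustrative values)
d_t, d_w, d_b = smd_variants(delta=4.0, sd_total=10.0, icc=0.05)
```

With a small ICC, the between-cluster SD is a small fraction of the total SD, so d_between comes out several times larger than d_total for the same raw effect; this is how standardizing by the wrong denominator can manufacture an enormous-looking SMD.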
13.02.2026 19:54

I agree with your main point that d = 22 is ridiculous in substantive terms and should not be included in a meta-analysis. But I would also note that this is partly because there is no universal SMD metric. There are many different ways of defining SMDs, which are not all commensurable.
13.02.2026 18:58

which will usually be only a small part of the total variation in scores. I would think that the GRIM calculations would need to take this into account to determine whether a set of reported scores is plausible or not. Does your PubPeer comment do so? I couldn't tell from what you wrote.
13.02.2026 18:53

In this article, the Ms and SDs in Table 2 are calculated by first averaging the individual scores at the classroom level, and then taking M and SD across classrooms (of which there were only a few per condition). So, roughly, the SD in the SMD is based only on between-classroom variation...
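For reference, the basic GRIM test asks whether a reported mean of n integer scores can round to the published value. The thread's point is that means built from classroom averages have much finer granularity, so this individual-level version would misfire there; the sketch below is only the standard form, with illustrative inputs:

```python
def grim_consistent(reported_mean, n, decimals=2):
    """Basic GRIM test: can a mean of n integer-valued scores round to
    reported_mean at the given number of decimals? Means of classroom
    averages have finer granularity, so this simple version does not
    apply directly to the cluster-averaged case discussed above."""
    total = round(reported_mean * n)
    # check integer totals adjacent to the implied sum
    return any(round(t / n, decimals) == round(reported_mean, decimals)
               for t in (total - 1, total, total + 1))

grim_consistent(3.48, 25)  # True: 87 integer points / 25 = 3.48 exactly
grim_consistent(3.47, 25)  # False: no integer total over 25 rounds to 3.47
```

With K classrooms of n students each, plausible grand means are instead (approximately) multiples of 1/(nK), so a GRIM check on cluster-averaged statistics would need that finer grid.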
13.02.2026 18:50

It must be very hard to publish null results

Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
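The order-of-magnitude claim can be sanity-checked with simple odds arithmetic. This is not the paper's calibrated model; the 50% underlying null share below is an assumption chosen purely for illustration:

```python
def implied_selection_ratio(published_null_share, true_null_share):
    """Back-of-envelope check: if `true_null_share` of completed studies
    are null but only `published_null_share` of published ones are, how
    much more likely is a significant result to be published than a null?"""
    pub_odds = published_null_share / (1 - published_null_share)
    true_odds = true_null_share / (1 - true_null_share)
    return true_odds / pub_odds

implied_selection_ratio(0.02, 0.50)  # roughly 49x under these assumptions
```

Even if only a quarter of completed studies were truly null, the implied ratio stays above an order of magnitude, which is consistent with the paper's one-to-two-orders-of-magnitude estimate.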
11.02.2026 17:00

Aha, very interesting. I would be interested to hear more about whatever alternative dissemination model you have in mind. (I'm by no means a proponent of the current journal-focused system, but I also don't have any vision for a better way to run things.)
11.02.2026 23:07

I would also push back on the idea that a non-significant finding = no new knowledge. A precisely estimated zero might well amount to new knowledge: knowledge that an intervention is ineffective or that there is no relation between two constructs.
11.02.2026 20:55

The purpose of journals is to build a scientific record, so if it's difficult-to-impossible to accumulate and build, then there's something very wrong.
11.02.2026 20:53

Among other reasons, selecting on statistical significance makes it much, much more difficult to accumulate evidence across studies, whether using quantitative meta-analysis methods or other synthesis techniques.
11.02.2026 20:51

More of this type of careful meta-research, please. #SystematicReview #MetaAnalysis
08.02.2026 20:19

Groundhog Harassed By Dipshits In Stupid Hats
02.02.2026 19:00

If you're not already familiar, you might like Reichardt (2011, doi.org/10.1002/ev.364).
24.01.2026 22:30