You must be pretty sure indeed - who would review for Scientific Reports???
Publisher value-adding intensifies. (I'm pretty sure I didn't click the "Accept" button, so I guess maybe my not clicking "Decline" either counts as acceptance.)
Also, someone should study this! People out in the cold, liminal spaces between institutional fields, the boundaries of which are policed by intellectual gatekeepers. STS, metascience, platform capitalism? I don't know, you're the experts. There are lots of these folks, and they are easy to find.
/5
Actually, a few days ago I was hovering over the block button for you. It seems like you don't add much, and just take away time with your comments on here. I might still do it, so don't be surprised if we never interact again.
I would prefer it if they weren't needed. We have been discussing this topic for more than a decade. There are so many places and ways you could have learned this information by now. To me it almost feels like you'd need to actively try not to know the answer by now...
This is explained in my textbook, chapters on sample size justification and sequential analysis. Maybe you prefer a course over self-study. Then sign up here
statisticalhorizons.com/seminars/sam...
I turned them on with 2 strict templates. Felt a bit early though - what were you thinking of adding? Based on which version?
Yes, because I am not interested in people submitting issues at this moment.
If this is an honest question, you might want to read online.ucpress.edu/collabra/art..., especially the sections on setting a smallest effect size of interest and on sequential analysis. Those are the solutions people use to address this problem.
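Sequential analysis, as mentioned in the post, lets you analyze the data at planned interim looks while keeping the overall Type 1 error rate at alpha, by "spending" alpha across looks. A minimal Python sketch of an O'Brien-Fleming-type alpha-spending function (the formula is a standard one; the specific look times are assumptions chosen for illustration, not from the post):

```python
from statistics import NormalDist

def obf_alpha_spent(t: float, alpha: float = 0.05) -> float:
    """O'Brien-Fleming-type alpha-spending function for a two-sided test:
    alpha*(t) = 2 - 2 * Phi(z_{1-alpha/2} / sqrt(t)),
    where t is the information fraction collected so far (0 < t <= 1)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    return 2 - 2 * z.cdf(z_alpha / t ** 0.5)

# Cumulative alpha spent at three planned looks (25%, 50%, 100% of the data).
# Very little alpha is spent early; the full 0.05 is available at the end.
for t in (0.25, 0.50, 1.00):
    print(f"t = {t:.2f}: alpha spent so far = {obf_alpha_spent(t):.5f}")
```

Note the design property this buys you: early looks use very strict thresholds, so stopping early for efficacy is only possible for large effects, and the final analysis loses almost no power.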
Power After the Results are Known (PARKing) floggingpvalues.blog/2026/03/13/p...
😂
I wish more people knew this. Power analysis should be based on the smallest effect size of interest. Not on a guess, or a hope. You also need to specify that effect to make your claim falsifiable, and to know when the effect is statistically significant, but practically irrelevant.
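The point above can be made concrete with a normal-approximation sample-size calculation. This is an illustrative Python sketch only (the d = 0.5 SESOI, alpha, and power values are assumptions chosen for the example, not from the post):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sesoi: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate sample size per group for a two-sided two-sample t-test,
    powered for the smallest effect size of interest (Cohen's d), using the
    normal approximation n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_power = z.inv_cdf(power)          # quantile for the desired power
    return ceil(2 * ((z_alpha + z_power) / sesoi) ** 2)

# Powering for the smallest effect of interest, not a guessed or hoped-for one:
print(n_per_group(sesoi=0.5))  # 85 participants per group
```

Because the study is powered for the SESOI, a non-significant result is informative: effects at least as large as the smallest effect you care about have been made unlikely, which is what makes the claim falsifiable.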
I already made inline equations work! There is a new button for this. You run a code chunk and click the button. In the code editor we of course do not change the code; you get a pop-up with the result. But you do see the result in the preview window. So this problem seems solved, right?
Yes. It uses WebR now. So you are just locally running R. Inline already works.
Yes, made it yesterday evening and it works. What would you want it to do, as I am adding features?
New Nullius In Verba episode on miscitations:
nulliusinverba.podbean.com/e/ep-77-misc...
How often are citations to the scientific literature outright misleading? Do we really need to spell out that people are supposed to read what they cite? And should we review citations as they do in law?
Everyone should wait a few weeks for him to debug, but what @lakens.bsky.social did with my brittle idea is great; it'll become a huge success (and I'll happily contribute if he'll let me; I added it into a full app here, for example). His iteration seems the better idea here, so I'm happy someone did this!
That is perfect! Thank you!! Will make that change. And keep you posted :)
You can find my fork here github.com/Lakens/Quart... but it is under extremely active development, and probably not fully documented, so maybe check in end of next week? It is functional, but setting it up might still be some effort without assistance at the moment.
I have been wanting a solution like this for years. Now with Claude Code it is surprisingly easy to build it. Added WebR to run R code, Zotero plugin, a Diff viewer (can export all changes between submission and resubmission), basically ready to be used in our lab!
New Paul Meehl Graduate School workshop announced: Computational Cognitive Models
paulmeehlschool.github.io/2026-03-10-c...
Please note that the workshop will be exclusively in-person on April 20, 2026.
It goes without saying, but of course good to check :) We will make all code available openly, and try to develop it to be as widely accessible as possible, and credit your work and ideas! Thanks!
Good to hear! I will keep you posted. Right now, two people can set it up locally and work by taking turns. A server version with collaborative writing also seems very feasible. Not sure I would need it, but it might be useful for others.
We will first try it in the lab, then keep you posted.
I am very proud of our students, arranging this spontaneous counter protest! Well done!
Hi Michel, I took the source code of Resolve and used Claude to turn it into a locally running editor to directly comment on Qmd files. I added some features (webR to run R chunks, spelling check, Zotero integration, dark mode). It seems to work well! What were your plans with it?
Tracking and mainstreaming replications in the social, cognitive, and behavioral sciences: https://doi.org/10.31222/osf.io/ad2w6_v1
If you do not currently have a reproducible workflow in which you share data and code where possible, I expect you will soon not be able to publish in good journals.
Beyond being best practice, journals will use this to identify papers written by AI.
Plan a new project accordingly.
in some papers, which is arguably the best reason to deviate from 0.05, especially (as we explain in 2022) when making practical decisions. So, there is my view on this :)
a massive author list as an authority argument to push a ridiculously dumb idea. I mean, we don't want a science like that, we can all agree. So we intentionally recruited more authors to 'win' but I do not expect any to justify alpha levels (and .05 is fine). We did compromise power >
2022, see journals.sagepub.com/doi/pdf/10.1... And in most cases it is not possible. In these cases I have become a much stronger proponent of .05. See journals.sagepub.com/doi/10.1177/... for a general defense.
Finally 4) many just signed on to push back to 'eminent scholars' using >