Awesome, this is what I was looking for! Will definitely test it out today, thank you for sharing!
27.02.2026 07:10
@anoukbouma.bsky.social
PhD candidate studying Monte Carlo simulations in the social/behavioral sciences | Meta-Research Center, Tilburg University | Board member at the Platform for Young Meta-Scientists (PYMS)
Right? Seems like a missing link!
26.02.2026 19:13

It's doable to write something like that, sure, but I would have expected there to already be a basic counterpart function to sessionInfo().
26.02.2026 18:39

Yes, I always use renv myself! But I wanted to reproduce an analysis by someone else who only shared sessionInfo() output. So I wondered whether there isn't an easy way to automatically restore the environment from that output.
26.02.2026 18:29

Yes, packages in the right versions, and ideally also the R version.
26.02.2026 17:42
For basic reproducibility, sharing sessionInfo() output is sometimes recommended.
But I can't find a function to automatically install the right package versions (let alone the right R version).
Do you install them by hand? Write your own code to install them automatically? That seems cumbersome for output that is standardized.
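As far as I know there is indeed no built-in installer that consumes sessionInfo() output directly. A minimal sketch of the "write your own code" route, assuming you copy the package versions out of the shared printout by hand (the package names and version numbers below are invented for illustration, not taken from any real sessionInfo()):

```r
# Versions transcribed by hand from someone else's sessionInfo() printout
# (hypothetical examples, not from the thread):
pkgs <- c(dplyr = "1.1.4", ggplot2 = "3.5.1")

# remotes::install_version() can install a specific version from CRAN's archive
install.packages("remotes")
for (pkg in names(pkgs)) {
  remotes::install_version(pkg, version = pkgs[[pkg]], upgrade = "never")
}

# The R version itself cannot be switched from inside a running R session;
# tools like rig, or Docker images such as rocker/r-ver, are the usual route.
```

This only pins CRAN packages; for packages from GitHub or Bioconductor, or for a fully captured environment, an renv lockfile shared by the original authors remains the more reliable option.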
Preprint! 📢
We examined reporting and open science practices in simulation studies in psychology with a questionnaire.
Importantly, we asked "why?"
Why were results omitted? Why weren't MCSEs reported?
Also: how do researchers evaluate simulation studies in their field?
doi.org/10.31234/osf...
🧵
I've talked about this with some of the developers here at the Bennett Institute over the years, and they don't think CC is a great fit for code; while MIT isn't perfect, it does the job better. Honestly, we probably need some legal minds to come up with a new license type for open-science usage.
23.02.2026 16:40

That's perfect, thank you!
23.02.2026 17:06
Yes, because the code is not really 'software'; it is shared for reproducibility.
As far as I understand, no license strictly means that no one could ever use the code again (does that also apply to reproducing the paper?).
Because I want people to be free to do whatever they like with it, I wanted to license it.
Ah thank you!
So what license do you use for analysis code that belongs to a paper? The MIT license?
On GitHub, the CC BY 4.0 International license is not one of the standard options when creating a repository.
Does anyone know why that is? Is the MIT license somehow more appropriate for code? Permission to sell seems so strange to me...
But I know rather little about licensing. #helpmechoose
It's been a week, but we left inspired after #PSE8 in Leiden.
Thanks to all who participated in our mentor-mentee lunch! Hopefully you all had interesting conversations, and made new connections.
ECR and want to see more of PYMS? Sign up for the mailing list: tinyurl.com/3mkn6f2a
"An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library." Pubpeer, journals are next!
theshamblog.com/an-ai-agent-...
Leif's #PSE8 keynote was ridiculously good, basically a live episode of @datacolada.bsky.social ... if anyone deserves a detective/sitcom series based on their work, it's Leif, not Ariely. Take note, @netflix.com
12.02.2026 14:00
Thanks to everyone who was interested in my poster for the great conversations and discussions!
The preprint on reporting, open science, and trustworthiness in simulation studies is available here: osf.io/jn9sy_v2
Ready for day two of #PSE8!
Reporting Practices, Open Science Practices, and Trustworthiness of Simulation Studies in Psychology: A Questionnaire Study: https://osf.io/jn9sy
05.02.2026 15:08
Thanks to my supervisors for the collaboration on this project!
Marcel van Assen, Robbie van Aert, and @liekevoncken.bsky.social
Preprint: doi.org/10.31234/osf...
We investigated more practices (e.g., guidelines, preregistration, reproducibility measures, etc.) and have too many interesting results to list here.
We shared all open-ended answers in our supplements, which I think give interesting context to our results.
Researchers estimated the probability that a typical simulation study in their field has trustworthy conclusions at .74, higher than their estimates for reproducibility (.64) and comprehensive reporting (.55).
This indicates that the last two are not always seen as prerequisites for trustworthiness.
We also investigated why articles did not disclose the number of missing values and nonconvergent iterations (panel A) and failed to report MCSEs (panel B).
09.02.2026 10:43

Reasons for selective reporting mainly came down to the academic demand for a streamlined presentation: focus was placed on relevant results and readability, and choices had to be made because of journal requirements.
09.02.2026 10:43
Only 19% of articles in our sample were neutral (authors of the article were not involved in developing any of the methods under evaluation in the simulation).
We did not find evidence that selective reporting was less prevalent in neutral studies.
Selective reporting (i.e., results being either omitted entirely or split between the body of the paper and the supplementary materials) occurred at least once in 50.2% of simulation studies, across conditions, methods, and performance measures.
09.02.2026 10:43
I wrote a blog for the Meta-Research Center expressing my infinite frustration about not getting data. What else is new, you might think? Well, I added an extra layer of annoyance directed at the journals who do NOTHING to enforce promised data sharing.
metaresearch.nl/blog/2026/2/...
🎉 PYMS just passed 100 members on Discord!
Huge thanks to everyone who joined our growing community of young meta-scientists 🧡
Want to be part of the server too? Send us a DM!
Thanks for the blog, interesting! This makes sense.
The most important thing, I think, is that most people are unaware that this is how seeds 'behave'. Especially when people set the same seed multiple times in their code, or across parallel workers, weird correlations can end up in the simulated data.
I do think it's interesting to know, but I only discovered this correlation issue some time ago by playing around in R after reading the seed paragraph in the paper you posted in this thread. 'Figuring out seeds' is on my list for when I have some time to play around with it more.
08.01.2026 13:00

Haven't looked into it that deeply, but that would be interesting to figure out.
08.01.2026 12:54
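A minimal R illustration of the seed point above: re-setting the same seed restarts the generator, so two nominally independent simulation runs become perfectly dependent (the seed value and sample size here are just for illustration):

```r
set.seed(123)
x <- rnorm(1e4)   # draws for "condition 1"

set.seed(123)     # same seed set again, e.g. by accident or once per worker
y <- rnorm(1e4)   # draws for "condition 2"

identical(x, y)   # TRUE: the two runs are identical, not independent
cor(x, y)         # exactly 1
```

For parallel simulations, R's own recommendation is to use streams designed for that purpose (e.g. RNGkind("L'Ecuyer-CMRG") with parallel::nextRNGStream()) rather than hand-picked seeds per worker.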