
Anouk Bouma

@anoukbouma.bsky.social

PhD candidate studying Monte Carlo simulations in the social/behavioral sciences | Meta-Research Center, Tilburg University | Board member at the Platform for Young Meta-Scientists (PYMS)

211 Followers  |  243 Following  |  62 Posts  |  Joined: 02.10.2024

Posts by Anouk Bouma (@anoukbouma.bsky.social)

Awesome, this is what I was looking for! Will definitely test it out today, thank you for sharing!

27.02.2026 07:10 — 👍 3    🔁 0    💬 0    📌 0

Right? Seems like a missing link!

26.02.2026 19:13 — 👍 3    🔁 0    💬 1    📌 0

It's doable to write something like that, sure, but I would have just expected there to already be a built-in counterpart to sessionInfo()

26.02.2026 18:39 — 👍 1    🔁 0    💬 1    📌 0

Yes, I always use renv myself! But I wanted to reproduce an analysis by someone else who only shared sessionInfo() output. So then I wondered if there isn't an easy way to automatically update the environment using that output

26.02.2026 18:29 — 👍 1    🔁 0    💬 1    📌 0

Yes, packages in the right version, and ideally also the R version

26.02.2026 17:42 — 👍 0    🔁 0    💬 1    📌 0

For basic reproducibility, sharing sessionInfo() output is sometimes recommended

But I can't find a function that automatically installs the right package versions (let alone the R version)

Do you install them by hand? Write your own code to install automatically? Seems cumbersome for output that is standardized

26.02.2026 09:10 — 👍 4    🔁 1    💬 4    📌 0
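A minimal R sketch of the "write your own code" option, using remotes::install_version(); the package names and version numbers below are made-up placeholders, not taken from any real sessionInfo() output:

```r
# Hypothetical sketch: pin package versions transcribed by hand from a
# shared sessionInfo() printout (placeholder names/versions, not real ones).
# Requires the 'remotes' package: install.packages("remotes")
pkgs <- c(dplyr = "1.1.4", ggplot2 = "3.4.4")  # copied manually from sessionInfo()

for (p in names(pkgs)) {
  remotes::install_version(p, version = pkgs[[p]], upgrade = "never")
}

# The R version itself is not handled here; matching it still needs a tool
# like rig or a versioned Docker image (e.g. rocker/r-ver).
```

The transcription step stays manual, since there seems to be no built-in function that parses sessionInfo() output directly.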
Post image

Preprint! 📢

We examined reporting and open science practices in simulation studies in psychology with a questionnaire

Importantly, we asked 'why?'

Why were results omitted? Why weren't MCSEs reported?

Also: how do researchers evaluate simulation studies in their field?

doi.org/10.31234/osf...
🧵

09.02.2026 10:43 — 👍 7    🔁 2    💬 1    📌 0

I've talked about this with some of the developers here at the Bennett Institute over the years, and they don't think CC is a great fit for code; while MIT isn't perfect, it does the job better. Honestly, we probably need some legal minds to come up with a new license type for open science usage.

23.02.2026 16:40 — 👍 2    🔁 1    💬 0    📌 0

That's perfect, thank you!

23.02.2026 17:06 — 👍 1    🔁 0    💬 0    📌 0

Yes, because the code is not really 'software'; it is shared for reproducibility

As far as I understand, no license strictly means that no one could ever use the code again (does that also mean to reproduce the paper?)

Because I want people to be free to do whatever they like with it, I wanted to license it

23.02.2026 15:02 — 👍 1    🔁 0    💬 1    📌 0

Ah thank you!

So what license do you use for analysis code that belongs to a paper? The MIT license?

23.02.2026 14:51 — 👍 0    🔁 0    💬 1    📌 0

On GitHub, the CC BY 4.0 International license is not one of the standard options when creating a repository.

Does anyone know why that is? Is the MIT license more appropriate for code somehow? Permission to sell seems so strange to me...

But I know rather little about licensing #helpmechoose

23.02.2026 13:03 — 👍 5    🔁 0    💬 2    📌 1

It's been a week, but we left inspired after #PSE8 in Leiden.

Thanks to all who participated in our mentor-mentee lunch! Hopefully you all had interesting conversations, and made new connections.

ECR and want to see more of PYMS? Sign up for the mailing list: tinyurl.com/3mkn6f2a

23.02.2026 10:14 — 👍 8    🔁 3    💬 0    📌 0

"An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library." Pubpeer, journals are next!
theshamblog.com/an-ai-agent-...

13.02.2026 10:03 — 👍 13    🔁 9    💬 1    📌 1

Leif's #PSE8 keynote was ridiculously good, basically a live episode of @datacolada.bsky.social ... if anyone deserves a detective/sitcom series based on their work, it's Leif, not Ariely – take note @netflix.com

12.02.2026 14:00 — 👍 8    🔁 3    💬 1    📌 0
Post image

Thanks to everyone who was interested in my poster for the great conversations and discussions!

The preprint on reporting, open science, and trustworthiness in simulation studies is available here: osf.io/jn9sy_v2

Ready for day two of #PSE8!

12.02.2026 07:25 — 👍 17    🔁 3    💬 0    📌 0

Reporting Practices, Open Science Practices, and Trustworthiness of Simulation Studies in Psychology: A Questionnaire Study: https://osf.io/jn9sy

05.02.2026 15:08 — 👍 4    🔁 2    💬 0    📌 0

Thanks to my supervisors for the collaboration on this project!

Marcel van Assen, Robbie van Aert, and @liekevoncken.bsky.social

Preprint: doi.org/10.31234/osf...

09.02.2026 10:43 — 👍 0    🔁 0    💬 0    📌 0

We investigated more practices (e.g., guidelines, preregistration, reproducibility measures, etc.) and have too many interesting results to list here.

We shared all open-ended answers in our supplements, which I think give interesting context to our results

09.02.2026 10:43 — 👍 0    🔁 0    💬 1    📌 0

Researchers estimated the probability that a typical simulation study in their field has trustworthy conclusions at .74, which was higher than the estimates for reproducibility (.64) and comprehensive reporting (.55)

Indicating that the last two are not always seen as prerequisites for trustworthiness

09.02.2026 10:43 — 👍 1    🔁 0    💬 1    📌 0
Post image

We also investigated why articles did not disclose the number of missing values and nonconvergent iterations (panel A) and failed to report MCSEs (panel B).

09.02.2026 10:43 — 👍 0    🔁 0    💬 1    📌 0
Post image

Selective reporting was mainly attributed to the academic requirement of streamlined presentation: focus was placed on relevant results and readability, and choices had to be made because of journal requirements.

09.02.2026 10:43 — 👍 0    🔁 0    💬 1    📌 0
Post image

Only 19% of articles in our sample were neutral (authors of the article were not involved in developing any of the methods under evaluation in the simulation).

We did not find evidence that selective reporting was less prevalent in neutral studies

09.02.2026 10:43 — 👍 0    🔁 0    💬 1    📌 0
Post image

Selective reporting (i.e., results being either omitted entirely or split between the body of the paper and supplementary materials) occurred at least once in 50.2% of simulation studies across conditions, methods, and performance measures

09.02.2026 10:43 — 👍 0    🔁 0    💬 1    📌 0
Promised Data Unavailable? – I'm Sorry, Ma'am, There's Nothing We Can Do — Meta-Research Center This blogpost has been written by Michèle Nuijten. Michèle is an assistant professor of our research group who investigates reproducibility and replicability in psychology. Also, she is the developer ...

I wrote a blog for the Meta-Research Center expressing my infinite frustration about not getting data. What else is new, you might think? Well, I added an extra layer of annoyance directed at the journals who do NOTHING to enforce promised data sharing.

metaresearch.nl/blog/2026/2/...

03.02.2026 15:03 — 👍 59    🔁 36    💬 6    📌 4

🎉 PYMS just passed 100 members on Discord!

Huge thanks to everyone who joined our growing community of young meta-scientists 🧡

Want to be part of the server too? Send us a DM!

21.01.2026 13:04 — 👍 7    🔁 2    💬 0    📌 0

Thanks for the blog, interesting! This makes sense.

The most important thing, I think, is that most people are unaware that this is how seeds 'behave'. Especially when people set the same seed multiple times in their code or through parallelization, weird correlations can end up in simulated data

08.01.2026 13:17 — 👍 2    🔁 0    💬 0    📌 0
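A small R toy example (my own illustration, not from the thread) of why resetting the same seed makes supposedly independent draws identical, and hence perfectly correlated:

```r
# Resetting the same seed restarts the RNG stream from the same state,
# so two supposedly independent samples are exact copies of each other.
set.seed(123)
x <- rnorm(5)

set.seed(123)  # same seed set again, e.g. by accident or per parallel worker
y <- rnorm(5)

identical(x, y)  # TRUE: y is a duplicate of x, not an independent draw
cor(x, y)        # 1: perfect (spurious) correlation in the "simulated data"
```

For parallel simulations, independent streams (e.g. via R's "L'Ecuyer-CMRG" RNG kind) avoid this, rather than reusing one seed per worker.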

I do think it's interesting to know, but I only discovered this correlation issue some time ago by playing around in R after reading the seed paragraph in the paper you posted in this thread. 'Figuring out seeds' is on my list for when I have some time to play around with it more

08.01.2026 13:00 — 👍 1    🔁 0    💬 1    📌 0

Haven't looked into it that deeply, but that would be interesting to figure out

08.01.2026 12:54 — 👍 1    🔁 0    💬 0    📌 0