
Yashvin Seetahul

@yashvin.bsky.social

Psychology Postdoctoral Researcher | Aggression, Emotion, Methods, Cumulative Science, Partially Overlapping Density Plots, Nontrailblazing Discoveries

236 Followers  |  707 Following  |  91 Posts  |  Joined: 26.09.2023

Posts by Yashvin Seetahul (@yashvin.bsky.social)

The raincloud part is a bit overkill imo

The two density plots with the lines are great!

27.02.2026 23:39 — 👍 1    🔁 0    💬 1    📌 0
Knowledge centre META/e: home for those improving science

TU/e has gained a new research centre: META/e. Daniël Lakens and Krist Vaesen were among the founders of this knowledge hub for metascience—research aimed at improving the practice of science itself. “We want to be a home for every researcher who occasionally wonders: what are we even doing?”

13.11.2025 14:50 — 👍 26    🔁 13    💬 0    📌 1

Thumbnail is emptiness, and emptiness is thumbnail.

Here's a link to the video tho, in case anybody is wondering what they're apologizing for 😂

www.youtube.com/watch?v=XG-6...

21.02.2026 07:32 — 👍 76    🔁 13    💬 5    📌 2

A great new preprint on the importance of pilot studies for the validity of the studies that are performed. Such an important topic that is discussed too little. I especially liked the section on the need for transparent reporting. osf.io/t968e_v1 By @yashvin.bsky.social and collaborators.

20.02.2026 15:55 — 👍 12    🔁 7    💬 1    📌 0
1 Daniël Lakens: "The role of background assumptions in severity appraisal" (YouTube video by Error Statistics)

Thanks for sharing and for the feedback, Daniël! I actually got the original idea for this paper while listening to your talk about severity vs validity when having to deviate from a pre-registered plan (youtu.be/LfZqE4e3w-k?...)

20.02.2026 17:24 — 👍 2    🔁 1    💬 1    📌 0

Check out our preprint: "What Pilot Studies Can (and Cannot) Do for Validity in Psychological Research"

Great job @yashvin.bsky.social and @mbneff.bsky.social for leading!

doi.org/10.31234/osf...

16.02.2026 10:38 — 👍 16    🔁 8    💬 0    📌 0
Behavioural science is unlikely to change the world without a heterogeneity revolution - Nature Human Behaviour

For more on the "heterogeneity revolution" see Bryan et al. (2021).

#MetaSci #PsycSci

08.02.2026 18:42 — 👍 9    🔁 1    💬 0    📌 0
Psychology needs a… heterogeneity revolution | BPS. Audrey Linden argues we should drop the assumption that interventions will have a single, underlying effect size.

A Heterogeneity Revolution in Psychology

"When studies that may appear similar are repeated, findings often vary more than we would expect due to sampling error. This is not necessarily a problem if we understand why this happens."
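The quote's claim, that findings vary more than sampling error alone predicts, can be made concrete with a small simulation. This is my own illustration, not code from the linked article, and all numbers are made up: under homogeneity, Cochran's Q hovers around its degrees of freedom; when true effects genuinely differ across studies, Q inflates and the DerSimonian-Laird estimator attributes the excess to between-study variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def cochran_q(effects, variances):
    """Cochran's Q: weighted squared deviations from the fixed-effect mean."""
    w = 1.0 / variances
    mean = np.sum(w * effects) / np.sum(w)
    return np.sum(w * (effects - mean) ** 2)

k = 20                               # number of studies
se2 = np.full(k, 0.01)               # each study's sampling variance

# Homogeneous world: every study estimates the same true effect (0.3)
homog = rng.normal(0.3, np.sqrt(se2))
# Heterogeneous world: the true effects themselves vary across studies (tau = 0.2)
hetero = rng.normal(rng.normal(0.3, 0.2, size=k), np.sqrt(se2))

q_homog = cochran_q(homog, se2)
q_hetero = cochran_q(hetero, se2)

# Under homogeneity, Q ~ chi-square with k-1 = 19 df, so values near 19 are typical;
# real heterogeneity pushes Q well above that.
print(q_homog, q_hetero)

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1.0 / se2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q_hetero - (k - 1)) / c)
print(tau2)
```

Understanding *why* Q exceeds its degrees of freedom (design differences, populations, dosage) is exactly the point of the heterogeneity discussion above.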

08.02.2026 18:42 — 👍 17    🔁 7    💬 1    📌 1

As for the ad hominem attacks, I suggest you reread what you wrote just before.
You refused to respond to a recap of the effects found in all the meta-analyses of the short-term effects of violent video games because two particular people appear in it.

07.02.2026 17:17 — 👍 0    🔁 0    💬 0    📌 0

Those are the meta-analyses, not the "studies" in the screenshot. And as you can see, Ferguson detects larger effects.

(Also, it's a bit hypocritical to talk about dishonesty: didn't you publish a meta-analysis with Pascual? How many of his papers are fraudulent again?)

07.02.2026 17:07 — 👍 0    🔁 0    💬 1    📌 0
Post image

Saying that makes absolutely no sense... it's like saying "if you're not drunk after one sip of alcohol, then you can't be drunk after 100 sips."
A detected effect will always depend on the dose of the stimulus.
Moreover, it has never been demonstrated that there is no short-term effect.

07.02.2026 16:53 — 👍 0    🔁 0    💬 1    📌 0
Promised Data Unavailable? – I’m Sorry, Ma’am, There’s Nothing We Can Do — Meta-Research Center. This blog post was written by Michèle Nuijten, an assistant professor in our research group who investigates reproducibility and replicability in psychology.

I wrote a blog for the Meta-Research Center expressing my infinite frustration about not getting data. What else is new, you might think? Well, I added an extra layer of annoyance directed at the journals who do NOTHING to enforce promised data sharing.

metaresearch.nl/blog/2026/2/...

03.02.2026 15:03 — 👍 59    🔁 36    💬 6    📌 4

Methodology. European Journal of Research Methods for the Behavioral and Social Sciences

07.02.2026 10:33 — 👍 1    🔁 0    💬 0    📌 0
A structural after measurement approach to structural equation modeling - PubMed. In structural equation modeling (SEM), the measurement and structural parts of the model are usually estimated simultaneously. In this article, we revisit the long-standing idea that we should first estimate the measurement part, and then estimate the structural part. We call this the "structural-after-measurement" approach.

pubmed.ncbi.nlm.nih.gov/36355708/

This is not Bayesian, but perhaps it can help?
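The abstract above describes the two-step logic. The sketch below is only a toy numpy illustration of that logic on simulated data, using item averages as a crude stand-in for a fitted measurement model; it is not the paper's actual SAM estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: two latent variables, each measured by three noisy indicators
eta_x = rng.normal(size=n)
eta_y = 0.5 * eta_x + rng.normal(scale=0.8, size=n)     # true structural slope = 0.5
x_items = eta_x[:, None] + rng.normal(scale=0.6, size=(n, 3))
y_items = eta_y[:, None] + rng.normal(scale=0.6, size=(n, 3))

# Step 1 (measurement): score each latent variable first.
# The real structural-after-measurement approach fits the measurement model
# by SEM; simple item averages only stand in for that step here.
x_score = x_items.mean(axis=1)
y_score = y_items.mean(axis=1)

# Step 2 (structural): estimate the structural relation from the fixed scores
slope = np.polyfit(x_score, y_score, 1)[0]
print(slope)  # estimate of the structural slope (true value 0.5)
```

Residual measurement error in the crude scores tends to attenuate the slope, which is one reason the paper fits a proper measurement model in step 1.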

07.02.2026 10:25 — 👍 1    🔁 0    💬 1    📌 0

👀

19.12.2025 17:16 — 👍 3    🔁 0    💬 0    📌 0
Changing Minds: When Do People Resist Scientific Findings? | SPSP. Research can change minds, but people who feel personally targeted may push back.

Changing Minds: When Do People Resist Scientific Findings?

When scientific findings touch on people's identities or values, people don't simply weigh the evidence. They also try to protect their beliefs, self-image, and more.

Read more: spsp.org/news/charact...

17.12.2025 19:54 — 👍 3    🔁 2    💬 1    📌 1

Congratulations!!

17.12.2025 15:35 — 👍 1    🔁 0    💬 0    📌 0
Video thumbnail

We built the openESM database:
▶️ 60 openly available experience sampling datasets (16K+ participants, 740K+ obs.) in one place
▶️Harmonized (meta-)data, fully open-source software
▶️Filter & search all data, simply download via R/Python

Find out more:
🌐 openesmdata.org
📝 doi.org/10.31234/osf...

22.10.2025 19:34 — 👍 277    🔁 144    💬 14    📌 14
Post image

Preregistrations without Code do not Prevent P-Hacking: you can increase your chances of a significant finding in the absence of real effects, even with correlations and t-tests, despite having preregistered your hypothesis (e.g., by simply changing arguments in the functions).

doi.org/10.31222/osf...
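A minimal simulation of the mechanism the preprint describes (my own sketch, not the preprint's code): on pure noise, an analyst who tries Pearson, Spearman, and Kendall correlations and keeps the smallest p-value rejects at least as often as one who commits to a single test, even though "a correlation test" was preregistered.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, n_sims, alpha = 30, 1000, 0.05
fixed = flexible = 0

for _ in range(n_sims):
    x, y = rng.normal(size=n), rng.normal(size=n)   # no true effect anywhere
    p_pearson = stats.pearsonr(x, y)[1]
    # "Flexible" analyst: try every method and keep the smallest p-value
    p_min = min(p_pearson,
                stats.spearmanr(x, y)[1],
                stats.kendalltau(x, y)[1])
    fixed += p_pearson < alpha
    flexible += p_min < alpha

# The flexible strategy can only match or exceed the fixed strategy's
# false-positive count, since min(p) <= p_pearson in every simulation.
print(fixed / n_sims, flexible / n_sims)
```

Preregistering the exact analysis code, rather than just the hypothesis, removes this degree of freedom.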

25.11.2025 12:33 — 👍 10    🔁 9    💬 0    📌 1

I made an error that I found thanks to @lukaswallrich.bsky.social. The code and preprint have been corrected/updated. The effects on false-positive rates are smaller, and only appear for the alternative and correlation-method arguments, but I think the overall argument still works.

04.12.2025 23:12 — 👍 14    🔁 6    💬 1    📌 1

Check out www.scienceverse.org/metacheck/ by @debruine.bsky.social and @lakens.bsky.social

03.12.2025 14:27 — 👍 0    🔁 0    💬 0    📌 0

Metacheck can now automatically 1) download a preprint from PsyArXiv, 2) create a report of the checks (stats, reporting guidelines, references, code, etc.), 3) retrieve the author's email, and 4) send them the report.

Would you as an author appreciate this? When yes, when no?

02.12.2025 18:12 — 👍 25    🔁 7    💬 4    📌 2

haha, well, at least 4% of people say shape-shifting lizards control the govt.

Also, some great work by Seetahul and Greitemeyer suggests that participants are more likely to react when they think studies will counteract their interests (journals.sagepub.com/doi/full/10....)

15.09.2025 18:39 — 👍 3    🔁 1    💬 0    📌 0
Transparent and comprehensive statistical reporting is critical for ensuring the credibility, reproducibility, and interpretability of psychological research. This paper offers a structured set of guidelines for reporting statistical analyses in quantitative psychology, emphasizing clarity at both the planning and results stages. Drawing on established recommendations and emerging best practices, we outline key decisions related to hypothesis formulation, sample size justification, preregistration, outlier and missing data handling, statistical model specification, and the interpretation of inferential outcomes. We address considerations across frequentist and Bayesian frameworks and fixed as well as sequential research designs, including guidance on effect size reporting, equivalence testing, and the appropriate treatment of null results. To facilitate implementation of these recommendations, we provide the Transparent Statistical Reporting in Psychology (TSRP) Checklist that researchers can use to systematically evaluate and improve their statistical reporting practices (https://osf.io/t2zpq/). In addition, we provide a curated list of freely available tools, packages, and functions that researchers can use to implement transparent reporting practices in their own analyses to bridge the gap between theory and practice. To illustrate the practical application of these principles, we provide a side-by-side comparison of insufficient versus best-practice reporting using a hypothetical cognitive psychology study. By adopting transparent reporting standards, researchers can improve the robustness of individual studies and facilitate cumulative scientific progress through more reliable meta-analyses and research syntheses.

Our paper on improving statistical reporting in psychology is now online 🎉

As a part of this paper, we also created the Transparent Statistical Reporting in Psychology checklist, which researchers can use to improve their statistical reporting practices

www.nature.com/articles/s44...
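For a flavor of what the recommendations amount to in practice, here is a minimal sketch of reporting a two-group comparison with a Welch t-test, Cohen's d, and a 95% CI for the mean difference. The data are hypothetical and this is not the paper's TSRP checklist code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.normal(0.4, 1.0, size=60)   # hypothetical treatment group
b = rng.normal(0.0, 1.0, size=60)   # hypothetical control group

res = stats.ttest_ind(a, b, equal_var=False)          # Welch's t-test
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (a.mean() - b.mean()) / pooled_sd                 # Cohen's d

# 95% CI for the mean difference, using Welch-Satterthwaite degrees of freedom
va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
se = np.sqrt(va + vb)
df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
diff = a.mean() - b.mean()
ci = diff + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, df) * se

print(f"Welch t({df:.1f}) = {res.statistic:.2f}, p = {res.pvalue:.3f}, "
      f"d = {d:.2f}, 95% CI for the difference [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Reporting the test statistic, df, exact p, effect size, and CI together is precisely what makes a result usable in a later meta-analysis.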

14.11.2025 20:43 — 👍 236    🔁 93    💬 8    📌 5

If you’ve ever attempted a meta-analysis, you’ll know that authors generally do a poor job of reporting statistics. If you report well, you’ll improve the chances of your work being included in a future meta-analysis.

15.11.2025 07:08 — 👍 39    🔁 14    💬 3    📌 1
Open Science Blog Browser

My Shiny app containing 3530 Open Science blog posts discussing the replication crisis is updated - you can now use the SEARCH box. I fixed it as my new PhD Julia wanted to know who had called open scientists 'Methodological Terrorists' :) shiny.ieis.tue.nl/open_science...

08.11.2025 19:15 — 👍 44    🔁 19    💬 3    📌 1

Check out this really cool multilab project about the difference in short-term memory between musicians and non-musicians! By @masssimo006.bsky.social, @francescatalamini.bsky.social, and researchers from 33 institutions around the world.

09.11.2025 11:22 — 👍 5    🔁 1    💬 0    📌 0
Post image

🚨Just published in PSPB:

When heavy violent video game players read research saying "these games increase aggression", they often shift their beliefs in the *opposite* direction.

Open Access link: journals.sagepub.com/doi/10.1177/...

30.10.2025 14:29 — 👍 6    🔁 1    💬 1    📌 2

#AcademicSky #PsychSciSky

I'm always keen to read research where an intervention's success depends on individual differences.

Here, science-relevant info on videogame (VG) use & aggression:

✳️ WORKS among low VG users
✳️ FAILS among high users (i.e., no change)
✳️ BACKFIRES among very high users!

30.10.2025 15:34 — 👍 4    🔁 1    💬 0    📌 0

Need to make a video about this article.

30.10.2025 14:52 — 👍 2    🔁 1    💬 0    📌 0