
Tam Pham

@tam-pham.bsky.social

PhD Candidate | Clinical Psychology | Macquarie University, Sydney

27 Followers  |  41 Following  |  5 Posts  |  Joined: 03.07.2025

Posts by Tam Pham (@tam-pham.bsky.social)

Transparent and comprehensive statistical reporting is critical for ensuring the credibility, reproducibility, and interpretability of psychological research. This paper offers a structured set of guidelines for reporting statistical analyses in quantitative psychology, emphasizing clarity at both the planning and results stages. Drawing on established recommendations and emerging best practices, we outline key decisions related to hypothesis formulation, sample size justification, preregistration, outlier and missing data handling, statistical model specification, and the interpretation of inferential outcomes. We address considerations across frequentist and Bayesian frameworks and fixed as well as sequential research designs, including guidance on effect size reporting, equivalence testing, and the appropriate treatment of null results. To facilitate implementation of these recommendations, we provide the Transparent Statistical Reporting in Psychology (TSRP) Checklist that researchers can use to systematically evaluate and improve their statistical reporting practices (https://osf.io/t2zpq/). In addition, we provide a curated list of freely available tools, packages, and functions that researchers can use to implement transparent reporting practices in their own analyses to bridge the gap between theory and practice. To illustrate the practical application of these principles, we provide a side-by-side comparison of insufficient versus best-practice reporting using a hypothetical cognitive psychology study. By adopting transparent reporting standards, researchers can improve the robustness of individual studies and facilitate cumulative scientific progress through more reliable meta-analyses and research syntheses.
Our paper on improving statistical reporting in psychology is now online šŸŽ‰

As part of this paper, we also created the Transparent Statistical Reporting in Psychology checklist, which researchers can use to improve their statistical reporting practices

www.nature.com/articles/s44...

14.11.2025 20:43 — šŸ‘ 235    šŸ” 94    šŸ’¬ 8    šŸ“Œ 5

The biggest thanks to my amazing supervisors @miriforbes.bsky.social, @dominiquemakowski.bsky.social and @carlyjohnco.bsky.social, and Zen Juen Lau šŸ¤—šŸ¤—šŸ¤—šŸ¤—

12.11.2025 20:36 — šŸ‘ 1    šŸ” 0    šŸ’¬ 1    šŸ“Œ 0
OSF

Link to preprint: osf.io/preprints/ps...

12.11.2025 20:36 — šŸ‘ 1    šŸ” 0    šŸ’¬ 1    šŸ“Œ 0

We also (attempted to) discuss why some indices are more related than others and provided practical recommendations for choosing HRV indices more meaningfully.

I’d love to hear how you navigate the HRV index landscape and whether you think this framework could be helpful!

3/

12.11.2025 20:36 — šŸ‘ 2    šŸ” 0    šŸ’¬ 1    šŸ“Œ 0

We clustered 89 HRV indices across:
• Two different clustering algorithms
• Two timepoints
• Two samples

and found a set of robust HRV clusters that replicate across time, methods, and samples.
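The general idea of grouping redundant indices can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's actual pipeline or data): it simulates six made-up indices driven by two latent factors, then applies correlation-based hierarchical clustering to recover the two groups.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(42)

# Simulate 200 recordings x 6 hypothetical indices:
# columns 0-2 track one latent factor, columns 3-5 another.
n = 200
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([f1 + 0.3 * rng.normal(size=n) for _ in range(3)]
                    + [f2 + 0.3 * rng.normal(size=n) for _ in range(3)])

# Correlation-based distance: indices that track each other end up close.
dist = 1 - np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(dist, 0)

# Average-linkage hierarchical clustering, cut into two clusters.
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # columns 0-2 share one label, columns 3-5 the other
```

In practice one would check, as the thread describes, whether the same clusters emerge under a second algorithm, a second timepoint, and a second sample before treating them as robust.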

2/

12.11.2025 20:36 — šŸ‘ 2    šŸ” 0    šŸ’¬ 1    šŸ“Œ 0

Heart rate variability (HRV) is one of the most widely used physiological measures in psychophysiological research. But with over 100 indices to choose from, how do we know which ones to use?

In our latest paper, we take a data-driven approach to help answer this.

doi.org/10.1111/psyp...
1/

12.11.2025 20:36 — šŸ‘ 40    šŸ” 14    šŸ’¬ 2    šŸ“Œ 1