
Ricardo Rey-Sáez (β)

@ricardoreysaez.bsky.social

I'm a psychometrician working in the experimental field, currently doing my PhD research on the psychometric properties of experimental tasks at the Universidad Autónoma de Madrid.

182 Followers  |  73 Following  |  15 Posts  |  Joined: 18.11.2024

Latest posts by ricardoreysaez.bsky.social on Bluesky

cover of Bayesian Meta-Analysis: A Practical Introduction by Robert Grant and Gian Luca Di Tanna

I haven't read this book yet, but the authors (Grant & Di Tanna) know their stuff, and they provide code for every script/engine. Meta- and mega-analysis are among those places where Bayes is natural and often easier than non-Bayes. bayesian-ma.net

18.07.2025 06:56 — 👍 116    🔁 26    💬 4    📌 0
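As a rough illustration of why Bayes feels natural here, a minimal random-effects meta-analysis sketch in brms; the data frame dat and its columns yi (effect sizes), sei (standard errors), and study are assumed names for this sketch, not code from the book.

    library(brms)

    # Hypothetical data frame `dat`: one row per study, with effect size `yi`,
    # its standard error `sei`, and a study identifier `study`.
    fit <- brm(
      yi | se(sei) ~ 1 + (1 | study),   # random-effects meta-analysis
      data = dat, family = gaussian()
    )
    summary(fit)  # Intercept = pooled effect; sd(Intercept) = between-study heterogeneity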

Super grateful to the WM team for such an exciting project 🥹🗺️🤍

You can check out the slides of the @assc28.bsky.social presentation at: www.researchgate.net/publication/...

11.07.2025 21:29 — 👍 10    🔁 5    💬 1    📌 0

Yes, KL is very interesting, although its interpretation can be a bit unintuitive... Maybe the Jensen-Shannon divergence would be easier to use in applied contexts, since it's symmetric and bounded and therefore easier to interpret, although I'm not sure whether it has any disadvantages compared to KL... 🤷‍♂️🤷‍♂️🤷‍♂️

17.04.2025 21:25 — 👍 1    🔁 1    💬 0    📌 0
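For concreteness, a small sketch of both quantities in R with made-up toy distributions: KL is asymmetric and unbounded, while Jensen-Shannon is symmetric and bounded by log(2).

    # Two toy discrete distributions over the same three outcomes
    # (all probabilities strictly positive, so the logs are finite)
    p <- c(0.10, 0.40, 0.50)
    q <- c(0.30, 0.30, 0.40)

    kl <- function(p, q) sum(p * log(p / q))   # KL(P || Q): asymmetric, unbounded
    m  <- (p + q) / 2
    js <- 0.5 * kl(p, m) + 0.5 * kl(q, m)      # Jensen-Shannon: symmetric, <= log(2)

    c(KL_pq = kl(p, q), KL_qp = kl(q, p), JS = js)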

Check out the supplementary material too! The idea of using Kullback-Leibler divergence as a measure of differences between experimental conditions still occasionally crosses my mind...

17.04.2025 18:48 — 👍 1    🔁 0    💬 1    📌 0

In fact, much of my current work (the first paper of my PhD, hopefully a preprint before summer!) can't be understood without this article. This is where I first got into Bayesian stats, a year before I took Lee & Wagenmakers' JAGS course in Amsterdam!

17.04.2025 18:48 — 👍 1    🔁 0    💬 1    📌 0

Six years of revisions for one of the best papers I've ever read. Seriously, a must-read: from the underlying theoretical logic of statistical modeling to the most efficient programming of models in Stan (I still remember their four-dimensional arrays for RTs!).

17.04.2025 18:48 — 👍 2    🔁 0    💬 1    📌 0

Congrats! This is easily one of the best papers I've read on Hedge's reliability paradox—especially clear and accessible for experimental psychologists without a strong statistical background. Great to see it published in Psychological Methods!

17.04.2025 18:46 — 👍 1    🔁 0    💬 1    📌 0
Preview
Free Course Book: Our course book "Bayesian inference from the ground up: The theory of common sense" will be made freely available on this website.

"Even a scientific demigod such as Gottfried Leibniz faltered when confronted with a simple problem in probability theory. Or perhaps there are no simple problems in probability theory!" (Wagenmakers & Matzke, 2024, p. 83).

www.bayesianspectacles.org/free-course-...

15.04.2025 08:29 — 👍 0    🔁 0    💬 0    📌 0

🤓 First paper of my thesis!

The story of how I learned to model an implicit learning experiment. Modeling is very powerful: it lets you simulate what would happen if you assume measurement and sampling noise. Would I have enough power to detect my theoretical effect?

authors.elsevier.com/a/1km4d,H2pb...

19.03.2025 18:21 — 👍 13    🔁 6    💬 5    📌 0
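A minimal sketch of that kind of simulation, with made-up numbers rather than the paper's actual design or effect sizes: each subject's true effect varies (sampling noise), its estimate is blurred by trial-level noise (measurement noise), and repeating the simulation gives an estimate of power.

    # All parameter values below are hypothetical illustrations.
    set.seed(1)
    n_subj   <- 60     # subjects
    n_trials <- 40     # trials per condition
    delta    <- 10     # assumed true effect (ms)
    tau      <- 20     # between-subject SD of the effect (sampling noise)
    sigma    <- 150    # trial-level SD (measurement noise)

    power_sim <- function(n_sims = 1000) {
      mean(replicate(n_sims, {
        true_effect <- rnorm(n_subj, delta, tau)    # each subject's true effect
        # observed per-subject effect = difference of two noisy condition means
        obs <- true_effect + rnorm(n_subj, 0, sigma * sqrt(2 / n_trials))
        t.test(obs)$p.value < .05                   # effect detected?
      }))
    }
    power_sim()   # proportion of simulated experiments that detect the effect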

Ideas for tattooing over my whole body:

19.03.2025 16:46 — 👍 3    🔁 2    💬 0    📌 0
Preview
The misalignment of incentives in academic publishing and implications for journal reform | PNAS For most researchers, academic publishing serves two goals that are often misaligned—knowledge dissemination and establishing scientific credential...

I just read the paper 'The misalignment of incentives in academic publishing and implications for journal reform' and loved it. It gives a very illustrative overview of the roles journals play, and I found it inspiring to see all the alternatives that are already working!

www.pnas.org/doi/10.1073/...

03.03.2025 08:15 — 👍 5    🔁 3    💬 0    📌 0

You write: Random intercepts were included for each subject.

I read: 𝗢𝗻𝗹𝘆 random intercepts were included for each subject.

#stats

25.02.2025 10:17 — 👍 30    🔁 2    💬 5    📌 1
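In lme4-style formula notation (a sketch with hypothetical names rt, condition, subject, and data frame dat), the gap between the two readings is:

    library(lme4)

    # "Random intercepts were included for each subject":
    # every subject gets their own baseline, but the condition effect is fixed.
    m_intercepts <- lmer(rt ~ condition + (1 | subject), data = dat)

    # What within-subject designs usually also need: random slopes,
    # letting the condition effect itself vary across subjects.
    m_slopes <- lmer(rt ~ condition + (1 + condition | subject), data = dat)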

Justified violence.

03.02.2025 15:54 — 👍 2    🔁 0    💬 2    📌 0

I'm tempted to do one on Bayesian psychometrics, but I'm holding back...

04.01.2025 12:48 — 👍 0    🔁 0    💬 0    📌 0

Wow!!! Really nice!! Can you share here how it's received once you give it??

03.01.2025 18:26 — 👍 0    🔁 0    💬 1    📌 0

Introducing PowerLMM.js!

A new tool for power analysis of longitudinal linear mixed-effects models (LMMs) – with support for missing data, plus non-inferiority and equivalence tests.

powerlmmjs.rpsychologist.com

Would really appreciate your feedback as I refine this app! Details below 🧵👇

11.12.2024 10:20 — 👍 289    🔁 110    💬 11    📌 10

Our JSS article is out!

And now I get to focus on {marginaleffects} 1.0.0. Stay tuned.

www.jstatsoft.org/article/view...

02.12.2024 04:00 — 👍 380    🔁 113    💬 13    📌 14

Doing a deeper dive into reflective vs. formative modeling, thanks to some wonderful reviewers.

#StatsSky #AcademicSky

pdfs.semanticscholar.org/d156/8a44bb3...

05.12.2024 21:19 — 👍 6    🔁 2    💬 0    📌 0

Best of three!!

27.11.2024 22:45 — 👍 1    🔁 0    💬 0    📌 0

The GLLAMM/GLVM frameworks go one step further than linear mixed models because they allow us to decompose the so-called random-effects covariance matrix into common and unique latent factors. These frameworks (especially GLLAMM) have given me the most empowering statistical insights I've encountered.

27.11.2024 21:52 — 👍 0    🔁 0    💬 0    📌 0
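A toy sketch of what that decomposition means (hypothetical loadings and variances, not any fitted model): subject-level task effects generated from one common latent factor plus task-specific noise reproduce a random-effects covariance matrix of the form Lambda Lambda' + Psi.

    # Hypothetical values chosen so each effect has variance 1 and
    # all between-task correlations equal 0.50.
    set.seed(1)
    n_subj  <- 5000
    n_tasks <- 6
    lambda  <- rep(sqrt(0.5), n_tasks)   # common-factor loadings
    psi     <- rep(0.5, n_tasks)         # unique (task-specific) variances

    eta   <- rnorm(n_subj)                                            # common factor
    uniq  <- matrix(rnorm(n_subj * n_tasks, sd = sqrt(psi[1])), n_subj)
    theta <- eta %o% lambda + uniq                                    # subject-level effects

    round(cov(theta), 2)                        # empirical random-effects covariance
    round(lambda %*% t(lambda) + diag(psi), 2)  # implied common + unique structure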
Small simulation with 300 subjects and 50 trials per experimental condition. True correlation values are 0.50.

Small simulation with 50 subjects and 300 trials per experimental condition. True correlation values are 0.50.

As always, the estimation improves as the number of subjects and trials increases.

1. First picture: 300 subjects and 50 trials per condition.
2. Second picture: 50 subjects and 300 trials per condition.

27.11.2024 21:52 — 👍 0    🔁 0    💬 1    📌 0

100 subjects, 50 trials per condition (e.g., congruent/incongruent trials in a Stroop task), and six experimental tasks. The true correlation between all tasks is 0.50. Results are shown from a Bayesian linear mixed model (first matrix) and a Bayesian GLLAMM/GLVM model (second matrix).

27.11.2024 21:52 — 👍 1    🔁 0    💬 1    📌 0
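A minimal sketch of this kind of data-generating setup (hypothetical code and parameter values, not the simulation actually run here), together with the naive two-step estimate that trial-level models are meant to improve on.

    library(MASS)
    set.seed(1)
    n_subj <- 100; n_tasks <- 6; n_trials <- 50

    R <- matrix(0.5, n_tasks, n_tasks); diag(R) <- 1                   # true correlations = 0.50
    theta <- mvrnorm(n_subj, mu = rep(50, n_tasks), Sigma = 10^2 * R)  # subject effects (ms)

    dat <- expand.grid(subj = 1:n_subj, task = 1:n_tasks,
                       cond = 0:1, trial = 1:n_trials)
    dat$rt <- 700 + dat$cond * theta[cbind(dat$subj, dat$task)] +      # latent condition effect
              rnorm(nrow(dat), sd = 150)                               # trial-level noise

    # Naive two-step approach: per-subject mean difference scores, then correlate.
    # Trial-level noise attenuates these correlations well below the true 0.50,
    # which is the problem trial-level LMM / GLLAMM-GLVM models address.
    cond_means <- with(dat, tapply(rt, list(subj, task, cond), mean))
    obs_effect <- cond_means[, , "1"] - cond_means[, , "0"]
    round(cor(obs_effect), 2)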

Frameworks like GLLAMM (Skrondal & Rabe-Hesketh, 2004) or GLVM (Muthén, 2002) will be the future of psychometrics in the experimental world. They allow us to estimate correlations between experimental measures of latent cognitive processes with less uncertainty than traditional approaches!

27.11.2024 21:52 — 👍 4    🔁 3    💬 1    📌 0

Hi, bestie 👽

18.11.2024 08:58 — 👍 1    🔁 0    💬 0    📌 0
