We present 2 versions of TMS-RAT:
Version 1.0 provides *recommendations* for TMS reporting. There are 80 items (72 fully tested), but some did not reach high inter-rater reliability; v1.0 is intended for further development.
Version 1.1 is fully tested, and all 50 items have high inter-rater reliability ⚡🐀.
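The preprint thread doesn't say which reliability statistic was used, but for two raters scoring a binary "reported / not reported" item, Cohen's kappa is one common choice. A minimal sketch with hypothetical ratings (all data below are made up for illustration):

```python
# Cohen's kappa for two raters on one binary item: agreement corrected
# for the agreement expected by chance from each rater's marginals.
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal proportions of 1s and 0s
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# hypothetical ratings of 8 papers on one reporting item
a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 1, 1, 1, 1, 0]
print(round(cohens_kappa(a, b), 2))  # 0.71 for these made-up data
```

Values near 1 indicate the item is rated consistently; items well below common thresholds (e.g. 0.6–0.8) would be candidates for rewording.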
Could we improve the reproducibility of TMS studies by improving how we report our methods?
Let’s try that.
17 of us worked on developing and validating the TMS-RAT, a tool to guide reporting and evaluate the completeness of reporting in TMS studies.
Check out the preprint👇🏻 All feedback welcome!
There’s probably a proper post coming from @thehandlab.bsky.social soon, but you heard it from me first. TMS-RAT ⚡️🐀 just landed.
@solyas.bsky.social @tmsmultilab.bsky.social
TMSMultiLab's next monthly meeting:
27th February 14:00 GMT 🤝⚡️🧠
"Dose-response (input-output) curves: Ground truth for testing TMS methods?"
by me (@TheHandLab)
Details & Zoom link via github 👇 & slack (request an invite)
github.com/TMSMultiLab/...
Watch out for this absolute beast of a meta-analysis 🐲
Is there a link to this manuscript, please? :)
The most compelling null finding you're likely to see
Work with 5/6 of my awesome supervisory team ☀️ @jendavies.bsky.social, @drgbuckingham.bsky.social, Chris Chambers, Ezio Preatoni, and Janet Bultitude.
Thanks to @gw4biomeddtp.bsky.social and @psychologybath.bsky.social
We hypothesise that the changes in motor system engagement during action observation associated with imagery vividness ('aphantasia') may reflect the extent to which the motor system participates in generating predictions about the observed action, rather than a necessary mechanism for action understanding.
We found that neither motor nor general visual imagery vividness predicted action understanding; Bayesian analyses provided moderate-to-strong evidence for the null hypothesis, and the results were stable across multiple robustness checks and sensitivity analyses.
We examined the relationship between this measure and self-reported vividness of visual and motor imagery in an online sample of 392 participants.
We developed a measure of action understanding, in which participants judged the weight of objects from observing a hand lifting them. Action understanding was indexed by the slope of the relationship between the objects’ actual weights and participants’ perception of the weights.
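The slope measure can be sketched with a simple least-squares fit of perceived weight on actual weight. All numbers below are hypothetical and for illustration only, not data from the study:

```python
# Hypothetical object weights (kg) and one participant's weight judgments.
actual = [0.2, 0.4, 0.6, 0.8, 1.0]
perceived = [0.25, 0.45, 0.55, 0.85, 0.95]

# Ordinary least-squares slope, computed by hand to stay dependency-free:
# slope = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
n = len(actual)
mx = sum(actual) / n
my = sum(perceived) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(actual, perceived))
         / sum((x - mx) ** 2 for x in actual))
print(round(slope, 3))  # 0.9 for these made-up data
```

On this index, a slope near 1 means judgments track the true weights closely, while a slope near 0 means the observed lift carries no usable weight information for that participant.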
The question was motivated by past research linking action observation and imagery, e.g., evidence for overlapping neural representations, and by recent work (Dupont et al., 2024) finding an association between imagery vividness and motor system engagement during passive observation of hand actions.
Check out our new preprint!! 🤏👀
Motor Imagery Vividness Does Not Predict the Ability to Judge Object Weight from Observed Hand Actions
doi.org/10.31234/osf...
We explored the relationship between the ability to understand hand actions and self-reported motor/general visual imagery vividness.
Now published in the Journal of Neurophysiology:
journals.physiology.org/doi/full/10....
Get in touch if you think this tool could help in your science! We will be developing improvements and extensions over the next year.
@gw4biomeddtp.bsky.social @psychologybath.bsky.social @drgbuckingham.bsky.social @jendavies.bsky.social
Are you someone, or do you know anyone, who has pain in their arm/hand/shoulder (for any reason) and would be interested in contributing to ONLINE research on movement perception in upper limb pain?
Participate/find out more here: run.pavlovia.org/Szekely/unil...
Today at 14:00 GMT
👀🧠🧲⚡️
Who wants to be a postdoc in Cardiff?
Great place, great looking project, fantastic PI✨
Thank you for the update, it's pretty! Would it be possible to do something about the loading times?
To find out about how #TMS #BrainStim users collect, analyse & share their data, we've made a very brief TMS survey:
forms.gle/83Abpj5F8qzd...
It should take you less than 5 minutes & does not ask for any personal information.
We'll use the answers to plan our future work.
Please share!
🙏🙏🙏
⚡
Friday is a big day for the TMS-RAT🤩
This fall I will have a blind student in my coding class for the first time. Do any other instructors or visually impaired coders have advice beyond making sure my book has useful alt-text for the images? #rstats #accessibility
Class book: psyteachr.github.io/reprores-v5/
Thanks :))
Had a wonderful time at #BACN last week. I’m very grateful for all the in-depth discussions, and so excited that our project with @mmarneweck.bsky.social was awarded the poster prize 🏆
We're now up to 29 labs across 16 countries for our replication project! There is still room for more labs to join us. We are particularly seeking labs capable of testing native English speakers, though all labs are welcome to participate. For more information: rolfzwaan.substack.com/p/memory-mis...
A huge thank you to @mmarneweck.bsky.social for hosting me and to @gw4biomeddtp.bsky.social for making this opportunity possible.
This year, I spent March–June at the University of Oregon, working with @mmarneweck.bsky.social on an fMRI analysis of the representational pattern similarity of lip and hand movements after upper-limb amputation.
Excited to share this at #BACN25 ✨ Find me at the datablitz/poster sessions on Friday!
I previously got responses (and eventually found a wonderful mathematician collaborator) to a similar query from the HUB of Quantitative Modelling in Exeter
www.exeter.ac.uk/research/qua...
Over the last few months, members of TMSMultiLab have read, analysed & assessed #TMS research papers to produce a new tool:
17 people
333 papers
80 criteria
2 raters per paper
The dataset contains 53,280 ratings!
It's an immense piece of work, the first empirical contribution of TMSMultiLab...
☺️