Alan Wrench's Avatar

Alan Wrench

@awrench.bsky.social

Developing instrumentation for imaging the tongue. Neuroanatomy and biomechanical models. Pulse-Step model of motor control. Natural sceptic.

176 Followers  |  39 Following  |  49 Posts  |  Joined: 02.10.2023

Latest posts by awrench.bsky.social on Bluesky

Join the lab | Brain, Language, & Acoustic Behavior Laboratory

The Speech Motor Neuroscience Group at the University of Wisconsin–Madison is inviting applications for an NIH-funded postdoctoral research position in speech motor control and speech motor neuroscience. Details can be found under the "postdoctoral researchers" tab: blab.wisc.edu/join-the-lab/

15.10.2025 00:00  |  👍 1    🔁 0    💬 0    📌 0
Post image Post image

Attending many great talks at #ESSD2025 in Athens, and a great opportunity to present our work on using the compartmental tongue theory to reshape how we quantify tongue movement in swallowing.

10.10.2025 19:20  |  👍 3    🔁 2    💬 0    📌 0
Preview
Rubber arm illusion in octopus
The feeling of a body as belonging to oneself is called the sense of body ownership and is the centerpiece of conscious experience. Kawashima and Ikeda investigated the sense of body ownership in an octo...

Happy World Octopus Day. Here's a recent paper demonstrating that octopuses have a sense of body ownership similar to that of mammals and rodents.
www.cell.com/current-biol...

08.10.2025 09:10  |  👍 4    🔁 3    💬 0    📌 0
Preview
Temporal integration in human auditory cortex is predominantly yoked to absolute time - Nature Neuroscience
Temporal integration throughout the human auditory cortex is predominantly locked to absolute time and does not vary with the duration of speech structures such as phonemes or words.

Temporal integration in human auditory cortex is predominantly yoked to absolute time www.nature.com/articles/s41... There's a difference between integrating across absolute time and integrating across structure, such as phonemes. Do cortical computations reflect time or structure? The results showed time-yoked computations ⏱️

19.09.2025 11:45  |  👍 13    🔁 3    💬 1    📌 0

Note all the articulation happening in the posterior tongue and hyoid, which is not captured by EMA.

17.09.2025 15:54  |  👍 0    🔁 0    💬 0    📌 0
Video thumbnail

Co-registered EMA and ultrasound. From top left: ultrasound with tongue contour, ultrasound keypoints, 3D head with EMA sensors, spectrogram, glossogram showing vocal tract constrictions in red and cavities in blue, and waveform. Movie created by the AAA app. Use settings to hear audio.

17.09.2025 15:54  |  👍 5    🔁 1    💬 1    📌 1

Yup.

13.09.2025 12:44  |  👍 1    🔁 0    💬 0    📌 0

🚨 Open PhD Position – Grenoble, France 🚨

Join us at GIPSA-lab to explore how Speech Language Models can learn like children: through physical and social interaction. Think AI, robots, development 🧠🤖🎙️
Fully funded (3 yrs) • @cnrs.fr / @ugrenoblealpes.bsky.social
Details 👉 tinyurl.com/bde988b3

01.09.2025 11:48  |  👍 8    🔁 7    💬 0    📌 0

I understand that you need to minimize the kit you take to remote parts. A mirror would be convenient. I think we need to do two things: change the 50 mm camera for a wide-angle one so that a peripheral mirror is in view, then design a 45° mirror mount to fit on the side camera mount. I can try this out.

01.09.2025 15:05  |  👍 3    🔁 0    💬 0    📌 0

Hi Matt, AAA can only record one video channel, partly because splines have to be associated with input data streams and partly because of the large amount of disk space that would accumulate. In the past we used a CCTV camera mixer. However, I purchased one recently and couldn't get it to work.

01.09.2025 15:05  |  👍 1    🔁 0    💬 1    📌 0
Post image

x.com/i/status/195...

Video of Prof Takayuki Arai with his vocal tract models at #Interspeech2025
Love this analogue demonstration.

22.08.2025 09:56  |  👍 2    🔁 0    💬 0    📌 0
Video thumbnail

We're thrilled to introduce ATHENA: Automatically Tracking Hands Expertly with No Annotations – our open-source, Python-based toolbox for 3D markerless hand tracking!

Paper: www.biorxiv.org/content/10....

16.08.2025 00:49  |  👍 52    🔁 7    💬 2    📌 3
YouTube video by Zongwei Li: DeepCode

Really? github.com/HKUDS/DeepCode
www.youtube.com/watch?v=PRgm...
Paper2Code: Convert research papers into working implementations
Text2Web: Generate frontend applications from descriptions
Text2Backend: Create scalable backend systems automatically
Auto Code Validation: Guaranteed working code

19.08.2025 10:23  |  👍 0    🔁 0    💬 0    📌 0
Preview
Swallowing in a new light: how ultrasound could transform care
In this hands-on, interactive workshop, you'll explore how ultrasound is opening a new window into the science of swallowing.

Royal Society of Edinburgh workshop this September, run by @drjoanma.bsky.social from Queen Margaret University.

rse.org.uk/event/swallo...

15.08.2025 11:30  |  👍 2    🔁 3    💬 0    📌 0
Post image

The work coming out of the Person lab is a must-read for me. www.biorxiv.org/content/10.1...

This new paper shows that cerebellar output neurons encode both predictive and corrective movements, mechanistically linking feedforward and feedback control.

14.08.2025 13:14  |  👍 0    🔁 0    💬 0    📌 0
Preview
When practice leads to co-articulation: the evolution of geometrically defined movement primitives - Experimental Brain Research
The skilled generation of motor sequences involves the appropriate choice, ordering and timing of a sequence of simple, stereotyped movement elements. Nevertheless, a given movement element within a w...

Sosnik et al. (2004) link.springer.com/article/10.1...
Sosnik et al. (2006) link.springer.com/article/10.1...
Sosnik et al. (2007) journals.physiology.org/doi/full/10....

22.07.2025 14:53  |  👍 0    🔁 0    💬 0    📌 0
Preview
The effects of haloperidol on motor vigour and movement fusion during sequential reaching
Reward is a powerful tool to enhance human motor behaviour with previous research showing that during a sequential reaching movement, a monetary incentive leads to increased speed of each movement (mo...

With my interest in speech, I very much welcome this new work on limb movement sequences. Readers may also be interested in the seminal series of papers on this topic published 20 years ago by Sosnik et al., most recently cited by @galeaj.bsky.social (refs 12-14): journals.plos.org/plosone/arti...

22.07.2025 14:53  |  👍 0    🔁 0    💬 1    📌 0
Post image

Although I no longer subscribe to the Equilibrium Point Hypothesis as currently formulated, this proposal is intriguing.

Almanzor et al. (2025). Self-organising bio-inspired reflex circuits for robust motor coordination in artificial musculoskeletal systems.
iopscience.iop.org/article/10.1...

19.07.2025 13:47  |  👍 1    🔁 0    💬 0    📌 0

Not inconsistent with the proposal that an initial feedforward motor cortical pulse determines the direction and velocity of movement, and that a step change near peak velocity modulates deceleration to bring the movement on target. This paper finds that the cerebellum generates the deceleration step change.

13.07.2025 15:11  |  👍 0    🔁 0    💬 0    📌 0
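For readers unfamiliar with the pulse-step idea in the post above, here is a minimal numerical sketch in Python. It is not the author's model or any published implementation; the pulse amplitude, pulse duration and braking gain below are invented purely to show the shape of the idea: a feedforward pulse sets direction and velocity, then a step near peak velocity engages a decelerating drive that lands the movement on target.

    # Toy pulse-step movement sketch: a feedforward "pulse" accelerates the
    # effector toward the target; near peak velocity a "step" command switches
    # on a braking drive that brings the movement onto target.
    # All constants are illustrative, not fitted to any data.

    import numpy as np

    dt = 0.001                 # integration step (s)
    target = 0.10              # desired displacement (m)
    pulse_amp = 4.0            # feedforward pulse: constant acceleration (m/s^2)
    pulse_dur = 0.15           # pulse duration (s)
    brake_gain = 25.0          # step-phase braking stiffness (1/s^2)

    x, v = 0.0, 0.0
    phase = "pulse"
    trajectory = []

    for t in np.arange(0.0, 1.0, dt):
        if phase == "pulse":
            a = pulse_amp
            if t >= pulse_dur:          # near peak velocity: switch to the step phase
                phase = "step"
        else:
            # step phase: critically damped drive toward the target (decelerates the limb)
            a = brake_gain * (target - x) - 2.0 * np.sqrt(brake_gain) * v
        v += a * dt
        x += v * dt
        trajectory.append((t, x, v))

    print(f"final position: {x:.4f} m (target {target} m), peak velocity: "
          f"{max(v for _, _, v in trajectory):.3f} m/s")

Running it prints the final position and peak velocity; plotting v against t gives the accelerate-then-decelerate velocity profile the pulse-step description implies.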
Preview
Cerebellar associative learning underlies skilled reach adaptation
Cerebellar output has been shown to enhance movement precision by scaling the decelerative phase of reaching movements in mice. We hypothesized that during reach, initial kinematics cue late-phase adj...

"we discovered a naturally occurring PC population suppression during mouse reaching movements that scaled with the velocity of outreach and occurred shortly before the transition to the decelerative phase of movement." www.biorxiv.org/content/10.1...

13.07.2025 15:11  |  👍 0    🔁 0    💬 1    📌 0

I believe the cerebellum, and possibly the motor nuclei, learn and map the expected sensory input for a given set of feedforward muscle activations. If there is a mismatch, the cerebellum generates corrective output. If the mismatch persists, the cerebellum slowly adapts to the new sensory expectation.

12.07.2025 14:19  |  👍 0    🔁 0    💬 0    📌 0
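A toy sketch of the logic in the post above, in Python with made-up numbers: a stored sensory expectation is compared with the actual feedback for a fixed feedforward command, the mismatch drives a fast corrective output, and a persistent mismatch is slowly absorbed into the expectation. It illustrates the argument only; it is not a claim about how the cerebellum actually computes.

    # Toy forward-model sketch: expected sensory feedback for a fixed feedforward
    # command is compared with actual feedback; the mismatch drives a fast
    # corrective output, and a persistent mismatch slowly updates the expectation.
    # Numbers are illustrative only.

    command = 1.0                 # fixed feedforward muscle activation (arbitrary units)
    expected_feedback = 0.50      # learned sensory expectation for this command
    true_gain = 0.50              # actual plant: feedback = true_gain * command

    correction_gain = 0.8         # fast corrective response to mismatch
    adaptation_rate = 0.05        # slow update of the stored expectation

    for trial in range(40):
        if trial == 10:
            true_gain = 0.70      # perturbation: the plant changes, mismatch appears

        actual_feedback = true_gain * command
        mismatch = actual_feedback - expected_feedback
        corrective_output = -correction_gain * mismatch   # fast, within-movement correction
        expected_feedback += adaptation_rate * mismatch   # slow trial-by-trial adaptation

        if trial in (0, 10, 20, 39):
            print(f"trial {trial:2d}: mismatch {mismatch:+.3f}, "
                  f"correction {corrective_output:+.3f}, "
                  f"expectation {expected_feedback:.3f}")

After the perturbation at trial 10 the corrective output jumps, then shrinks as the stored expectation drifts toward the new feedback, which corresponds to the slow adaptation described in the post.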

Nice paper but overlooks the groundbreaking and elegant work describing movement sequence learning. Sosnik, R., Hauptmann, B., Karni, A., & Flash, T. (2004). When practice leads to co-articulation: the evolution of geometrically defined movement primitives. Experimental Brain Research, 156, 422-438.

09.07.2025 10:20  |  👍 2    🔁 0    💬 1    📌 0
Preview
Quantitative results of SonoSpeech Cleft Pilot: a mixed-methods pilot randomised control trial of ultrasound visual biofeedback versus standard intervention for children with cleft palate ± cleft lip ...
Background: Despite its growing popularity, there is limited evidence of the effectiveness of ultrasound visual biofeedback speech therapy for children with cleft palate ± cleft lip (CP ± L). This stud...

A new article by @maria-cairney.bsky.social, @drjoannecleland.bsky.social and colleagues reporting promising results in a trial of #ultrasound visual biofeedback #speechtherapy for children with #cleft palate ± lip.

#SLT #openaccess

tinyurl.com/3jnamh7e

29.05.2025 09:11  |  👍 4    🔁 3    💬 0    📌 0
Video thumbnail

Video corresponding to the glossogram above. Red line is the midsagittal tongue contour automatically estimated using #DeepLabCut. Blue line indicates base of mandible to hyoid and purple line indicates base of mandible to short tendon. In collaboration with @drjoanma.bsky.social

27.05.2025 11:51  |  👍 2    🔁 0    💬 0    📌 1
Post image

Glossogram with dark red indicating constriction and a blue diagonal (tongue compartment contracted) demonstrating peristaltic transfer of a water bolus from the oral to the pharyngeal cavity. This is easiest to explain as sequential extension of the neuromuscular compartments of the tongue.

27.05.2025 11:51  |  👍 7    🔁 4    💬 1    📌 0
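For anyone wanting to build a glossogram-style display from their own data, the sketch below (Python, synthetic numbers) shows the general recipe only: per video frame, sample a constriction measure along the vocal tract from oral to pharyngeal, then plot position against time as a heat map. It is not the AAA implementation, and it does not reproduce the blue compartment-contraction overlay described in the post above; the tongue-to-palate distances here are fabricated to make a travelling constriction visible.

    # Sketch of a glossogram-style plot: rows = positions along the vocal tract
    # (oral at top, pharyngeal at bottom), columns = time, colour = constriction
    # (small tongue-to-palate distance). A travelling constriction produces the
    # diagonal stripe that a peristaltic bolus transfer would show.

    import numpy as np
    import matplotlib.pyplot as plt

    n_positions, n_frames, fps = 40, 120, 60
    positions = np.linspace(0.0, 1.0, n_positions)      # 0 = oral, 1 = pharyngeal
    times = np.arange(n_frames) / fps

    # Synthetic tongue-to-palate distances (mm): a constriction sweeps backwards.
    distance = np.full((n_positions, n_frames), 10.0)
    for j, t in enumerate(times):
        wavefront = t / times[-1]                        # constriction location, 0 -> 1
        distance[:, j] -= 8.0 * np.exp(-((positions - wavefront) ** 2) / 0.01)

    constriction = distance.max() - distance             # large value = tight constriction

    plt.imshow(constriction, aspect="auto", cmap="Reds",
               extent=[times[0], times[-1], 1.0, 0.0])   # oral at top of the plot
    plt.xlabel("Time (s)")
    plt.ylabel("Normalised position (0 = oral, 1 = pharyngeal)")
    plt.colorbar(label="Constriction (mm of closure)")
    plt.title("Synthetic glossogram-style plot")
    plt.show()

In real use the synthetic distance array would be replaced by per-frame measurements from tracked contours (for example, keypoints estimated with a DeepLabCut-style tracker as in the video post above).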

Great to see this out. Will read it carefully.

01.05.2025 10:28  |  👍 0    🔁 0    💬 1    📌 0
Preview
Debunking the Myth of Excitatory and Inhibitory Repetitive Transcranial Magnetic Stimulation in Cognitive Neuroscience Research
Abstract. Repetitive TMS (rTMS) is a powerful neuroscientific tool with the potential to noninvasively identify brain–behavior relationships in humans. Early work suggested that certain rTMS protocols...

direct.mit.edu/jocn/article...

16.04.2025 11:49  |  👍 0    🔁 0    💬 0    📌 0
Preview
Theta-burst stimulation over primary somatosensory cortex modulates the tactile acuity of the tongue | Journal of Neurophysiology | American Physiological Society
Emerging studies in humans have established the modulatory effects of repetitive transcranial magnetic stimulation (rTMS) over primary somatosensory cortex (S1) on somatosensory cortex activity and pe...

Not sure what these 50 Hz pulses are doing, but I would be interested to know whether 37 Hz pulses had a different effect. journals.physiology.org/doi/abs/10.1...

16.04.2025 11:42  |  👍 0    🔁 0    💬 1    📌 0

If your ultrasound displays the scanning frame rate, you can adjust the parameters until you get a good compromise between image quality and frame rate. However, you may still get occasional duplicated frames, as the video buffering and scanning rates are not synchronized.

16.04.2025 10:40  |  👍 1    🔁 0    💬 3    📌 0

I suspect that your recordings have a scan frame rate of around 20 Hz. Inside the ultrasound machine, the scanned frame data are converted into an image and passed to the video buffer. The 60 Hz video output shows the most recent image in the buffer, but since the images are only updated at 20 Hz you get duplicates.

16.04.2025 10:40  |  👍 0    🔁 0    💬 1    📌 0
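If you want to check this on your own recordings, one rough approach is to count how often consecutive video frames are (near-)identical and back out the effective scan rate from the capture frame rate. Below is a minimal sketch assuming OpenCV and a 60 fps capture file; the filename and difference threshold are placeholders you would need to set for your own data.

    # Estimate the effective ultrasound scan rate from a recorded video by
    # counting near-duplicate consecutive frames: a 20 Hz scanner captured at
    # 60 fps should show roughly two duplicates after every fresh frame.

    import cv2
    import numpy as np

    VIDEO_PATH = "ultrasound_capture.mp4"   # placeholder filename
    DIFF_THRESHOLD = 0.5                    # mean absolute pixel difference treated as "identical"

    cap = cv2.VideoCapture(VIDEO_PATH)
    video_fps = cap.get(cv2.CAP_PROP_FPS)

    prev_gray = None
    total, unique = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None or np.mean(cv2.absdiff(gray, prev_gray)) > DIFF_THRESHOLD:
            unique += 1                     # image content actually changed
        total += 1
        prev_gray = gray
    cap.release()

    if total:
        effective_rate = video_fps * unique / total
        print(f"video rate: {video_fps:.1f} fps, unique frames: {unique}/{total}, "
              f"estimated scan rate: {effective_rate:.1f} Hz")

For a 20 Hz scanner captured at 60 fps, roughly one frame in three carries new image content, so the estimate should come out near 20 Hz.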
