Deadline 4 Jan: Postdoc, variability and vowel harmony, metaphony (phonetic and psycholinguistic approaches), Potsdam (w/ A. Gafos) docs.google.com/document/d/1...
Interested in doing a PhD with me and my lab (lacns.github.io)? Or with any of the incredible fellows in the IMPRS School of Cognition www.maxplanckschools.org/cognition-en - apply before Dec 1st at cognition.maxplanckschools.org/en/application
I'm afraid not! I've seen 2 cameras but not the mirror. It sounds like it's worth a go though!
And you can hear all about Nosey, nasalance and 3D printing (and mysterious bonus content??) from @samkirkham.bsky.social on Tuesday afternoon (A02-O3)!
📄 www.isca-archive.org/interspeech_...
Pat Strycharczuk will be presenting our paper at #Interspeech2025 on Wednesday afternoon (A02-O6), where we applied forensic speaker comparison methods to ultrasound tongue imaging data to think about the individuality of articulatory strategy
📄 www.isca-archive.org/interspeech_...
Very happy to have been awarded an APEX grant for a project on “Interpretable acoustic-articulatory relations in speech production” w/ co-investigators Anton Ragni & Aneta Stefanovska. The plan is to do some interesting speech research at the intersection of linguistics, physics & computer science!
Presenting some stuff from my dissertation at this cool workshop in a couple weeks on dynamical models of speech: samkirkham.github.io/dymos/
Doing a lot of reading/prep work as I am the "symbolicist" going to talk to a bunch of "dynamicists". Should be fun! (seriously)
Here are my slides: www.scott-nelson.net/Presentation...
I started my talk by saying, “usually when you’re a phonologist who is interested in phonetics you get more concrete, but I decided to get more abstract instead.” If that sounds interesting, check these out!
MLX, Apple’s machine learning framework, just merged a CUDA Backend.
Matmul, tensor copy ops, and other core CUDA primitives are now part of Apple’s official build.
There’s a lot of hype + confusion.
Here’s what it is, and…isn’t.
More details on this soon! Also this weekend is the last chance to submit your TTS system for the next round of evaluation (Q2 2025) by either messaging me at christoph.minixhofer@ed.ac.uk or requesting a model here: huggingface.co/spaces/ttsds...
congratulations!!
deadline 23 June!! Please re-bleat(??) widely!
Introducing the tidynorm package! It's got convenience functions for applying your favorite vowel normalization methods to point measures, formant tracks, and DCT coefficients in a tidyverse workflow, as well as a flexible framework for defining your own normalization methods!
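tidynorm itself is an R package, so this isn't its API — just a language-agnostic sketch of one classic method it covers, Lobanov z-score normalization, where each speaker's formant values are rescaled by that speaker's own mean and standard deviation:

```python
# Sketch of Lobanov-style vowel normalization (not tidynorm's actual code):
# z-score each speaker's formant measures against that speaker's own
# mean and standard deviation, so speakers become directly comparable.
from statistics import mean, stdev

def lobanov(formants):
    """Normalize a list of formant values (Hz) for one speaker."""
    m, s = mean(formants), stdev(formants)
    return [(f - m) / s for f in formants]

# Hypothetical F1 point measures for one speaker
f1_speaker_a = [300.0, 450.0, 700.0, 850.0]
normalized = lobanov(f1_speaker_a)
```

After normalization, every speaker's values have mean 0 and standard deviation 1, which is what makes cross-speaker vowel-space comparison possible.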
🐠🦠Why can’t bacteria swim like fish?
At microscopic scales, physics changes — viscosity rules! Researchers at our institute study how microbes like E. coli overcome this challenge with clever strategies like run-and-tumble.
Watch our new video:
youtu.be/drwCRRD7CGY?...
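The run-and-tumble strategy mentioned above can be sketched as a simple random walk: straight "runs" at constant speed, interrupted by random "tumbles" that reset the heading. The parameters here are illustrative, not fitted to real E. coli data:

```python
# Minimal 2D run-and-tumble sketch: straight runs at constant speed,
# with a fixed per-step probability of tumbling to a new random heading.
import math
import random

def run_and_tumble(n_steps=1000, speed=1.0, tumble_prob=0.1, seed=42):
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(0, 2 * math.pi)
    path = [(x, y)]
    for _ in range(n_steps):
        if rng.random() < tumble_prob:      # tumble: pick a new direction
            heading = rng.uniform(0, 2 * math.pi)
        x += speed * math.cos(heading)      # run: move straight ahead
        y += speed * math.sin(heading)
        path.append((x, y))
    return path

path = run_and_tumble()
```

Biasing when tumbles happen (tumbling less often when moving up a nutrient gradient) is what turns this blind walk into chemotaxis.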
There's a lot of big news today/this week, but Apple just dropped a nuke of a paper about LLMs & LRMs, especially around "high-complexity tasks where both models experience complete collapse." This is the biggest sign yet that if AI ever lives up to the hype, it won't be via those approaches:
🚨 PhD call in Neurosciences @UNIMORE_univ is open!
Join our Cognitive Neuroscience & Psychology lab in Reggio Emilia 🇮🇹 – a vibrant, livable city.
We study cognitive control, perception-action links & social cognition using EEG, behavioral, mathematical & computational modeling.
🧠👇
We are very excited that Samuel Schmück has been awarded a Leverhulme Trust Early Career Fellowship for a great project on speech analytics and under-represented language varieties in speech technology. Many congratulations Sam!
@samschmueck.bsky.social @leverhulme.ac.uk
Hot off the press! My tutorial on ultrasound data collection & analysis is now out. Open Access. Part of a special issue in the Journal of the Phonetic Society of Japan, with lots of other cool studies. Articulatory phonetics is going strong in Japan!
www.jstage.jst.go.jp/article/onse...
@rpuggaardrode.bsky.social @matyak.bsky.social as promised here is the preprint arxiv.org/abs/2505.23339 and here is a GitHub repo with the 3D model files: github.com/phoneticslab...
And also a paper accepted at CogSci 2025! ✨
➡️ Phonetic accommodation and inhibition in a dynamic neural field model
arxiv.org/abs/2502.01210
We have two papers accepted at #Interspeech2025! ✨
➡️ Nosey: Open-source hardware for acoustic nasalance
arxiv.org/abs/2505.23339
➡️ Articulatory strategy in vowel production as a basis for speaker discrimination
arxiv.org/abs/2505.20995
+ I'll also be giving a survey talk!
See you in Rotterdam! 🇳🇱
Glossogram with dark red indicating constriction and a blue diagonal (a contracted tongue compartment) demonstrating peristaltic transfer of a water bolus from the oral to the pharyngeal cavity. This is easiest to explain as sequential extension of neuromuscular compartments of the tongue.
my lab (lacns.github.io) at @mpi-nl.bsky.social and @dondersinst.bsky.social is recruiting for two PhD and two postdoctoral positions funded by an @erc.europa.eu Consolidator - come join us!
PhD: www.mpi.nl/career-educa...
Postdoc: www.mpi.nl/career-educa...
(please share widely)
GitHub integration, versioning (although I don't think it shows version diffs), DOIs, guaranteed by EU/CERN. It is a bit more complicated (lots of options) and I don't think it has anonymous repos like OSF. But it seemed best for long-term storage + you can see the file directory on main page...!
Me too - I've started to host more things on Zenodo instead.
As part of the #ViTraLiP transnational course, our virtual guest was @samkirkham.bsky.social from Lancaster University. During our exchange in #Paris, we were able to attend his talk at the #SRPP colloquium. Thanks to @lppparis.bsky.social for inviting us! Read more here: tinyurl.com/vitralip.
yeah, it's pretty basic - it really shouldn't cost what it does! Our system isn't perfect, as it doesn't come with built-in calibration etc etc but it's also just two microphones into your audio interface of choice... so it gives the flexibility to do whatever you want with it. Will share it soon!
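Since the system is just two microphones (nasal and oral channels), the core measure is simple to compute yourself. A minimal sketch of the standard nasalance ratio, using RMS amplitude over a frame — the sample values below are made up for illustration:

```python
# Standard nasalance measure from a two-microphone setup:
# nasalance (%) = nasal amplitude / (nasal + oral amplitude) * 100,
# computed here over one frame using RMS amplitude.
import math

def rms(samples):
    """Root-mean-square amplitude of one frame of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def nasalance(nasal_frame, oral_frame):
    """Nasalance percentage for time-aligned nasal/oral frames."""
    n, o = rms(nasal_frame), rms(oral_frame)
    return 100.0 * n / (n + o)

# Hypothetical frames: strong nasal channel, weak oral channel,
# as you'd expect during a nasal segment.
score = nasalance([0.8, -0.7, 0.9, -0.8], [0.2, -0.1, 0.2, -0.2])
```

In practice you'd compute this frame-by-frame over time-aligned channels; calibration mostly amounts to matching the two microphones' gains.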
@rpuggaardrode.bsky.social @matyak.bsky.social we'll have a preprint online by the end of next week when we submit the final version - have made a note to share it with you! (let me know if it helps to have a pre-final version sooner though).
2. Nosey: Open-source hardware for acoustic nasalance - led by PhD student Maya Dewhurst, with Jack Collins, Roy Alderton & Sam Kirkham
(psst there's 3D printing in there)