Thanks for putting this together! May I join?
31.07.2025 19:38

@erinecampbell.bsky.social
Data collection is underway!
We're using mobile eye tracking to study action comprehension in dogs. Django is helping us understand how dogs see and interpret our actions. More coming soon! #Science #DogResearch #CognitiveScience
Thanks @asab.org for funding this project!
I was finally able to make these visualizations of what "light", "dark" and other modifiers do to colors jofrhwld.github.io/blog/posts/2...
14.07.2025 18:19

ooooh, can you share the % that end up being real / still-valid addresses?
14.07.2025 01:55

NASA is more than rockets and moonwalks. NASA is behind much of our everyday technology. From space discovery, to Air Jordans, to CAT scans, NASA has played a role. We get it all on less than a penny of every federal dollar. Now their science may be gutted by 50%.
#NASADidThat
Jenna Norton and Naomi Caselli in front of Naomi's poster at the science fair for canceled grants.
At the Science Fair for canceled grants, I had the privilege of speaking with @naomicaselli.bsky.social. Her team (which includes deaf researchers) was making breakthroughs to better identify & address language deprivation, when the Trump administration terminated their grant.
10.07.2025 04:31

Okay y'all, gather round for a chat. It's been a roller coaster, and I thought I'd share what we've learned. (1/16)
bsky.app/profile/luck...
BTSCON 2025 is now seeking submission proposals for their annual conference. All session submissions are due on July 31st. You can submit on the website https://bigteamscienceconference.github.io/submissions/. We are also excited to say that registration is now live!
Session submission and registration are now open for BTSCON 2025!
bigteamscienceconference.github.io/submissions/
@abrir.bsky.social @psysciacc.bsky.social @manybabies.org
Severe blows to the "deaf-scientist pipeline" must not mean abandoning its best practices. Here is how to support current and future students, says Wyatte C. Hall
https://go.nature.com/4eDVjqp
screenshot of our public handbook
every year my lab does a re-read + edit of our Handbook, a documentation resource for how we do science
this year we also updated our Public Handbook, an open-access version for folks wanting to improve their own docs
it's at handbook-public.themusiclab.org and available for noncommercial re-use
Exciting news - three years after visiting @amymlieberman.bsky.social in Boston for 6 wonderful weeks, our project on joint attention and sign familiarity in ASL has been published!
10.06.2025 20:02

it's amazing how chatgpt knows everything about subjects I know nothing about, but is wrong like 40% of the time in things I'm an expert on. not going to think about this any further
08.03.2025 00:13

As we age, we move slower and less precisely, but how much, exactly?
We analyzed one of the largest datasets on motor control to date: 2,185 adults performing a reaching task.
Findings
• Reaction time: +1.2 ms/year
• Movement time: +2.3 ms/year
• Precision: -0.02°/year
tinyurl.com/f9v66jut
1/2
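For scale, here is a back-of-envelope projection of those per-year rates over an adult lifespan. This is only an illustration that assumes roughly linear effects; the paper's actual modeling is more careful.

```python
# Back-of-envelope projection of the per-year rates quoted in the post above.
# Assumes roughly linear effects; illustrative only.

RT_SLOPE_MS_PER_YEAR = 1.2      # reaction time slows by ~1.2 ms per year
MT_SLOPE_MS_PER_YEAR = 2.3      # movement time slows by ~2.3 ms per year
PRECISION_DEG_PER_YEAR = 0.02   # precision worsens by ~0.02 degrees per year

def projected_change(years: float) -> dict[str, float]:
    """Cumulative change over a given number of years of aging."""
    return {
        "reaction_time_ms": RT_SLOPE_MS_PER_YEAR * years,
        "movement_time_ms": MT_SLOPE_MS_PER_YEAR * years,
        "precision_deg": PRECISION_DEG_PER_YEAR * years,
    }

# From age 20 to 70 (50 years): ~60 ms slower reactions, ~115 ms slower
# movements, and about 1 degree of additional endpoint error.
print(projected_change(50))
```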
those of us staring down the @bucld.bsky.social #BUCLD50 deadline feel seen.
19.05.2025 21:08

Graphical abstract from the Journal of Neurophysiology with 3 panels. The first panel is titled "Task" with the text: "Record responses in the inferior colliculus (IC) while monkeys perform a multi-modal localization task." A diagram depicts a behavioral task requiring a subject to fixate their eyes on a central light, wait for a target (auditory, visual, or both at a single location) to appear, and then move their eyes to the location of the target. The second panel is titled "Local Field Potential (LFP)" with the text: "Visually-evoked responses to both fixation and target lights." Two figures show the average local field potential (LFP) from multiple recording sites over time during a trial, showing a response that deviates from the pre-stimulus baseline in response to the fixation light (left figure) and visual targets (right figure). Finally, the third panel is titled "Single-Unit Spiking Activity" with the text: "Visually-induced modulation of auditory responses even when the visual spiking response is weak." Two figures follow. The first figure is a peri-stimulus time histogram (PSTH) from one neuron, showing the response to a visual, auditory, and audiovisual target over time. The second figure is a bar plot quantifying the first figure, showing that the audiovisual response has a lower firing rate than the auditory response, despite the visual response for this neuron being near zero. Below the 3 main panels of the graphical abstract is a footer with the logo of the American Physiological Society and the Journal of Neurophysiology.
I'm happy to share a new paper from my PhD research!
Exciting work about how visual info helps us process sound, and an example of federal funding that benefits all of us - from your health to community health.
doi.org/10.1152/jn.0...
With @jmgrohneuro.bsky.social & Jesse Herche
Thread:
1/10
Daz @drdazzy.bsky.social and I are current board members of SLLS (slls.eu/board/) and are seeking feedback about TISLR conferences.
Anyone interested in signed languages can respond, even if you haven't been to TISLR. Please share widely!
Thank you for your time!
gu.live/RWCkP
Now check out the paper!!
Paper: ldr.lps.library.cmu.edu/article/id/8...
Data: osf.io/dcnq6/
Code: github.com/BergelsonLab...
Maybe blind children are doing something different with the language input (underspecified, I know)
Maybe language input supports language development regardless of vision status, but without vision, it takes a little longer to derive meaning from language input.
10/N
Back to the present paper!
My interpretation?
The input doesn't look *that* different.
I don't feel compelled by the explanation that parents of blind children talk to them in some special magic way that allows them to overcome initial language delays.
So then what?
9/N
Links to those two papers:
Deep dive into vocabulary in blind toddlers: onlinelibrary.wiley.com/doi/abs/10.1...
Production of "imperceptible" words (in my own biased opinion, this one is a banger)
direct.mit.edu/opmi/article...
A graph showing blind and sighted children's likelihood of saying words that were visual, auditory, or abstract. Blind children were significantly less likely to say visual words, but did not differ significantly on auditory or abstract words.
A multi-panel figure (A, B, C) showing the relationship between perceptual ratings and word production in sighted and blind children. Panel A: Two line graphs titled Predicted Probability of Word Production vs. Perceptual Strength. Left graph: Sighted children; right graph: Blind children. Both graphs show predicted word production probability increasing with perceptual strength (x-axis range 1โ5). For sighted children, the increase is steeper for visual words. Density plots around the scatterplot indicate distributions of word ratings. Panel B: Two line graphs titled Predicted Probability of Word Production vs. Perceptual Exclusivity. Left graph: Sighted children; right graph: Blind children. For blind children, visual words that are exclusively visual show a sharp drop-off in likelihood of production with increasing modality exclusivity. Density plots again show distributions. Panel C: A scatterplot showing individual words plotted by Perceptual Strength (x-axis, 1โ5) and Perceptual Exclusivity (y-axis, 0โ1). Words are colored by modality: green for non-visual words, blue for visual words. Words like "blue," "cloud," "see," "hear," "white," and "black" cluster at high strength and exclusivity (top right), while function words like "about," "because," and "how" cluster at low values (bottom left).
Btw, in other work: blind toddlers are less likely than sighted toddlers to say visual words
An effect that is specific to words that are exclusively visual (and don't really have an auditory/tactile/olfactory/etc. association)
But!! Blind children still do produce words like "blue" or "see" as early as 16 months!
I expected differences in these "visual" words:
Maybe parents of blind kiddos would use them more? (giving extra visual description)
Maybe parents of blind kids would use them less? (instead talking about sounds or textures, idk)
Nope! Similar.
8/N
Violin plots comparing the proportion of temporally-displaced verbs (past, future, hypothetical) and the proportion of highly visual words across groups. Shown also are bar graphs that depict the proportions of other categories of words. The proportion of temporally-displaced words is slightly higher for blind children (34% as opposed to 29% for sighted children). Present-tense verbs comprise roughly half of the verbs for both groups, and uncategorized words comprise the rest. There was no significant difference in the proportion of visual words. As shown on the bar graph, roughly 44% of the input is multimodal words, followed by 10% visual words, and 5% auditory words. Roughly 40% of the words for both groups were amodal, meaning that they were not strongly associated with any sensory experience.
Lastly, what do parents talk about?
Based on verb tense, we found that parents of blind kids seem to talk more about past, future, and hypothetical events than parents of sighted kids.
We saw no difference in the amount that parents used highly-visual words (see, mirror, blue, sky)
7/N
Two violin plots depicting the linguistic properties of language input to blind and sighted children. The mean length of utterances to blind children ranges from roughly 4-7 morphemes, compared to 4-6 morphemes per utterance in input to sighted children. This did not differ across groups. Type-token ratio (unique words divided by total words) ranged from 0.55 to 0.7 and also did not differ across groups.
Is input to blind kids more lexically diverse or morphosyntactically complex?
No and no: Similar MLU and TTR across groups.
6/N
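MLU (mean length of utterance) and TTR (type-token ratio) are straightforward to compute once transcripts exist. A minimal sketch, assuming plain word-level transcripts rather than the morpheme-coded files the actual analysis presumably uses (so MLU here is approximated in words, not morphemes):

```python
# Minimal sketch of the two input measures above; illustrative only.
# Assumes utterances arrive as plain word lists, not morpheme-coded transcripts.

def mean_length_of_utterance(utterances: list[list[str]]) -> float:
    """Average utterance length (words here; morphemes in the actual analysis)."""
    return sum(len(u) for u in utterances) / len(utterances)

def type_token_ratio(utterances: list[list[str]]) -> float:
    """Unique word types divided by total word tokens."""
    tokens = [w.lower() for u in utterances for w in u]
    return len(set(tokens)) / len(tokens)

sample = [["look", "at", "the", "blue", "sky"], ["the", "dog", "is", "running"]]
print(mean_length_of_utterance(sample))        # 4.5
print(round(type_token_ratio(sample), 2))      # 8 unique / 9 tokens = 0.89
```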
3 panel plot depicting the number of conversational turns in language input for blind children vs. sighted children, and the proportion of child-directed speech in language input for blind children vs. sighted children. Blind and sighted children are exposed to roughly 15-80 conversational turns per hour, but this does not differ across groups. For blind children, child-directed speech comprises 55% of the input, vs. 57% of the input to sighted children. This does not differ significantly across groups. Also shown: adult-directed speech comprises 37% of the input for both blind and sighted children.
Do blind and sighted kids differ in the amount of interaction?
Nope! Blind and sighted kids participate in a similar number of conversational turns and get a similar amount of speech directed *to* them (as opposed to directed to adults, etc.)
5/N
A two-panel figure labeled Word Count Measures comparing adult speech to sighted and blind children. Panel A: A paired violin plot showing Adult Word Count (per hour) for sighted (left, purple) and blind (right, light blue) groups. Data points are connected with light gray lines showing individual differences. The distributions are similar; both center around ~1000 words/hour. A black dot and error bar represent the mean and confidence interval for each group. "ns" (not significant) indicates no statistical difference between groups. Panel B: Similar paired violin plot for Manual Word Count (per hour). Sighted group (purple) and blind group (light blue) distributions both center around ~2200-2500 words/hour, with substantial individual variability. Again, means are marked with black dots and "ns" indicates no significant group difference.
First, do parents of blind children talk more?
Nope! Doesn't seem to matter if we measure it with LENA's automated word count (left) or by counting the words in our transcriptions (right). Kids vary a lot in the number of words they hear, but that doesn't vary by group.
4/N
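For illustration, here is a hedged sketch of the kind of matched-pairs comparison behind a figure like that. The file name and column names are hypothetical; the real analysis code is in the repository linked elsewhere in the thread.

```python
# Hedged sketch of a matched-pairs comparison of hourly adult word counts.
# The CSV file and column names are hypothetical, for illustration only.
import pandas as pd
from scipy import stats

# one row per matched blind/sighted pair; values are adult words per hour
df = pd.read_csv("hourly_word_counts.csv")

result = stats.ttest_rel(df["blind_awc"], df["sighted_awc"])
print(f"paired t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```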
Next, a 7-year annotation effort: a small army of RAs from @bergelsonlab.bsky.social transcribed 40 minutes per recording
→ 1,200 minutes of fully transcribed speech, ~65,000 words
3/N
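Those totals follow from simple arithmetic, assuming the 15 blind recordings plus their 15 matched sighted counterparts described in the 2/N post:

```python
# Quick arithmetic behind the annotation totals above (illustrative).
participants = 15 + 15       # blind children plus matched sighted children (assumed)
minutes_each = 40            # transcribed minutes per daylong recording
total_minutes = participants * minutes_each
print(total_minutes)         # 1200 minutes

words_transcribed = 65_000   # approximate total reported in the thread
print(round(words_transcribed / total_minutes))  # ~54 words per transcribed minute
```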
Photo of the storefront of the Durham Pack & Ship, with a crosswalk in front of it. A cartoon image of a girl in a Duke shirt is overlaid on top of the image. The girl is running, has a panicked expression, and is holding a LENA recorder. There is a label on top of the image that says "Assisted by AI".
If that sample sounds small, know that I am patting myself on the back for even reaching fifteen!
(This involved driving hours to homes, yoga classes... mailing recorders to families during the pandemic and becoming close friends with the Durham Pack & Ship...)
actual photo of me 4th year grad school
15 blind infants wore LENA recorders for a day to capture language input in their daily lives.
We matched each blind participant to a sighted participant based on age, gender, maternal ed., and number of siblings in the household.
2/N
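One plausible way to implement that 1:1 matching is sketched below. The matching variables come from the post, but the distance rule and all the data are invented for illustration; the actual procedure may differ.

```python
# Hedged sketch of 1:1 matching on age, gender, maternal education, and siblings.
# All data and the distance rule are made up; illustrative only.
import pandas as pd

blind = pd.DataFrame({
    "id": ["B1", "B2"], "age_mo": [18, 24], "gender": ["F", "M"],
    "maternal_ed_yrs": [16, 12], "n_siblings": [1, 0],
})
sighted_pool = pd.DataFrame({
    "id": ["S1", "S2", "S3"], "age_mo": [17, 25, 30], "gender": ["F", "M", "M"],
    "maternal_ed_yrs": [16, 12, 18], "n_siblings": [1, 0, 2],
})

def match_one(child, pool):
    """Pick the same-gender sighted child closest on age, maternal ed, and siblings."""
    candidates = pool[pool["gender"] == child["gender"]].copy()
    candidates["dist"] = (
        (candidates["age_mo"] - child["age_mo"]).abs()
        + (candidates["maternal_ed_yrs"] - child["maternal_ed_yrs"]).abs()
        + (candidates["n_siblings"] - child["n_siblings"]).abs()
    )
    return candidates.sort_values("dist").iloc[0]["id"]

# A real matching would also keep each sighted child from being reused.
pairs = {row["id"]: match_one(row, sighted_pool) for _, row in blind.iterrows()}
print(pairs)  # e.g. {'B1': 'S1', 'B2': 'S2'}
```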
Blind toddlers often experience language delays. Blind adults do not.
In our new paper at @langdevres.bsky.social, we ask whether differences in language input could help them catch up:
Do parents speak differently to blind children than sighted children?
(Barely... read on for details)
1/N