
Erin Campbell

@erinecampbell.bsky.social

104 Followers  |  93 Following  |  47 Posts  |  Joined: 25.10.2023

Latest posts by erinecampbell.bsky.social on Bluesky

Thanks for putting this together! May I join?

31.07.2025 19:38 | 👍 1  🔁 0  💬 0  📌 0

Data collection is underway!

We're using mobile eye tracking to study action comprehension in dogs. Django is helping us understand how dogs see and interpret our actions. More coming soon! 🐶👁️ #Science #DogResearch #CognitiveScience

Thanks @asab.org for funding this project!

14.07.2025 09:15 | 👍 53  🔁 6  💬 2  📌 1

I was finally able to make these visualizations of what "light", "dark", and other modifiers do to colors jofrhwld.github.io/blog/posts/2...

14.07.2025 18:19 | 👍 24  🔁 10  💬 2  📌 0

ooooh, can you share the % that end up being real / still-valid addresses?

14.07.2025 01:55 | 👍 0  🔁 0  💬 1  📌 0

NASA is more than rockets and moonwalks. NASA is behind much of our everyday technology. From space discovery to Air Jordans to CAT scans, NASA has played a role. We get it all on less than a penny of every federal dollar. Now their science may be gutted by 50%.
#NASADidThat

10.07.2025 22:39 | 👍 8084  🔁 2628  💬 263  📌 185
Jenna Norton and Naomi Caselli in front of Naomi's poster at the science fair for canceled grants.

At the Science Fair for canceled grants, I had the privilege of speaking with @naomicaselli.bsky.social. Her team (which includes deaf researchers) was making breakthroughs to better identify & address language deprivation when the Trump administration terminated their grant.

10.07.2025 04:31 | 👍 46  🔁 19  💬 1  📌 1

Okay y'all, gather round for a chat. It's been a roller coaster, and I thought I'd share what we've learned. 🧵 (1/16)
bsky.app/profile/luck...

10.07.2025 15:59 | 👍 43  🔁 25  💬 1  📌 2
BTSCON 2025 is now seeking submission proposals for their annual conference. All session submissions are due July 31st. You can submit on the website: https://bigteamscienceconference.github.io/submissions/. We are also excited to announce that registration is now live!

Session submission and registration are now open for BTSCON 2025!

bigteamscienceconference.github.io/submissions/

@abrir.bsky.social @psysciacc.bsky.social @manybabies.org

07.07.2025 22:55 | 👍 13  🔁 20  💬 1  📌 2
Deaf scientists hit by drastic NIH cuts: the research community must support them

Severe blows to the 'deaf-scientist pipeline' must not mean abandoning its best practices. Here is how to support current and future students, says Wyatte C. Hall

https://go.nature.com/4eDVjqp

08.07.2025 13:13 | 👍 30  🔁 19  💬 1  📌 2
screenshot of our public handbook

every year my lab does a re-read + edit of our Handbook, a documentation resource for how we do science

this year we also updated our Public Handbook, an open-access version for folks wanting to improve their own docs

it's at handbook-public.themusiclab.org and available for noncommercial re-use

23.06.2025 01:33 | 👍 134  🔁 28  💬 4  📌 1

Exciting news - three years after visiting @amymlieberman.bsky.social in Boston for 6 wonderful weeks, our project on joint attention and sign familiarity in ASL has been published!

10.06.2025 20:02 | 👍 9  🔁 2  💬 1  📌 1

it's amazing how chatgpt knows everything about subjects I know nothing about, but is wrong like 40% of the time in things I'm an expert on. not going to think about this any further

08.03.2025 00:13 | 👍 12420  🔁 3113  💬 88  📌 106

As we age, we move slower and less precisely. But how much, exactly?

We analyzed one of the largest datasets on motor control to date: 2,185 adults performing a reaching task.

Findings:
• Reaction time: -1.2 ms/year
• Movement time: -2.3 ms/year
• Precision: -0.02°/year
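To make those rates concrete, here's a rough projection (my own arithmetic, not from the paper; it assumes the per-year rates stay roughly linear across adulthood and takes the magnitudes at face value):

```python
# Illustrative projection of the reported per-year rates.
# Assumption: rates are roughly linear across adulthood (not claimed in the post).
rates = {
    "reaction time": (1.2, "ms"),
    "movement time": (2.3, "ms"),
    "precision": (0.02, "deg"),
}

years = 50  # e.g., from age 20 to age 70
for measure, (per_year, unit) in rates.items():
    print(f"{measure}: {per_year * years:g} {unit} cumulative change over {years} years")
```

Over 50 years of adulthood, that works out to roughly 60 ms of reaction time, 115 ms of movement time, and 1° of precision.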

tinyurl.com/f9v66jut

1/2

08.05.2025 00:33 | 👍 21  🔁 6  💬 1  📌 1

those of us staring down the @bucld.bsky.social #BUCLD50 deadline feel seen.

19.05.2025 21:08 | 👍 4  🔁 1  💬 0  📌 0
Graphical abstract from the Journal of Neurophysiology with 3 panels. The first panel is titled "Task" with the text: "Record responses in the inferior colliculus (IC) while monkeys perform a multi-modal localization task." A diagram depicts a behavioral task requiring a subject to fixate their eyes on a central light, wait for a target (auditory, visual, or both at a single location) to appear, and then move their eyes to the location of the target. The second panel is titled "Local Field Potential (LFP)" with the text: "Visually-evoked responses to both fixation and target lights." Two figures show the average local field potential (LFP) from multiple recording sites over time during a trial, showing a response that deviates from the pre-stimulus baseline in response to the fixation light (left figure) and visual targets (right figure). Finally, the third panel is titled "Single-Unit Spiking Activity" with the text: "Visually-induced modulation of auditory responses even when the visual spiking response is weak." Two figures follow. The first figure is a peri-stimulus time histogram (PSTH) from one neuron, showing the response to a visual, auditory, and audiovisual target over time. The second figure is a bar plot quantifying the first figure, showing that the audiovisual response has a lower firing rate than the auditory response, despite the visual response for this neuron being near zero. Below the 3 main panels of the graphical abstract is a footer with the logo of the American Physiological Society and the Journal of Neurophysiology.

I'm happy to share a new paper from my PhD research!

Exciting work about how visual info helps us process sound, and an example of federal funding that benefits all of us - from your health to community health.

doi.org/10.1152/jn.0...

With @jmgrohneuro.bsky.social & Jesse Herche

Thread: 🧵👇

1/10

14.05.2025 20:13 | 👍 18  🔁 7  💬 2  📌 1
Alt: Captain Picard of Star Trek in uniform seriously says to someone off camera "I would very much like to hear your thoughts" (which appears as white text on the bottom)

Daz @drdazzy.bsky.social and I are current board members of SLLS (slls.eu/board/) and are seeking feedback about TISLR conferences.

Anyone who's interested in signed languages can respond, even if you haven't been to TISLR. Please share widely!

Thank you for your time!

gu.live/RWCkP

13.05.2025 18:52 | 👍 3  🔁 6  💬 0  📌 0
Alt: an animation of "that's all folks" being written in cursive. This is from the end of the Looney Tunes cartoons.

Now check out the paper!!

📖 Paper: ldr.lps.library.cmu.edu/article/id/8...
📊 Data: osf.io/dcnq6/
💻 Code: github.com/BergelsonLab...

12.05.2025 18:34 | 👍 5  🔁 0  💬 1  📌 0

Maybe blind children are doing something different with the language input 🤷‍♀️🤷‍♀️🤷‍♀️ (underspecified, I know)

Maybe language input supports language development regardless of vision status, but without vision, it takes a little longer to derive meaning from language input.

10/N

12.05.2025 18:34 | 👍 5  🔁 0  💬 1  📌 0

Back to the present paper!

My interpretation?

The input doesn't look *that* different.

I don't feel compelled by the explanation that parents of blind children talk to them in some special magic way that allows them to overcome initial language delays.

So then what?

9/N

12.05.2025 18:34 | 👍 2  🔁 0  💬 1  📌 0

Links to those two papers ⬇️

🏠 Deep dive into vocabulary in blind toddlers: onlinelibrary.wiley.com/doi/abs/10.1...

👻 Production of "imperceptible" words (in my own biased opinion, this one is a banger)
direct.mit.edu/opmi/article...

12.05.2025 18:34 | 👍 1  🔁 0  💬 1  📌 0
A graph showing blind and sighted children's likelihood of saying words that were visual, auditory, or abstract. Blind children were significantly less likely to say visual words, but did not significantly differ on auditory or abstract words.

A multi-panel figure (A, B, C) showing the relationship between perceptual ratings and word production in sighted and blind children.

Panel A: Two line graphs titled Predicted Probability of Word Production vs. Perceptual Strength. Left graph: Sighted children; right graph: Blind children. Both graphs show predicted word production probability increasing with perceptual strength (x-axis range 1โ€“5). For sighted children, the increase is steeper for visual words. Density plots around the scatterplot indicate distributions of word ratings.

Panel B: Two line graphs titled Predicted Probability of Word Production vs. Perceptual Exclusivity. Left graph: Sighted children; right graph: Blind children. For blind children, visual words that are exclusively visual show a sharp drop-off in likelihood of production with increasing modality exclusivity. Density plots again show distributions.

Panel C: A scatterplot showing individual words plotted by Perceptual Strength (x-axis, 1โ€“5) and Perceptual Exclusivity (y-axis, 0โ€“1). Words are colored by modality: green for non-visual words, blue for visual words. Words like "blue," "cloud," "see," "hear," "white," and "black" cluster at high strength and exclusivity (top right), while function words like "about," "because," and "how" cluster at low values (bottom left).


Btw, in other work: blind toddlers are less likely than sighted toddlers to say visual words

An effect that is specific to words that are exclusively visual (and don't really have an auditory/tactile/olfactory/etc. association)

But!! Blind children still do produce words like "blue" or "see" as early as 16 months!
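The predicted-probability curves in the figures above come from the kind of model sketched here: regress whether a child produced a word on the word's perceptual rating, group, and their interaction. This is a hypothetical reconstruction on simulated data; the column names and exact specification are my assumptions, not the paper's:

```python
# Hypothetical sketch of a word-production model: logistic regression with a
# perceptual-strength x group interaction. Data are simulated, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
strength = rng.uniform(1, 5, n)                 # perceptual strength rating, 1-5
group = rng.choice(["sighted", "blind"], n)
slope = np.where(group == "sighted", 0.9, 0.5)  # simulate a steeper rise for sighted
p = 1 / (1 + np.exp(-(-2.0 + slope * strength)))
produced = rng.binomial(1, p)

df = pd.DataFrame({"produced": produced, "strength": strength, "group": group})
fit = smf.logit("produced ~ strength * group", data=df).fit(disp=False)
print(fit.params)  # the strength:group term captures the group difference in slope
```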

12.05.2025 18:34 | 👍 4  🔁 0  💬 1  📌 0

I expected differences in these "visual" words:

🌟 Maybe parents of blind kiddos would use them more? (giving extra visual description)
🌟 Maybe parents of blind kids would use them less? (instead talking about sounds or textures, idk)

Nope! Similar.

8/N

12.05.2025 18:34 | 👍 2  🔁 0  💬 1  📌 0
Violin plots comparing the proportion of temporally-displaced verbs (past, future, hypothetical) and the proportion of highly visual words across groups. Shown also are bar graphs that depict the proportions of other categories of words. The proportion of temporally-displaced words is slightly higher for blind children (34% as opposed to 29% for sighted children). Present-tense verbs comprise roughly half of the verbs for both groups, and uncategorized words comprise the rest. There was no significant difference in the proportion of visual words. As shown on the bar graph, roughly 44% of the input is multimodal words, followed by 10% visual words, and 5% auditory words. Roughly 40% of the words for both groups were amodal, meaning that they were not strongly associated with any sensory experience.

Lastly, what do parents talk about?

Based on verb tense, we found that parents of blind kids seem to talk more about past, future, and hypothetical events than parents of sighted kids do.

We saw no difference in how often parents used highly-visual words (see, mirror, blue, sky).

7/N

12.05.2025 18:34 | 👍 4  🔁 0  💬 1  📌 0
Two violin plots depicting the linguistic properties of language input to blind and sighted children. The mean length of utterances to blind children ranges from roughly 4-7 morphemes, compared to 4-6 morphemes per utterance in input to sighted children; this did not differ across groups. Type-token ratio (unique words per word) ranged from 0.55 to 0.7 and also did not differ across groups.

Is input to blind kids more lexically diverse or morphosyntactically complex?

No and no: similar MLU (mean length of utterance) and TTR (type-token ratio) across groups.
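A toy sketch of how both measures are computed, on made-up utterances (counting words rather than morphemes for MLU, for simplicity):

```python
# Toy MLU and TTR computation on made-up utterances.
# Real MLU counts morphemes; this sketch counts words for simplicity.
utterances = [
    "do you see the big dog",
    "the dog is running",
    "look at that",
]

tokens = [w for utt in utterances for w in utt.split()]

mlu = len(tokens) / len(utterances)   # mean length of utterance (in words here)
ttr = len(set(tokens)) / len(tokens)  # type-token ratio: unique words per word

print(f"MLU = {mlu:.2f}, TTR = {ttr:.2f}")  # MLU = 4.33, TTR = 0.85
```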

6/N

12.05.2025 18:34 | 👍 4  🔁 0  💬 1  📌 0
3 panel plot depicting the number of conversational turns in language input for blind children vs. sighted children, and the proportion of child-directed speech in language input for blind children vs. sighted children. Blind and sighted children are exposed to roughly 15-80 conversational turns per hour, but this does not differ by group. For blind children, child-directed speech comprises 55% of the input, vs. 57% of the input to sighted children; this does not differ significantly across groups. Shown also, adult-directed speech comprises 37% of the input for both blind and sighted children.

Do blind and sighted kids differ in the amount of interaction?

Nope! Blind and sighted kids participate in a similar number of conversational turns and get a similar amount of speech directed *to* them (as opposed to directed to adults, etc.)

5/N

12.05.2025 18:34 | 👍 4  🔁 0  💬 1  📌 0
A two-panel figure labeled Word Count Measures comparing adult speech to sighted and blind children.

Panel A: A paired violin plot showing Adult Word Count (per hour) for sighted (left, purple) and blind (right, light blue) groups. Data points are connected with light gray lines showing individual differences. The distributions are similar; both center around ~1000 words/hour. A black dot and error bar represent the mean and confidence interval for each group. "ns" (not significant) indicates no statistical difference between groups.

Panel B: Similar paired violin plot for Manual Word Count (per hour). Sighted group (purple) and blind group (light blue) distributions both center around ~2200-2500 words/hour, with substantial individual variability. Again, means are marked with black dots and "ns" indicates no significant group difference.

First, do parents of blind children talk more?

Nope! Doesn't seem to matter if we measure it with LENA's automated word count (left) or by counting the words in our transcriptions (right). Kids vary a lot in the number of words they hear, but that doesn't vary by group.
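Because each blind child has a matched sighted partner, the natural comparison here is paired. A minimal sketch with made-up counts (not the paper's data or analysis code):

```python
# Paired comparison of hourly adult word counts across matched pairs.
# Numbers are invented for illustration; the real analysis is in the paper.
from scipy.stats import ttest_rel

blind_awc   = [950, 1100, 870, 1300, 1020]   # adult words/hour, blind children
sighted_awc = [1000, 1050, 900, 1250, 1100]  # their matched sighted partners

t, p = ttest_rel(blind_awc, sighted_awc)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```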

4/N

12.05.2025 18:34 | 👍 4  🔁 0  💬 1  📌 0

Next, a 7-year annotation effort: a small army of RAs from @bergelsonlab.bsky.social transcribed 40 minutes per recording

→ 1200 minutes of fully transcribed speech, ~65,000 words
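The arithmetic, as a quick sanity check (assuming the 15 blind + 15 matched sighted participants described elsewhere in the thread):

```python
# Sanity check of the annotation numbers.
recordings = 15 + 15      # 15 blind children + 15 matched sighted children
minutes_each = 40         # transcribed minutes per daylong recording
total_minutes = recordings * minutes_each
print(total_minutes)                  # 1200 minutes, as stated
print(round(65_000 / total_minutes))  # ~54 transcribed words per minute
```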

3/N

12.05.2025 18:34 | 👍 2  🔁 0  💬 1  📌 0
Photo of the storefront of the Durham Pack & Ship, with a crosswalk in front of it. A cartoon image of a girl in a Duke shirt is overlaid on top of the image. The girl is running, has a panicked expression, and is holding a LENA recorder. There is a label on top of the image that says "Assisted by AI".

If that sample sounds small, know that I am patting myself on the back for even reaching fifteen!

(This involved driving hours to homes, yoga classes…mailing recorders to families during the pandemic and becoming close friends with the Durham Pack & Ship…)

actual photo of me, 4th year of grad school

12.05.2025 18:34 | 👍 2  🔁 0  💬 2  📌 0

15 blind infants wore LENA recorders for a day to capture language input in their daily lives.

We matched each blind participant to a sighted participant based on age, gender, maternal ed., and number of siblings in the household.
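One-to-one matching on covariates like these is often done greedily by nearest neighbor. A hypothetical sketch; the field names, weights, and distance function here are invented for illustration and are not the paper's procedure:

```python
# Hypothetical greedy nearest-neighbor matching of blind to sighted children.
from dataclasses import dataclass

@dataclass
class Child:
    id: str
    age_months: float
    gender: str
    maternal_ed_years: float
    n_siblings: int

def distance(a: Child, b: Child) -> float:
    # Match only within gender; otherwise sum absolute covariate differences.
    if a.gender != b.gender:
        return float("inf")
    return (abs(a.age_months - b.age_months)
            + abs(a.maternal_ed_years - b.maternal_ed_years)
            + abs(a.n_siblings - b.n_siblings))

def match(blind: list[Child], sighted: list[Child]) -> dict[str, str]:
    pairs, pool = {}, list(sighted)
    for b in blind:
        best = min(pool, key=lambda s: distance(b, s))
        pairs[b.id] = best.id
        pool.remove(best)  # each sighted child is matched at most once
    return pairs
```

Greedy matching is order-dependent; an optimal variant would minimize the total distance across all pairs instead.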

2/N

12.05.2025 18:34 | 👍 2  🔁 0  💬 1  📌 0

Blind toddlers often experience language delays. Blind adults do not.

In our new paper at @langdevres.bsky.social, we ask whether differences in language input could help them catch up:

Do parents speak differently to blind children than to sighted children?

(Barely... read on for details)

🧪 1/N

12.05.2025 18:34 | 👍 30  🔁 7  💬 1  📌 1
