Very excited to have you here and looking forward to working with you :)
11.02.2026 20:16 · @coganlab.bsky.social
The Cogan Lab at Duke University: investigating speech, language, and cognition using invasive human neural electrophysiology. http://coganlab.org
Thank you to the authors
@princetonupress.bsky.social Neuro
for your work, and we look forward to following more of it!
CC:
@timbuschman.bsky.social,
@tafazolisina.bsky.social
❓3️⃣: Do tasks of differing complexity occupy differing amounts of working memory capacity? Or are complex tasks abstracted away such that they occupy the same capacity as simple tasks?
11.02.2026 15:56
❓2️⃣: When and where in the brain is it determined which neural populations to amplify and suppress to implement a given task? And why?
11.02.2026 15:56
❓1️⃣ (cont'd): What is the elementary unit of a task? Under a programming analogy, what is/are the simplest function(s) that the brain represents and combines to create more complex functions? Conversely, what is the most complex possible task: our search for meaning?
11.02.2026 15:56
❓1️⃣: The color/shape categorization and response direction subtasks can be broken into even smaller subtasks (e.g., look at the fixation cross, remember what color red & green are or what a bunny or a tee is, look at the corner of a box). Is this turtles all the way down (and up)?
11.02.2026 15:56
🩶2️⃣: The research question is simple, intuitive, and practical, yet very robustly tested.
🩶3️⃣: The cross-decoding analyses were a nice way of assessing whether sensorimotor representations transferred across tasks.
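The logic of a cross-decoding analysis can be sketched in a few lines: fit a linear decoder on trials from one task, then test it unchanged on trials from the other. Everything below (dimensions, the shared coding axis, the noise level, the least-squares decoder) is a made-up illustration, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: firing rates (trials x neurons) for two stimulus
# classes, recorded during two different tasks. If representations are
# shared, a decoder fit on task A should transfer to task B.
n_trials, n_neurons = 200, 30
labels_a = rng.integers(0, 2, n_trials)
labels_b = rng.integers(0, 2, n_trials)

# Simulate a class-coding axis shared across both tasks (assumption).
coding_axis = rng.normal(size=n_neurons)
X_a = np.outer(labels_a - 0.5, coding_axis) + 0.5 * rng.normal(size=(n_trials, n_neurons))
X_b = np.outer(labels_b - 0.5, coding_axis) + 0.5 * rng.normal(size=(n_trials, n_neurons))

# Fit a least-squares linear decoder on task A only (intercept appended).
w, *_ = np.linalg.lstsq(np.c_[X_a, np.ones(n_trials)], labels_a - 0.5, rcond=None)

def decode(X, w):
    # Threshold the linear readout at zero to recover class labels.
    return (np.c_[X, np.ones(len(X))] @ w > 0).astype(int)

within_acc = (decode(X_a, w) == labels_a).mean()  # train and test on task A
cross_acc = (decode(X_b, w) == labels_b).mean()   # train on A, test on B
print(f"within-task: {within_acc:.2f}, cross-task: {cross_acc:.2f}")
```

Because the simulated coding axis is shared, cross-task accuracy here is high; with task-specific axes it would drop to chance, which is what the cross-decoding test is designed to detect.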
🩶1️⃣: Given the lack of an S2 task, this paper elegantly minimized the learning load on the monkeys while still testing its core question of whether tasks are composed of shared sensorimotor representations.
11.02.2026 15:56
Last week, Jim Zhang (4th-year PhD student,
@DukeBrain
) presented Sina Tafazoli's paper on building compositional tasks with shared neural subspaces. This 🧵 explores our thoughts (🤍 & ❓).
Come by tomorrow morning to see Baishen's work on verbal working memory!
19.11.2025 01:12
Come by this morning to see Areti's poster!
17.11.2025 14:59
At #Sfn2025?
Come see some of the lab's posters this afternoon!
Stop by to say hello and see some great science!
#Sfn2025 #Neuroscience #neuroskyence
Lastly (not least):
Wed. Nov 19 8am-12pm: 411.11 / MM10
Sensory-motor mechanisms for verbal working memory*
Postdoc Baishen Liang will be presenting his work on sensory-motor transformations for vWM
@gregoryhickok.bsky.social
*Also presenting at APAN
Next:
Mon. Nov 17 8am-12pm: 173.10 / S11
Multimodal sensory-motor transformations for speech
@dukeengineering.bsky.social PhD Student Areti Majumdar will be presenting her work on multimodal sensory-motor transformations for speech
Then:
Sun. Nov 16 1pm-5pm: 142.11 / LL17
Computational hierarchies of intrinsic neural timescales for speech perception and production
Former CRS @nicoleliddle.bsky.social (now at UCSD Cog Sci) will be presenting her work on intrinsic timescales and speech perception/production
Next:
Sun. Nov 16 1pm-5pm: 142.06 / LL12
Hierarchical Speech Encoding in Non-Primary Auditory Regions*
Postdoc Nanlin Shi will be presenting his work on speech encoding in non-canonical areas
*Also presenting at APAN
Then:
Sun. Nov 16 1pm-5pm: 142.05 / LL11
Verbal working memory is subserved by distributed network activity between temporal and frontal lobes
Former Neurosurgery Resident Daniel Sexton (now at @stanfordnsurg.bsky.social) will be presenting his work on network decoding of verbal WM
Next:
Sun. Nov 16 1pm-5pm: 137.10 / HH2
Intracranial EEG Correlates of Concurrent Demands on Cognitive Stability and Flexibility
Undergraduate Erin Burns and CNAP PhD Student Jim Zhang will present work from our lab and @tobiasegner.bsky.social Lab on cognitive control
First up:
Sun. Nov 16 1pm-5pm: 126.20 / T11
Automated speech annotation achieves manual-level accuracy for neural speech decoding
@dukeengineering.bsky.social PhD Student Zac Spalding and Duke Kunshan undergrad Ahmed Hadwan will present work on validating automated speech alignment for BCI
Coming to San Diego for SfN and/or APAN? Come check out the intracranial work from the lab (7 posters)! There's a bit of everything this year, so come say hello!
#Sfn2025 #Neuroscience #neuroskyence
@dukebrain.bsky.social @dukeneurosurgery.bsky.social @dukeengineering.bsky.social
Come by tomorrow morning to hear about verbal working memory!
12.09.2025 23:16
Stop by this afternoon to see some intracranial speech decoding in the hippocampus and to say hello!
12.09.2025 17:59
Saturday Sept. 13, 11am-12:30pm, Poster Session C
C54: Baishen Liang (Postdoctoral Associate) will be presenting his work on sensory-motor mechanisms for verbal working memory.
Hope to see you all there!
Friday Sept 12 4:30pm-6:00pm, Poster Session B
B70: Yuchao Wang (Rotation CNAP PhD Student) will be presenting his work on auditory pseudoword decoding in the hippocampus.
Coming to DC for SNL later this week?
Come check out our posters on speech decoding and verbal working memory using intracranial recordings!
@snlmtg.bsky.social
#SNL2025
❓3️⃣: In Figs. 4 and 5, do you obtain similar results if you operate directly on the spike trains instead of on the PCA-reduced spike trains? Why is PCA necessary first?
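For readers unfamiliar with the preprocessing the question refers to, a minimal NumPy sketch of PCA over binned spike counts; all dimensions and data here are synthetic stand-ins, not the paper's recordings or its exact reduction step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binned spike counts: trials x neurons x time bins.
n_trials, n_neurons, n_bins = 100, 50, 20
spikes = rng.poisson(lam=2.0, size=(n_trials, n_neurons, n_bins)).astype(float)

X = spikes.reshape(n_trials, -1)   # flatten to trials x (neurons * bins)
X -= X.mean(axis=0)                # center each feature before PCA

# PCA via SVD; keep the top-k components.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 10
X_pca = U[:, :k] * S[:k]           # trials x k low-dimensional scores

var_kept = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"reduced {X.shape[1]} -> {k} dims, variance kept: {var_kept:.2f}")
```

The usual motivations for this step are denoising and computational tractability; the question above asks whether the downstream results in Figs. 4 and 5 actually depend on it.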
Thank you to the authors for your work!
cc: Alexis Arnaudon, Mauricio Barahona, Pierre Vandergheynst
If separate animals were treated as separate manifolds with an embedding-agnostic MARBLE, would you still expect an informative latent space to be learned without any need for post-hoc alignment?
09.09.2025 14:51
❓2️⃣: It seems that a linear transformation between MARBLE representations of different animals was necessary because the same information is present in the latent space, but not necessarily with the same ordering... (cont'd)
09.09.2025 14:51
❓1️⃣: It is stated that non-neighbors (both within and across manifolds) are negative samples (mapped far apart) during the contrastive learning step. Does treating non-neighbors within and across manifolds as similarly "distant" make larger distances in the latent space less interpretable?
09.09.2025 14:51
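One way to make question 1️⃣ concrete: in a generic margin-based contrastive loss (an illustrative stand-in, not MARBLE's actual objective), every non-neighbor pair is pushed past the same fixed margin, after which its distance contributes nothing to the loss, so "moderately far" and "very far" become indistinguishable to the objective:

```python
import numpy as np

def contrastive_loss(z_i, z_j, is_neighbor, margin=1.0):
    # Generic margin-based contrastive objective (assumed for
    # illustration): neighbors are pulled together, and every
    # non-neighbor -- within or across manifolds -- is pushed past
    # the same fixed margin.
    d = np.linalg.norm(z_i - z_j)
    if is_neighbor:
        return d ** 2                     # pull neighbors together
    return max(0.0, margin - d) ** 2      # push non-neighbors past margin

z_a = np.array([0.0, 0.0])
z_near = np.array([0.1, 0.0])
z_far = np.array([5.0, 0.0])

# Beyond the margin the non-neighbor loss (and its gradient) is zero,
# regardless of how far the pair is -- one reading of why distances
# much larger than the margin may carry no calibrated meaning.
print(contrastive_loss(z_a, z_near, True))   # small pull loss
print(contrastive_loss(z_a, z_far, False))   # 0.0: already past margin
```

Under this kind of loss, only distances near or below the margin are shaped by training, which is the interpretability concern the question raises.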