In just 3 days, the Take Back our Science Rally hits the streets in Washington DC and all across the country! Join a local rally, or if nothing's nearby, start your own pop-up rally. Details at zurl.co/NUhkK
(1/2🧵)
#science
#standupforscience
#rally
#March7
📌 Curious to know more? Visit my website 👉 lnkd.in/d59v_TpH
📌 Looking for a PhD opportunity? The Spring admissions round at SISSA is now open and I’d be delighted to hear from prospective candidates (deadline: March 20, 1pm CET) 👉 lnkd.in/dvf6RuQd
Congratulations!
If you are in Houston - or want to travel - join us at Rice March 12-14 for DeLange XIV, "Brains in Society: Preparing for Neuroscience's Impact on Our Everyday Lives." DeLange.rice.edu
Paid Research Opportunity for Blind Participants!
Have you ever wondered if your blindness has affected your memory, cognitive processing, or other senses?
Georgetown University Medical Center is seeking individuals who are blind from birth or infancy to participate in a behavioral and fMRI research study to explore how the brain regions responsible for visual processing are reorganized to support other functions.
The long-term goal of this research is to understand how to improve sight recovery for people who lost their sight later in life.
Eligibility:
- Must be blind from birth or infancy, and live within 50 miles of Washington DC.
Participation:
- One 4-hour behavioral session
- Three 2-hour sessions in an fMRI scanner. fMRI is a non-invasive brain imaging technique.
Compensation:
- $50 per hour for time spent inside the scanner, and $25 per hour for time outside the scanner
- If you decide to withdraw from the study early, you will be paid for the time you completed to that point
- All transportation costs for each session will be covered by the university.
Interested?
Please submit the interest form found at forms.gle/WQTKT3GDWMjM...,
email SAMPLab_recruitment@georgetown.edu, or call 202-687-8329.
To learn more about this study and related projects, visit samp-lab.facultysite.georgetown.edu/research.
Research plug: we're currently seeking (bilaterally, congenitally) blind adults & Deaf adults for a *paid* online research study (screen reader compatible) on how individuals experience words across perceptual modalities. Ping bergelsonlab@fas.harvard.edu if interested! Reposts welcome! #Blind #Deaf
Are you a junior faculty member interested in spending 2-4 weeks at Princeton Psych? Consider applying for our Microsabbatical program! It’s a fully funded visit for professional development and creating long-term collaborations.
psych.princeton.edu/diversity/mi...
Deadline coming up!
If you're interested in a postdoc position in fMRI + plasticity, individual differences, and a translational twist, email me and/or apply.
Registration for the workshop closes this Sunday (no extensions)! Come join us in what will be the biggest in-person ISRW to date!
#sleeppeeps #cns2026 @cnsmtg.bsky.social
✨ New 3D pose estimation method from my lab! #FMPose3D allows for monocular (i.e. single camera) 2D➡️3D 🔥
Led by Ti Wang & w/ Xiaohang Yu #FMPose3D is SOTA on human & animal 3D benchmarks, & will be integrated into @deeplabcut.bsky.social ⬇️
📝 arxiv.org/abs/2602.05755
➡️ xiu-cs.github.io/FMPose3D/
Breakthrough @nature.com study showcases a new non-invasive treatment for Parkinson's Disease that targets the somato-cognitive action network (SCAN) with personalized neuromodulation for superior outcomes. www.nature.com/articles/s41...
Congratulations to @lillianbehm.bsky.social, Nick Turk-Browne, and a huge team for putting together this paper (out today) on lessons from a decade of attempts to study awake infants with fMRI:
onlinelibrary.wiley.com/doi/10.1111/...
Beautiful work on visual development - with a large cohort of *awake* 2-month olds!
This was such a fun section to write! The review provides an on-ramp to further integrating DL within the developmental field. Existing DNN findings & tools, while not without caveats, open new empirical developmental questions. I'm grateful for NSF's support to my lab as we tackle some of them 👶
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
🚨NEW PREPRINT🚨
www.biorxiv.org/content/10.6...
w/ Giulio Degano and Uta Noppeney
In this work, we use music to investigate how the brain extracts and integrates multisensory information in real-world environments.
🧠🧪 #psychscisky #neuroskyence
TL;DR 🧵👇
Every day a new reason to support articles of impeachment against RFK Jr...find the tools you need to add your support and demand that your Congressperson do the same www.standupforscience.net/impeach-rfkjr
Still accepting applications!
Preprint alert from our lab!
@zhiqingdeng.bsky.social shows how the sensorimotor system reorganizes in people born without hands.
#neuroscience #motor #somatosensory
Dimensionality reduction may be the wrong approach to understanding neural representations. Our new paper shows that across human visual cortex, dimensionality is unbounded and scales with dataset size—we show this across nearly four orders of magnitude. journals.plos.org/ploscompbiol...
Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision, and one in language. Please repost!
Now out in #JNeurosci -- we found changes in medial parietal cortex after manual exploration of everyday real-world objects
doi.org/10.1523/JNEU...
with Beth Rispoli, Vinai Roopchansingh & @cibaker.bsky.social
Congratulations!
Investigating individual-specific topographic organization has traditionally been a resource-intensive and time-consuming process. But what if we could map visual cortex organization in thousands of brains? Here we offer the community a toolbox that can do just that! tinyurl.com/deepretinotopy
This afternoon, I was in a meeting with Jay Bhattacharya, and I can confirm this is *exactly* what he both wants and will be doing.
THIS IS NOT HYPERBOLE.