Screenshot of SfN's grad school fair - highlighted is Booth 66
DYNS logo, with text: An interdisciplinary program focused on the study of how the nervous system generates perception, behavior and cognition.
Curious about the Dynamical Neuroscience #PhD Program at @ucsantabarbara.bsky.social? Come find us at the #SfN2025 Grad School Fair (Booth 66)! 🧠🧪
More info at www.dyns.ucsb.edu.
#AcademicSky #Neuroscience #compneurosky
16.11.2025 19:08
If you're at #SfN25, come chat with us about subretinal implants this afternoon! Poster 122.22, presented by PhD student Emily Joyce
16.11.2025 19:00
I will be presenting the poster "Human-in-the-loop optimisation for efficient intracortical microstimulation temporal patterns in visual cortex" at the Early Career Poster Session #SfN as a TPDA awardee!
Nov. 15, 2025
18:45โ20:45 (PT)
Poster: G5
SDCC Halls C-H
Come discuss!
15.11.2025 22:34
University of California faculty push back against Big Brother cybersecurity mandate
School officials defend software as bulwark against ransomware, but professors fear potential surveillance of their devices
"In February 2024, then-UC President Michael Drake announced all employee computers connected to university networks would be required to install Trellix by May 2025. Campuses failing to comply would face penalties of up to $500,000 per ... incident."
www.science.org/content/arti...
25.10.2025 01:21
Thank you so much for this tip! Infuriating change
13.10.2025 03:02
Good eye! You're right, my spicy summary skipped over the nuance. Color was a free-form response, which we later binned into 4 categories for modeling. Chance level isn't 25% but adjusted for class imbalance (majority class frequency). Definitely preliminary re: "perception", but beats stimulus-only!
27.09.2025 23:53
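For readers curious about the imbalance-adjusted baseline mentioned above: once free-form responses are binned into categories, the honest chance level is the majority-class frequency, not 1/k. A minimal sketch with made-up labels (the category names and counts here are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical color reports, already binned into 4 categories.
labels = np.array(["white"] * 55 + ["yellow"] * 25 + ["blue"] * 12 + ["gray"] * 8)

n_classes = len(np.unique(labels))
naive_chance = 1.0 / n_classes  # 25% -- assumes balanced classes

# Imbalance-adjusted chance: accuracy of always guessing the most frequent class.
counts = np.unique(labels, return_counts=True)[1]
majority_chance = counts.max() / counts.sum()  # 55% for these counts

print(f"naive chance:    {naive_chance:.0%}")
print(f"adjusted chance: {majority_chance:.0%}")
```

A decoder only demonstrates real signal if it beats the adjusted baseline, not the naive 25%.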
Thanks! I hear you, that thought has crossed my mind, too. But IP & money have already held this field back too long... This work was funded by public grants, and our philosophy is to keep data + code open so others can build on it. Still, watch us get no credit & me eat my words in 5-10 years
27.09.2025 23:48
Together, this argues for closed-loop visual prostheses:
💡 Record neural responses
⚡ Adapt stimulation in real-time
👁️ Optimize for perceptual outcomes
This work was only possible through a tight collaboration between 3 labs across @ethz.ch, @umh.es, and @ucsantabarbara.bsky.social!
27.09.2025 02:52
Three bar charts show how well different models predict perception of detection, brightness, and color. Using only the stimulation parameters performs worst. Including brain activity recordingsโespecially pre-stimulus activityโmakes predictions much better across all three perceptual outcomes.
And here's the kicker: 🚨
If you try to predict perception from stimulation parameters alone, you're basically at chance.
But if you use neural responses, suddenly you can decode detection, brightness, and color with high accuracy.
27.09.2025 02:52
Figure showing the ability of different methods to reproduce target neural activity patterns and the limits of generating synthetic responses. Left: A target neural response (bottom heatmap) is compared to recorded responses produced by linear, inverse neural network, and gradient optimization methods. In this example, the inverse neural network gives the closest match (MSE 0.74) compared to linear (MSE 1.44) and gradient (MSE 1.49). Center: A bar plot of mean squared error across all methods shows inverse NN and gradient consistently outperform linear and dictionary approaches. Right: A scatterplot shows that prediction error increases with distance from the neural manifold; synthetic targets (red) have higher error than natural targets (blue), illustrating that the system best reproduces responses within the brain's natural activity space.
We pushed further: Could we make V1 produce new, arbitrary activity patterns?
Yes ... but control breaks down the farther you stray from the brain's natural manifold.
Still, our methods required lower currents and evoked more stable percepts.
27.09.2025 02:52
Figure comparing methods for shaping neural activity to match a desired target response. Left: the target response is shown as a heatmap. Three methods (linear, inverse neural network, and gradient optimization) produce different stimulation patterns (top row) and recorded neural responses (bottom row). Gradient optimization and the inverse neural network yield recorded responses that more closely match the target, with much lower error (MSE 0.35 and 0.50) than the linear method (MSE 3.28). Right: a bar plot of mean squared error across methods shows both gradient and inverse NN outperform linear, dictionary, and 1-to-1 mapping, approaching the consistency of replaying the original stimulus.
Prediction is only step 1. We then inverted the forward model with 2 strategies:
1️⃣ Gradient-based optimizer (precise, but slow)
2️⃣ Inverse neural net (fast, real-time)
Both shaped neural responses far better than conventional 1-to-1 mapping.
27.09.2025 02:52
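The gradient-based strategy above can be illustrated on a toy, fully differentiable stand-in for the forward model (everything here, including shapes and the tanh model, is hypothetical, not the preprint's architecture): freeze the forward model, then run gradient descent on the stimulation pattern itself until the predicted response matches the target.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elec, n_chan = 16, 32

# Toy frozen "forward model": response = tanh(W @ stim); W is fixed, as if trained.
W = rng.normal(0.0, 0.3, (n_chan, n_elec))
forward = lambda stim: np.tanh(W @ stim)

# Target neural response we want to evoke (reachable by construction).
target = forward(rng.uniform(0.0, 1.0, n_elec))

# Gradient descent on the stimulation pattern, NOT on the model weights.
stim = np.zeros(n_elec)
lr = 0.15
mse_start = np.mean((forward(stim) - target) ** 2)
for _ in range(3000):
    pred = forward(stim)
    err = pred - target
    # Chain rule through tanh (constant factors folded into the learning rate).
    grad = W.T @ (err * (1.0 - pred ** 2))
    stim -= lr * grad

mse = np.mean((forward(stim) - target) ** 2)
print(f"MSE: {mse_start:.3f} -> {mse:.5f}")
```

An inverse network amortizes this loop: it is trained once to map target → stimulation directly, trading a little accuracy for millisecond-scale inference, which is what makes real-time use plausible.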
Figure comparing predicted and true neural responses to electrical stimulation. Left panels show two example stimulation patterns (top), predicted neural responses by the forward neural network (middle), and the actual recorded responses (bottom). The predicted responses closely match the true responses. Right panels show bar plots comparing model performance across methods. The forward neural network (last bar) achieves the lowest error (MSE) and highest explained variance (R²), significantly outperforming dictionary-based, linear, and 1-to-1 mapping approaches.
We trained a deep neural network ("forward model") to predict neural responses from stimulation and baseline brain state.
💡 Key insight: accounting for pre-stimulus activity drastically improved predictions across sessions.
This makes the model robust to day-to-day drift.
27.09.2025 02:52
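A toy illustration of why conditioning on pre-stimulus activity helps (synthetic data and a plain least-squares fit, not the preprint's deep network; all shapes and coefficients are made up): if the evoked response depends on both the stimulation and the ongoing brain state, a model that ignores the baseline hits an irreducible error floor that the state-aware model does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_elec, n_chan = 2000, 16, 32

# Synthetic world: response depends on stimulation AND pre-stimulus baseline.
A = rng.normal(size=(n_chan, n_elec))  # stimulation -> response
B = rng.normal(size=(n_chan, n_chan))  # baseline state -> response
stim = rng.uniform(0.0, 1.0, (n_trials, n_elec))
base = rng.normal(size=(n_trials, n_chan))  # session-to-session drift lives here
resp = stim @ A.T + 0.8 * (base @ B.T) + 0.1 * rng.normal(size=(n_trials, n_chan))

def fit_mse(X, Y):
    """Ordinary least squares; returns mean squared error of the fit."""
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.mean((X @ coef - Y) ** 2)

mse_stim_only = fit_mse(stim, resp)
mse_with_base = fit_mse(np.hstack([stim, base]), resp)
print(f"stim-only MSE:     {mse_stim_only:.3f}")
print(f"stim+baseline MSE: {mse_with_base:.3f}")  # much lower
```

The stim-only model is stuck with all the variance contributed by the baseline term, whereas the state-aware model explains it away, which is the same reason conditioning on pre-stimulus activity makes a forward model robust to day-to-day drift.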
Diagram of the experimental setup for measuring electrically evoked neural activity. A stimulation pattern is chosen across electrodes on a Utah array (left). Selected electrodes deliver 167 ms trains of 50 pulses at 300 Hz (middle left), sent via stimulator and amplifier into the visual cortex of a participant (middle). Neural signals are recorded before and after stimulation across all channels, producing multi-unit activity traces (MUAe). The difference between pre- and post-stimulation activity (ΔMUAe) is computed (middle right) and visualized as a heatmap across electrodes, showing localized increases in neural responses (right).
Many in #BionicVision have tried to map stimulation → perception, but cortical responses are nonlinear and drift day to day.
So we turned to 🧠 data: >6,000 stim-response pairs over 4 months in a blind volunteer, letting a model learn the rules from the data.
27.09.2025 02:52
Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested in both computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.
👁️🧠 New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!
TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.
www.biorxiv.org/content/10.1...
27.09.2025 02:52
NSF Graduate Research Fellowship Program (GRFP)
NSF GRFP is out 2.5 months late w/key changes
1. 2nd year graduate students not eligible.
2. "alignment with Administration priorities"
3. Unlike prior years, they DO NOT specify the expected number of awards... that is a BIG problem.
a brief 🧵 w/receipts
www.nsf.gov/funding/oppo...
27.09.2025 00:04
Mouse vs. AI: A Neuroethological Benchmark for Visual Robustness and Neural Alignment
Visual robustness under real-world conditions remains a critical bottleneck for modern reinforcement learning agents. In contrast, biological systems such as mice show remarkable resilience to environ...
🚨 Our NeurIPS 2025 competition Mouse vs. AI is LIVE!
We combine a visual navigation task + large-scale mouse neural data to test what makes visual RL agents robust and brain-like.
Top teams: featured at NeurIPS + co-author our summary paper. Join the challenge!
Whitepaper: arxiv.org/abs/2509.14446
22.09.2025 23:13
Thrilling progress in brain-computer interfaces from UC labs
UC researchers and the patients they work with are showing the world what's possible when the human mind and advanced computers meet.
As federal research funding faces steep cuts, UC scientists are pushing brain-computer interfaces forward: restoring speech after ALS, easing Parkinson's symptoms, and improving bionic vision with AI (that's us at @ucsantabarbara.bsky.social).
🧠 www.universityofcalifornia.edu/news/thrilli...
17.09.2025 17:59
Curious though - many of the orgs leading this effort don't seem to be on @bsky.app yet… Would love to see more #Blind, #Accessibility, and #DisabilityJustice voices here!
31.08.2025 00:49
World Blindness Summit & WBU General Assembly - World Blind Union
Excited to be heading to São Paulo for the World Blindness Summit 2025! ✨
Looking forward to learning from/connecting with blindness organizations from around the globe.
wbu.ngo/events/world...
#WorldBlindnessSummit #Inclusion #Accessibility #Blindness #DisabilityRights
31.08.2025 00:42
Reviewer Code of Conduct - NeurIPS 2025
I appreciate the effort to improve the review process! Wondering what's being done to address poor-quality reviews (the "too many paragraphs in Related Work" → Weak Reject ones)… e.g. #NeurIPS added strong steps to uphold review integrity (neurips.cc/Conferences/...) that #CHI2026 could learn from
09.08.2025 02:00
Epic collage of Bionic Vision Lab activities. From top to bottom, left to right:
A) Up-to-date group picture
B) BVL at Dr. Beyeler's Plous Award celebration (2025)
C) BVL at The Eye & The Chip (2023)
D/F) Dr. Aiwen Xu and Justin Kasowski getting hooded at the UCSB commencement ceremony
E) BVL logo cake created by Tori LeVier
G) Dr. Beyeler with symposium speakers at Optica FVM (2023)
H, I, M, N) Students presenting conference posters/talks
J) Participant scanning a food item (ominous pizza study)
K) Galen Pogoncheff in VR
L) Argus II user drawing a phosphene
O) Prof. Beyeler demoing BionicVisionXR
P) First lab hike (ca. 2021)
Q) Statue for winner of the Mac'n'Cheese competition (ca. 2022)
R) BVL at Club Vision
S) Students drifting off into the sunset on a floating couch after a hard day's work
Excited to share that I've been promoted to Associate Professor with tenure at UCSB!
Grateful to my mentors, students, and funders who shaped this journey and to @ucsantabarbara.bsky.social for giving the Bionic Vision Lab a home!
Full post: www.linkedin.com/posts/michae...
02.08.2025 18:12
Bionic Vision - Advancing Sight Restoration
Discover cutting-edge research, events, and insights in bionic vision and sight restoration.
👁️⚡ I spoke with Dr. Jiayi Zhang about her Science paper on tellurium nanowire retinal implants: restoring vision and extending it into the infrared, no external power required.
New materials, new spectrum, new possibilities.
www.bionic-vision.org/research-spo...
#BionicVision #NeuroTech
18.07.2025 00:11
Program – EMBC 2025
At #EMBC2025? Come check out two talks from my lab in tomorrow's Sensory Neuroprostheses session!
🗓️ Thurs July 17 · 8-10AM · Room B3 M3-4
🧠 Efficient threshold estimation
🧠🔬 Deep human-in-the-loop optimization
embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS
16.07.2025 16:54
Program – EMBC 2025
👁️⚡ Headed to #EMBC2025? Catch two of our lab's talks on optimizing retinal implants!
Sensory Neuroprostheses
🗓️ Thurs July 17 · 8-10AM · Room B3 M3-4
🧠 Efficient threshold estimation
🧠🔬 Deep human-in-the-loop optimization
embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS #Retina
13.07.2025 17:24
A group of surgeons in blue scrubs and surgical masks are performing a procedure in a clinical wetlab setting. Dr. Muqit (seated) operates under a ZEISS ARTEVO® 850 surgical microscope, while others observe and assist nearby. A large monitor and medical equipment are visible in the background, along with surgical instruments on a sterile table. The environment is dimly lit, with overhead lights providing focused illumination on the surgical field.
A surgeon in blue scrubs, surgical gloves, and a hair cover is seated and operating under a ZEISS ARTEVO® 850 surgical microscope. He is performing a delicate procedure on a blue surgical model using forceps, while another masked assistant supports from behind. The operating table is covered with a sterile green drape, and medical tubing and instruments are visible around the setup. The environment is dimly lit, highlighting the precision of the surgical training.
A wide view of a surgical training room shows multiple surgeons in blue scrubs and masks working around a ZEISS ARTEVO® 850 digital microscope. One seated surgeon is actively operating on a subretinal surgery model, while others observe and assist. A large overhead visualization arm and a table with imaging and surgical equipment are prominently visible. The lighting is dim except for the illuminated surgical field, emphasizing the precision and focus of the wetlab environment.
Two surgeons in blue scrubs and surgical caps are seated at a ZEISS ARTEVO® 850 digital microscope in a dimly lit operating room. A large monitor displays a high-resolution OCT scan, showing detailed cross-sections of ocular tissue. A green surgical drape, tubing, and imaging equipment are visible around the operating station. The scene highlights the integration of real-time imaging in subretinal surgical training.
🔬👁️ The next-gen #PRIMA chip in action: subretinal surgery training in 🇩🇪 with the Science Corps team, Prof. Yannick Le Mer, and Prof. Dr. Lars-Olof Hattenbach.
3D digital visualization + iOCT = a powerful combo for precision subretinal implant work.
#BionicVision #NeuroTech
📸 via Dr. Mahi Muqit
12.07.2025 15:33
Thrilled to see this one hit the presses!
One of the final gems from Dr. Justin Kasowski's dissertation, showing how checkerboard rastering boosts perceptual clarity in simulated prosthetic vision. 👁️⚡️
#BionicVision #NeuroTech
10.07.2025 17:21
Optica Fall Vision Meeting
Oct 2-5, 2025 · University of Minnesota, Twin Cities, MN
👁️🧠 It's not too late to submit your abstract to Optica's Fall Vision Meeting (FVM) 2025!
Minneapolis/St Paul, Oct 2-5
🧑‍🏫 Featuring talks by Susana Marcos, Austin Roorda, and Gordon Legge
Kickoff at the CMRR!
🗓️ Abstracts due: Aug 8
www.osafallvisionmeeting.org
#VisionScience #VisionResearch
09.07.2025 16:58
I have fond memories from a summer internship there - such a unique place, both geographically & intellectually. Sad to see it go
06.07.2025 05:25
Science Submits CE Mark Application for PRIMA Retinal Implant โ A Critical Step Towards Making It Available To Patients | Science Corporation
Science Corporation is a clinical-stage medical technology company.
👁️🧠 Big step forward for #BionicVision: Science has submitted a CE mark application for the PRIMA retinal implant. If approved, it would be the first #NeuroTech to treat geographic atrophy, a late-stage form of age-related macular degeneration #AMD.
science.xyz/news/prima-c...
24.06.2025 20:24
PhD Student at UC Santa Barbara. Machine Perception / NeuroAI / Computational Neuroscience / Artificial & Natural Intelligence.
I'm studying visual perception, mostly using psychophysics. I work for the CNRS at the École Normale Supérieure in Paris.
Cognitive neuroscientist interested in predictive perception and cognition. Head of www.predictivebrainlab.com
Professor of Cognitive Neuroscience at the FIL, UCL, using human neuroimaging to study how our prior knowledge influences the way we perceive the world. https://www.fil.ion.ucl.ac.uk/team/visual-perception-team/
Vision scientist in Experimental Psychology, University College London. Foreigner. Father. Posting perception, neuroscience, music & I guess reposted memes. He/him. Lab website: http://eccentricvision.com
Welcome to the official account of SPIE, the international society for optics and #photonics. Over the past five years, we have invested over $25M in the international optics community! 💡
Advancing cognition
www.psychonomic.org
Investigating the eye's optics, ocular structures & visual perception through advanced imaging technologies. Developing innovative ways for vision correction.
#VisionScience, #Optics and #Ophthalmology.
University of Rochester, New York
Senior Desk Editor for Life Sciences at Scientific American, covering earth and the environment. (she/her) Posts are entirely my own and not reflective of my employer. Email: andrea.thompson@sciam.com Signal: @AndreaT.95
assoc prof, uc irvine cogsci & LPS: perception+metacognition+subjective experience, fMRI+models+AI
phil sci, education for all
2026: UCI-->UCL!
prez+co-founder, neuromatch.io
fellow, CIFAR brain mind consciousness
meganakpeters.org
she/her · views mine
Associate Research Scientist at Center for Theoretical Neuroscience Zuckerman Mind Brain Behavior Institute
Kavli Institute for Brain Science
Columbia University Irving Medical Center
K99-R00 scholar @NIH @NatEyeInstitute
https://toosi.github.io/
Eye-tracking, Pupillometry, Word nerd, Learning and Memory, Language, R, Stats, Quant, Director Human Neuroscience Lab @bostoncollege www.drjasongeller.com
Computational neuroscience. PhD student with Friedemann Zenke at FMI, Basel. B.Sc. and M.Sc. Cognitive Science at the University of Osnabrück
Computational neuroscientist, NeuroAI lab @EPFL
Adventurer of life, reader, baker, artist
Neuro Occupational Therapist | Certified Neuro Specialist | Trauma-Informed Educator | perpetual learner
Research: music as a modality, acute care, antepartum, leadership
Grants: MHSP, Hurd Foundation
M.A. student at Isik Lab, JHU
AI @ OpenAI, Tesla, Stanford
Postdoc at UCSB studying Machine Learning, Computational Neuroscience, and NeuroAI