
Michael Beyeler

@mbeyeler.bsky.social

๐Ÿ‘๏ธ๐Ÿง ๐Ÿ–ฅ๏ธ๐Ÿงช๐Ÿค– Associate Professor in @ucsb-cs.bsky.social and Psychological & Brain Sciences at @ucsantabarbara.bsky.social. PI of @bionicvisionlab.org. #BionicVision #Blindness #LowVision #VisionScience #CompNeuro #NeuroTech #NeuroAI

1,093 Followers  |  429 Following  |  439 Posts  |  Joined: 04.10.2023

Latest posts by mbeyeler.bsky.social on Bluesky

Screenshot of SfN's grad school fair - highlighted is Booth 66

DYNS logo, with text: An interdisciplinary program focused on the study of how the nervous system generates perception, behavior and cognition.

Curious about the Dynamical Neuroscience #PhD Program at @ucsantabarbara.bsky.social? Come find us at the #SfN2025 Grad School Fair (Booth 66)! 🧠🧪

More info at www.dyns.ucsb.edu.

#AcademicSky #Neuroscience #compneurosky

16.11.2025 19:08 – 👍 2    🔁 0    💬 0    📌 0

If you're at #SfN25, come chat with us about subretinal implants this afternoon! Poster 122.22, presented by PhD student Emily Joyce

16.11.2025 19:00 – 👍 2    🔁 0    💬 0    📌 0

I will be presenting the poster "Human-in-the-loop optimisation for efficient intracortical microstimulation temporal patterns in visual cortex" at the Early Career Poster Session #SfN as a TPDA awardee!

Nov. 15, 2025
18:45–20:45 (PT)
Poster: G5
SDCC Halls C–H

Come discuss!

15.11.2025 22:34 – 👍 3    🔁 1    💬 0    📌 0
University of California faculty push back against Big Brother cybersecurity mandate
School officials defend software as bulwark against ransomware, but professors fear potential surveillance of their devices

"In February 2024, thenโ€“UC President Michael Drake announced all employee computers connected to university networks would be required to install Trellix by May 2025. Campuses failing to comply would face penalties of up to $500,000 per ... incident."

www.science.org/content/arti...

25.10.2025 01:21 – 👍 8    🔁 2    💬 2    📌 0

Thank you so much for this tip! Infuriating change

13.10.2025 03:02 – 👍 0    🔁 0    💬 0    📌 0

Good eye! You're right, my spicy summary skipped over the nuance. Color was a free-form response, which we later binned into 4 categories for modeling. Chance level isn't 25% but adjusted for class imbalance (majority class frequency). Definitely preliminary re: "perception", but beats stimulus-only!
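In code, that imbalance-adjusted chance level is just the frequency of the majority class. A minimal sketch with made-up counts (the real report distribution is in the paper, not here):

```python
import numpy as np

def chance_level(labels):
    """Imbalance-adjusted chance: accuracy of always predicting
    the most frequent class (majority class frequency)."""
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / counts.sum()

# Hypothetical color reports binned into 4 categories:
labels = np.array(["white"] * 55 + ["yellow"] * 25 + ["blue"] * 12 + ["gray"] * 8)
print(chance_level(labels))  # 0.55, not the naive 1/4 = 0.25
```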

27.09.2025 23:53 – 👍 1    🔁 0    💬 0    📌 0

Thanks! I hear you, that thought has crossed my mind, too. But IP & money have already held this field back too long... This work was funded by public grants, and our philosophy is to keep data + code open so others can build on it. Still, watch us get no credit & me eat my words in 5-10 years 😅

27.09.2025 23:48 – 👍 2    🔁 0    💬 0    📌 0

Together, this argues for closed-loop visual prostheses:

📡 Record neural responses
⚡ Adapt stimulation in real time
👁️ Optimize for perceptual outcomes

This work was only possible through a tight collaboration between 3 labs across @ethz.ch, @umh.es, and @ucsantabarbara.bsky.social!
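For intuition, here is a toy numerical sketch of that record → adapt → optimize loop. This is not the paper's code: the "brain" below is a stand-in noisy linear system, and all sizes and update rules are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_elec = n_chan = 96                       # e.g., a Utah array

# Hypothetical stand-in for the real system: a noisy linear map from
# stimulation amplitudes to recorded multi-unit activity.
W = rng.normal(scale=0.1, size=(n_chan, n_elec))

def stimulate_and_record(stim):
    return W @ stim + rng.normal(scale=0.01, size=n_chan)

target = rng.normal(size=n_chan)           # desired activity pattern
stim = np.zeros(n_elec)
for _ in range(200):
    response = stimulate_and_record(stim)  # 📡 record neural responses
    error = target - response
    stim += 0.05 * (W.T @ error)           # ⚡ adapt stimulation in real time
    stim = np.clip(stim, 0, None)          # current amplitudes stay non-negative

mse = np.mean((target - stimulate_and_record(stim)) ** 2)
print(f"final MSE: {mse:.3f}")             # 👁️ how close did we get to the target?
```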

27.09.2025 02:52 – 👍 4    🔁 0    💬 0    📌 0
Three bar charts show how well different models predict perception of detection, brightness, and color. Using only the stimulation parameters performs worst. Including brain activity recordings, especially pre-stimulus activity, makes predictions much better across all three perceptual outcomes.

And here's the kicker: 🚨

If you try to predict perception from stimulation parameters alone, you're basically at chance.

But if you use neural responses, suddenly you can decode detection, brightness, and color with high accuracy.
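A toy version of that comparison (purely illustrative synthetic data, not the study's analysis): when the label depends on neural activity rather than deterministically on the stimulus, a stimulus-only decoder sits at chance while a neural decoder does well.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 500
stim = rng.normal(size=(n_trials, 8))      # stimulation parameters
neural = rng.normal(size=(n_trials, 96))   # evoked responses (synthetic)
detected = (neural[:, :5].sum(axis=1) > 0).astype(int)  # toy "detection" label

for name, X in [("stim-only", stim), ("neural", neural)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, detected, cv=5)
    print(f"{name:9s} decoder accuracy: {acc.mean():.2f}")
```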

27.09.2025 02:52 – 👍 3    🔁 0    💬 2    📌 0
Figure showing the ability of different methods to reproduce target neural activity patterns and the limits of generating synthetic responses. Left: A target neural response (bottom-up heatmap) is compared to recorded responses produced by linear, inverse neural network, and gradient optimization methods. In this example, the inverse neural network gives the closest match (MSE 0.74) compared to linear (MSE 1.44) and gradient (MSE 1.49). Center: A bar plot of mean squared error across all methods shows inverse NN and gradient consistently outperform linear and dictionary approaches. Right: A scatterplot shows that prediction error increases with distance from the neural manifold; synthetic targets (red) have higher error than natural targets (blue), illustrating that the system best reproduces responses within the brain's natural activity space.

We pushed further: Could we make V1 produce new, arbitrary activity patterns?

Yes ... but control breaks down the farther you stray from the brain's natural manifold.

Still, our methods required lower currents and evoked more stable percepts.
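One plausible way to operationalize "distance from the manifold" (a sketch of the idea, not necessarily the paper's exact metric): fit a low-dimensional subspace to natural responses, then score each target by its reconstruction residual.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic "natural" responses living near a 10-D subspace of a 96-D space
natural = rng.normal(size=(1000, 10)) @ rng.normal(size=(10, 96))
pca = PCA(n_components=10).fit(natural)

def off_manifold_distance(target):
    """Residual after projecting onto the natural-response subspace."""
    recon = pca.inverse_transform(pca.transform(target[None]))[0]
    return np.linalg.norm(target - recon)

print(off_manifold_distance(natural[0]))           # ~0: on-manifold target
print(off_manifold_distance(rng.normal(size=96)))  # large: synthetic target
```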

27.09.2025 02:52 – 👍 4    🔁 0    💬 1    📌 0
Figure comparing methods for shaping neural activity to match a desired target response. Left: the target response is shown as a heatmap. Three methods (linear, inverse neural network, and gradient optimization) produce different stimulation patterns (top row) and recorded neural responses (bottom row). Gradient optimization and the inverse neural network yield recorded responses that more closely match the target, with much lower error (MSE 0.35 and 0.50) than the linear method (MSE 3.28). Right: a bar plot of mean squared error across methods shows both gradient and inverse NN outperform linear, dictionary, and 1-to-1 mapping, approaching the consistency of replaying the original stimulus.

Prediction is only step 1. We then inverted the forward model with 2 strategies:

1๏ธโƒฃ Gradient-based optimizer (precise, but slow)
2๏ธโƒฃ Inverse neural net (fast, real-time)

Both shaped neural responses far better than conventional 1-to-1 mapping
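A rough sketch of strategy 1️⃣ (PyTorch; the architecture and sizes are hypothetical stand-ins, not the preprint's actual models): freeze the trained forward model and run gradient descent on the stimulation pattern itself.

```python
import torch

# Hypothetical stand-in for the trained forward model f(stim, baseline) -> response
forward = torch.nn.Sequential(
    torch.nn.Linear(96 + 96, 256), torch.nn.ReLU(), torch.nn.Linear(256, 96)
)
forward.requires_grad_(False)  # frozen after training

def invert_by_gradient(target, baseline, n_steps=500, lr=0.05):
    """Strategy 1: optimize the stimulation pattern through the frozen
    forward model. Precise, but hundreds of steps per target = slow."""
    stim = torch.zeros(96, requires_grad=True)
    opt = torch.optim.Adam([stim], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = torch.mean((forward(torch.cat([stim, baseline])) - target) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():
            stim.clamp_(min=0)  # current amplitudes must be non-negative
    return stim.detach()

# Strategy 2 amortizes this cost: train a second network g(target, baseline) -> stim
# once, then generate patterns with a single forward pass in real time.
print(invert_by_gradient(torch.zeros(96), torch.zeros(96)).shape)
```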

27.09.2025 02:52 – 👍 4    🔁 0    💬 1    📌 0
Figure comparing predicted and true neural responses to electrical stimulation. Left panels show two example stimulation patterns (top), predicted neural responses by the forward neural network (middle), and the actual recorded responses (bottom). The predicted responses closely match the true responses. Right panels show bar plots comparing model performance across methods. The forward neural network (last bar) achieves the lowest error (MSE) and highest explained variance (R²), significantly outperforming dictionary-based, linear, and 1-to-1 mapping approaches.

We trained a deep neural network ("forward model") to predict neural responses from stimulation and baseline brain state.

💡 Key insight: accounting for pre-stimulus activity drastically improved predictions across sessions.

This makes the model robust to day-to-day drift.
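In code, the key design choice might look like this (a hypothetical sketch, not the actual architecture): the pre-stimulus brain state enters as an input alongside the stimulation pattern.

```python
import torch

class ForwardModel(torch.nn.Module):
    """Sketch: predict evoked activity across channels from the
    stimulation pattern AND the pre-stimulus brain state."""
    def __init__(self, n_elec=96, n_chan=96, hidden=256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_elec + n_chan, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_chan),
        )

    def forward(self, stim, baseline):
        # Conditioning on baseline lets one model absorb day-to-day drift:
        # the same stimulus can map to different responses per brain state.
        return self.net(torch.cat([stim, baseline], dim=-1))

model = ForwardModel()
pred = model(torch.rand(4, 96), torch.rand(4, 96))  # batch of 4 trials
print(pred.shape)  # torch.Size([4, 96])
```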

27.09.2025 02:52 – 👍 3    🔁 0    💬 2    📌 0
Diagram of the experimental setup for measuring electrically evoked neural activity. A stimulation pattern is chosen across electrodes on a Utah array (left). Selected electrodes deliver 167 ms trains of 50 pulses at 300 Hz (middle left), sent via stimulator and amplifier into the visual cortex of a participant (middle). Neural signals are recorded before and after stimulation across all channels, producing multi-unit activity traces (MUAe). The difference between pre- and post-stimulation activity (ΔMUAe) is computed (middle right) and visualized as a heatmap across electrodes, showing localized increases in neural responses (right).

Many in #BionicVision have tried to map stimulation → perception, but cortical responses are nonlinear and drift day to day.

So we turned to 🧠 data: >6,000 stim-response pairs over 4 months in a blind volunteer, letting a model learn the rules from the data.

27.09.2025 02:52 – 👍 3    🔁 0    💬 1    📌 0
Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested in both computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.

๐Ÿ‘๏ธ๐Ÿง  New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!

TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.

www.biorxiv.org/content/10.1...

27.09.2025 02:52 – 👍 93    🔁 25    💬 2    📌 6
NSF Graduate Research Fellowship Program (GRFP)

NSF GRFP is out 2.5 months late w/key changes

1. 2nd year graduate students not eligible.

2. "alignment with Administration priorities"

3. Unlike prior years, they DO NOT specify the expected number of awards... that is a BIG problem.

a brief 🧵 w/receipts

www.nsf.gov/funding/oppo...

27.09.2025 00:04 – 👍 94    🔁 80    💬 3    📌 6
Mouse vs. AI: A Neuroethological Benchmark for Visual Robustness and Neural Alignment
Visual robustness under real-world conditions remains a critical bottleneck for modern reinforcement learning agents. In contrast, biological systems such as mice show remarkable resilience to environ...

🚨 Our NeurIPS 2025 competition Mouse vs. AI is LIVE!

We combine a visual navigation task + large-scale mouse neural data to test what makes visual RL agents robust and brain-like.

Top teams: featured at NeurIPS + co-author our summary paper. Join the challenge!

Whitepaper: arxiv.org/abs/2509.14446

22.09.2025 23:13 – 👍 38    🔁 20    💬 3    📌 2
Thrilling progress in brain-computer interfaces from UC labs
UC researchers and the patients they work with are showing the world what's possible when the human mind and advanced computers meet.

As federal research funding faces steep cuts, UC scientists are pushing brain-computer interfaces forward: restoring speech after ALS, easing Parkinson's symptoms, and improving bionic vision with AI (that's us 👋 at @ucsantabarbara.bsky.social).

🧠 www.universityofcalifornia.edu/news/thrilli...

17.09.2025 17:59 – 👍 3    🔁 0    💬 0    📌 0

Curious though - many of the orgs leading this effort don't seem to be on @bsky.app yet... Would love to see more #Blind, #Accessibility, and #DisabilityJustice voices here!

31.08.2025 00:49 – 👍 0    🔁 0    💬 0    📌 0
World Blindness Summit & WBU General Assembly - World Blind Union

Excited to be heading to São Paulo for the World Blindness Summit 2025! 🌎✨

Looking forward to learning from/connecting with blindness organizations from around the globe.

👉 wbu.ngo/events/world...

#WorldBlindnessSummit #Inclusion #Accessibility #Blindness #DisabilityRights

31.08.2025 00:42 – 👍 2    🔁 1    💬 1    📌 0
Reviewer Code of Conduct - NeurIPS 2025

I appreciate the effort to improve the review process! Wondering what's being done to address poor-quality reviews (the "too many paragraphs in Related Work" → Weak Reject ones)... e.g. #NeurIPS added strong steps to uphold review integrity (neurips.cc/Conferences/...) that #CHI2026 could learn from

09.08.2025 02:00 – 👍 3    🔁 0    💬 0    📌 0
Epic collage of Bionic Vision Lab activities. From top to bottom, left to right:
A) Up-to-date group picture
B) BVL at Dr. Beyeler's Plous Award celebration (2025)
C) BVL at The Eye & The Chip (2023)
D/F) Dr. Aiwen Xu and Justin Kasowski getting hooded at the UCSB commencement ceremony
E) BVL logo cake created by Tori LeVier
G) Dr. Beyeler with symposium speakers at Optica FVM (2023)
H, I, M, N) Students presenting conference posters/talks
J) Participant scanning a food item (ominous pizza study)
K) Galen Pogoncheff in VR
L) Argus II user drawing a phosphene
O) Prof. Beyeler demoing BionicVisionXR
P) First lab hike (ca. 2021)
Q) Statue for winner of the Mac'n'Cheese competition (ca. 2022)
R) BVL at Club Vision
S) Students drifting off into the sunset on a floating couch after a hard day's work


Excited to share that I've been promoted to Associate Professor with tenure at UCSB!

Grateful to my mentors, students, and funders who shaped this journey and to @ucsantabarbara.bsky.social for giving the Bionic Vision Lab a home!

Full post: www.linkedin.com/posts/michae...

02.08.2025 18:12 – 👍 24    🔁 5    💬 1    📌 0
Bionic Vision - Advancing Sight Restoration
Discover cutting-edge research, events, and insights in bionic vision and sight restoration.

๐Ÿ‘๏ธโšก I spoke with Dr. Jiayi Zhang about her Science paper on tellurium nanowire retinal implantsโ€”restoring vision and extending it into the infrared, no external power required.

New materials, new spectrum, new possibilities.
🔗 www.bionic-vision.org/research-spo...

#BionicVision #NeuroTech

18.07.2025 00:11 – 👍 4    🔁 1    💬 0    📌 0
Program – EMBC 2025

At #EMBC2025? Come check out two talks from my lab in tomorrow's Sensory Neuroprostheses session!

🗓️ Thurs July 17 · 8-10AM · Room B3 M3-4
🧠 Efficient threshold estimation
🧑‍🔬 Deep human-in-the-loop optimization

🔗 embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS

16.07.2025 16:54 – 👍 3    🔁 1    💬 0    📌 0
Program – EMBC 2025

๐Ÿ‘๏ธโšก Headed to #EMBC2025? Catch two of our labโ€™s talks on optimizing retinal implants!

๐Ÿ“ Sensory Neuroprostheses
๐Ÿ—“๏ธ Thurs July 17 ยท 8-10AM ยท Room B3 M3-4
๐Ÿง  Efficient threshold estimation
๐Ÿง‘๐Ÿ”ฌ Deep human-in-the-loop optimization

๐Ÿ”— embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS #Retina

13.07.2025 17:24 – 👍 1    🔁 2    💬 1    📌 0
A group of surgeons in blue scrubs and surgical masks are performing a procedure in a clinical wetlab setting. Dr. Muqit (seated) operates under a ZEISS ARTEVO® 850 surgical microscope, with other doctors observing and assisting nearby. A large monitor and medical equipment are visible in the background, along with surgical instruments on a sterile table. The environment is dimly lit, with overhead lights providing focused illumination on the surgical field.

A surgeon in blue scrubs, surgical gloves, and a hair cover is seated and operating under a ZEISS ARTEVO® 850 surgical microscope. He is performing a delicate procedure on a blue surgical model using forceps, while another masked assistant supports from behind. The operating table is covered with a sterile green drape, and medical tubing and instruments are visible around the setup. The environment is dimly lit, highlighting the precision of the surgical training.

A wide view of a surgical training room shows multiple surgeons in blue scrubs and masks working around a ZEISS ARTEVO® 850 digital microscope. One seated surgeon is actively operating on a subretinal surgery model, while others observe and assist. A large overhead visualization arm and a table with imaging and surgical equipment are prominently visible. The lighting is dim except for the illuminated surgical field, emphasizing the precision and focus of the wetlab environment.

Two surgeons in blue scrubs and surgical caps are seated at a ZEISS ARTEVO® 850 digital microscope in a dimly lit operating room. A large monitor displays a high-resolution OCT scan, showing detailed cross-sections of ocular tissue. A green surgical drape, tubing, and imaging equipment are visible around the operating station. The scene highlights the integration of real-time imaging in subretinal surgical training.

๐Ÿ”ฌ๐Ÿ‘๏ธ The next-gen #PRIMA chip in action: subretinal surgery training in ๐Ÿ‡ฉ๐Ÿ‡ช with the Science Corps team, Prof. Yannick Le Mer, and Prof. Dr. Lars-Olof Hattenbach.

3D digital visualization + iOCT = a powerful combo for precision subretinal implant work.
#BionicVision #NeuroTech

๐Ÿ“ธ via Dr. Mahi Muqit

12.07.2025 15:33 – 👍 1    🔁 1    💬 0    📌 0

Thrilled to see this one hit the presses! 🎉

One of the final gems from Dr. Justin Kasowski's dissertation, showing how checkerboard rastering boosts perceptual clarity in simulated prosthetic vision. 👁️⚡️

#BionicVision #NeuroTech

10.07.2025 17:21 – 👍 3    🔁 1    💬 0    📌 0
Optica Fall Vision Meeting
Oct 2-5, 2025 · University of Minnesota, Twin Cities, MN

๐Ÿ‘๏ธ๐Ÿง  Itโ€™s not too late to submit your abstract to Opticaโ€™s Fall Vision Meeting (FVM) 2025!
๐Ÿ“ Minneapolis/St Paul, Oct 2โ€“5
๐Ÿง‘โ€๐Ÿซ Featuring talks by Susana Marcos, Austin Roorda, and Gordon Legge
๐Ÿท Kickoff at the CMRR!

๐Ÿ—“๏ธ Abstracts due: Aug 8
๐Ÿ”— www.osafallvisionmeeting.org

#VisionScience #VisionResearch

09.07.2025 16:58 – 👍 1    🔁 1    💬 0    📌 0

I have fond memories from a summer internship there - such a unique place, both geographically & intellectually. Sad to see it go

06.07.2025 05:25 – 👍 1    🔁 0    💬 0    📌 0
Science Submits CE Mark Application for PRIMA Retinal Implant – A Critical Step Towards Making It Available To Patients | Science Corporation
Science Corporation is a clinical-stage medical technology company.

๐Ÿ‘๏ธ๐Ÿง  Big step forward for #BionicVision: Science has submitted a CE mark application for the PRIMA retinal implant. If approved, it would be the first #NeuroTech to treat geographic atrophy, a late-stage form of age-related macular degeneration #AMD.

🔗 science.xyz/news/prima-c...

24.06.2025 20:24 – 👍 2    🔁 2    💬 0    📌 0
