
Michael Beyeler

@mbeyeler.bsky.social

πŸ‘οΈπŸ§ πŸ–₯️πŸ§ͺπŸ€– Associate Professor in @ucsb-cs.bsky.social and Psychological & Brain Sciences at @ucsantabarbara.bsky.social. PI of @bionicvisionlab.org. #BionicVision #Blindness #LowVision #VisionScience #CompNeuro #NeuroTech #NeuroAI

1,113 Followers  |  436 Following  |  447 Posts  |  Joined: 04.10.2023

Posts by Michael Beyeler (@mbeyeler.bsky.social)

BIRD: Behavior Induction via Representation-structure Distillation Human-aligned deep learning models exhibit behaviors consistent with human values, such as robustness, fairness, and honesty. Transferring these behavioral properties to models trained on different ta...

Most transfer learning assumes shared data, tasks, or domains.

BIRD shows you can transfer behavior itself even when those assumptions break.

All details here:
arxiv.org/abs/2505.23933

#KnowledgeDistillation #Robustness #MachineLearning #AIResearch #ResponsibleAI

08.02.2026 17:35 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Two-panel schematic illustrating the BIRD framework. Left panel shows independent pre-training of a teacher and a student network on different datasets, each optimized with its own task loss. Right panel shows representation-structure distillation: selected intermediate layers from teacher and student are compared via a representation loss, which aligns the geometry of their internal activations while the student is still trained on its own task loss. A snowflake icon indicates the teacher is frozen. The diagram emphasizes that behavior is transferred by aligning internal representation structure rather than outputs or shared data.

We introduce BIRD: Behavior Induction via Representation-structure Distillation.

Instead of transferring outputs, BIRD aligns the geometry of internal representations between teacher and student, enabling weak β†’ strong generalization.
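Conceptually, aligning representation geometry can be sketched by matching the pairwise-similarity structure of teacher and student activations. A minimal NumPy sketch; the cosine-similarity choice and MSE penalty here are illustrative assumptions, not necessarily the paper's exact objective:

```python
import numpy as np

def similarity_matrix(feats):
    """Pairwise cosine-similarity structure of a (batch, dim) activation matrix."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return normed @ normed.T

def representation_structure_loss(student_feats, teacher_feats):
    """Mismatch between the two similarity matrices: aligns representational
    geometry rather than outputs, so student and teacher feature dimensions
    may differ; only the batch size must match."""
    diff = similarity_matrix(student_feats) - similarity_matrix(teacher_feats)
    return np.mean(diff ** 2)

# student training objective (teacher frozen):
#   loss = task_loss + lam * representation_structure_loss(student_h, teacher_h)
```

Because the loss compares batch-by-batch similarity matrices rather than raw features, it never requires the two networks to share an architecture or embedding size.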

#KnowledgeDistillation #TransferLearning #Robustness

08.02.2026 17:35 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
BIRD: Behavior Induction via Representation-structure Distillation Human-aligned deep learning models exhibit behaviors consistent with human values, such as robustness, fairness, and honesty. Transferring these behavioral properties to models trained on different ta...

What if your strongest #ML model is brittle at one thing that really matters?

Can it learn that behavior from a weaker but specialist model, even when they share no task, no data, and no architecture?

My student Galen Pogoncheff explored this in our #ICLR2026 paper:

πŸ‘‰ arxiv.org/abs/2505.23933

08.02.2026 17:35 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Flyer for the UCSB CRML Agentic AI Summit 2026. Friday, January 23, 2026. 8:00 AM in Henley Hall 1010.

Keynote speakers: Sujith Ravi (VP GenAI, Oracle), Jiantao Jiao (Director AI, Nvidia), Murphy Niu (UCSB), Diyi Yang (Stanford), Daniel Martin (UCSB).

Industry talks: Ang Li (CEO, Simular), Zackary Glazewski (Founding AI Engineer, ChipAgents), Eser Kandogan (Principal Research Engineer, Megagon Labs)

AI faculty highlights: Eric Wang, Yuheng Bu, Michael Beyeler, James Preiss, Miguel Eckstein


Join us Jan 23 for the inaugural CRML Agentic AI Summit at @ucsb.bsky.social.

Researchers, industry, and students exploring how agentic AI drives discovery and real-world impact.

Free to attend, limited space: ml.ucsb.edu/events/summi...

#AgenticAI #ResponsibleAI #AIResearch

06.01.2026 00:04 β€” πŸ‘ 3    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

As #neurotechnology scales toward high-resolution implantable devices, new challenges emerge: how will users calibrate visual implants with thousands of channels?

Learn how by reading our paper, co-first-authored with Dr. Xing Chen, now published at Brain Stimulation!

tinyurl.com/Large-scale-...

31.12.2025 17:32 β€” πŸ‘ 7    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

Can your AI beat a mouse? This is happening Sunday! NeurIPS workshop, 11 to 2 California time, on Zoom: robustforaging.github.io

@mbeyeler.bsky.social
@sinzlab.bsky.social
@ninamiolane.bsky.social
@crisniell.bsky.social
@mariusschneider.bsky.social
J. Canzano, Y. Hou, J. Peng, et al.

#NeurIPS2025

06.12.2025 23:09 β€” πŸ‘ 14    πŸ” 8    πŸ’¬ 0    πŸ“Œ 0

Grateful to the organizing team: @mariusschneider.bsky.social, @jingpeng.bsky.social, Y Hou, L Herbelin, J Canzano, @spencerlaveresmith.bsky.social.

πŸ‘πŸ™πŸ™Œ Special thanks to MS, YH, JP for daily work behind the scenes (at the expense of their own research). The challenge would not exist without them!

26.11.2025 19:56 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Headshots, names, and talk titles for the 3 keynote speakers:
1. Fabian Sinz, University of TΓΌbingen: Foundation models for mouse vision
2. Nina Miolane, UC Santa Barbara: Geometric approaches to neural activity prediction
3. Cris Niell, University of Oregon: Visual processing in freely moving mice


Next: Join our NeurIPS workshop on Dec 7, 2025, 11 to 2 PT on Zoom!

Hear from top competitors and our 3 keynote speakers:
- @sinzlab.bsky.social
- @ninamiolane.bsky.social
- @crisniell.bsky.social

More info: robustforaging.github.io/workshop

#NeurIPS2025 #Neuroscience #AI

26.11.2025 19:56 β€” πŸ‘ 7    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0
Robust Foraging Competition Can your AI visually navigate better than a mouse?

Top teams:

πŸ₯‡ 371333_HCMUS_TheFangs (ASR 0.968, MSR 0.940, Score 0.954)
πŸ₯ˆ 417856_alluding123 (ASR 0.864, MSR 0.650, Score 0.757)
πŸ₯‰ 366999_pingsheng-li (ASR 0.802, MSR 0.670, Score 0.736)

Full leaderboard: robustforaging.github.io/leaderboard/
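For reference, the posted scores are consistent with a simple unweighted mean of ASR and MSR (an inference from the numbers above, not an official scoring statement):

```python
def overall_score(asr, msr):
    # assumed scoring rule inferred from the posted numbers: mean of the two rates
    return round((asr + msr) / 2, 3)

# top three teams: (ASR, MSR) -> Score
print(overall_score(0.968, 0.940))  # 0.954
```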

#NeurIPS2025 #Neuroscience #AI

26.11.2025 19:56 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Robust Foraging Competition Can your AI visually navigate better than a mouse?

πŸŽ‰ Mouse vs AI #NeurIPS2025 Challenge

The first year was a great success:
πŸ€– 290 submissions
πŸ‘₯ 22 teams
🌎 7 countries
robustforaging.github.io

A huge thank you to all who participated!πŸ‘

This was our first attempt at a global competition built around real mouse behavior and visual robustness.

26.11.2025 19:56 β€” πŸ‘ 9    πŸ” 5    πŸ’¬ 1    πŸ“Œ 0

Presenting β€œHuman in the loop optimisation for efficient intracortical microstimulation temporal patterns in visual cortex” again this afternoon at #SfN!!

Come discuss!

An amazing collaboration between the Biomedical Neuroengineering group at UMH and @bionicvisionlab.org

19.11.2025 18:02 β€” πŸ‘ 5    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
Screenshot of SfN's grad school fair - highlighted is Booth 66

DYNS logo, with text: An interdisciplinary program focused on the study of how the nervous system generates perception, behavior and cognition.

Curious about the Dynamical Neuroscience #PhD Program at @ucsantabarbara.bsky.social? Come find us at the #SfN2025 Grad School Fair (Booth 66)! 🧠πŸ§ͺ

More info at www.dyns.ucsb.edu.

#AcademicSky #Neuroscience #compneurosky

16.11.2025 19:08 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

If you're at #SfN25, come chat with us about subretinal implants this afternoon! Poster 122.22, presented by PhD student Emily Joyce

16.11.2025 19:00 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I will be presenting the poster β€œHuman-in-the-loop optimisation for efficient intracortical microstimulation temporal patterns in visual cortex” at the Early Career Poster Session #SfN as a TPDA awardee!

Nov. 15, 2025
18:45–20:45 (PT)
Poster: G5
SDCC Halls C–H

Come discuss!

15.11.2025 22:34 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
University of California faculty push back against Big Brother cybersecurity mandate School officials defend software as bulwark against ransomware, but professors fear potential surveillance of their devices

"In February 2024, then–UC President Michael Drake announced all employee computers connected to university networks would be required to install Trellix by May 2025. Campuses failing to comply would face penalties of up to $500,000 per ... incident."

www.science.org/content/arti...

25.10.2025 01:21 β€” πŸ‘ 8    πŸ” 2    πŸ’¬ 2    πŸ“Œ 0

Thank you so much for this tip! Infuriating change

13.10.2025 03:02 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Good eye! You’re right, my spicy summary skipped over the nuance. Color was a free-form response, which we later binned into 4 categories for modeling. Chance level isn’t 25% but adjusted for class imbalance (majority class frequency). Definitely preliminary re:β€œperception”, but beats stimulus-only!
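The adjusted chance level described here, majority-class frequency rather than 1/k, can be computed directly (the label counts below are made up for illustration):

```python
import numpy as np

def majority_class_chance(labels):
    """Chance level under class imbalance: accuracy of always predicting
    the most frequent label (rather than 1/num_classes)."""
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / counts.sum()

# hypothetical imbalanced 4-way color labels (made up for illustration)
labels = ["red"] * 50 + ["blue"] * 25 + ["green"] * 15 + ["white"] * 10
print(majority_class_chance(labels))  # 0.5, not 0.25
```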

27.09.2025 23:53 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Thanks! I hear you, that thought has crossed my mind, too. But IP & money have already held this field back too long... This work was funded by public grants, and our philosophy is to keep data + code open so others can build on it. Still, watch us get no credit & me eat my words in 5-10 years πŸ˜…

27.09.2025 23:48 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Together, this argues for closed-loop visual prostheses:

πŸ“‘ Record neural responses
⚑ Adapt stimulation in real-time
πŸ‘οΈ Optimize for perceptual outcomes

This work was only possible through a tight collaboration between 3 labs across @ethz.ch, @umh.es, and @ucsantabarbara.bsky.social!

27.09.2025 02:52 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Three bar charts show how well different models predict perception of detection, brightness, and color. Using only the stimulation parameters performs worst. Including brain activity recordingsβ€”especially pre-stimulus activityβ€”makes predictions much better across all three perceptual outcomes.

And here’s the kicker: 🚨

If you try to predict perception from stimulation parameters alone, you’re basically at chance.

But if you use neural responses, suddenly you can decode detection, brightness, and color with high accuracy.

27.09.2025 02:52 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0
Figure showing the ability of different methods to reproduce target neural activity patterns and the limits of generating synthetic responses. Left: A target neural response (bottom-up heatmap) is compared to recorded responses produced by linear, inverse neural network, and gradient optimization methods. In this example, the inverse neural network gives the closest match (MSE 0.74) compared to linear (MSE 1.44) and gradient (MSE 1.49). Center: A bar plot of mean squared error across all methods shows inverse NN and gradient consistently outperform linear and dictionary approaches. Right: A scatterplot shows that prediction error increases with distance from the neural manifold; synthetic targets (red) have higher error than natural targets (blue), illustrating that the system best reproduces responses within the brain’s natural activity space.

We pushed further: Could we make V1 produce new, arbitrary activity patterns?

Yes ... but control breaks down the farther you stray from the brain’s natural manifold.

Still, our methods required lower currents and evoked more stable percepts.

27.09.2025 02:52 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Figure comparing methods for shaping neural activity to match a desired target response. Left: the target response is shown as a heatmap. Three methodsβ€”linear, inverse neural network, and gradient optimizationβ€”produce different stimulation patterns (top row) and recorded neural responses (bottom row). Gradient optimization and the inverse neural network yield recorded responses that more closely match the target, with much lower error (MSE 0.35 and 0.50) than the linear method (MSE 3.28). Right: a bar plot of mean squared error across methods shows both gradient and inverse NN outperform linear, dictionary, and 1-to-1 mapping, approaching the consistency of replaying the original stimulus.

Prediction is only step 1. We then inverted the forward model with 2 strategies:

1️⃣ Gradient-based optimizer (precise, but slow)
2️⃣ Inverse neural net (fast, real-time)

Both shaped neural responses far better than conventional 1-to-1 mapping
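The gradient-based strategy can be sketched with a toy linear forward model standing in for the trained network (the matrix W, learning rate, and step count below are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(96, 64))      # toy linear "forward model": stim -> response
target = W @ rng.normal(size=64)   # a reachable target neural response

def invert_by_gradient(target, steps=5000, lr=0.1):
    """Strategy 1: gradient descent on the stimulation parameters through a
    frozen forward model (precise but iterative, hence slow). Strategy 2 would
    instead train a separate inverse network offline for one-shot inversion."""
    stim = np.zeros(W.shape[1])
    for _ in range(steps):
        residual = W @ stim - target
        stim -= lr * 2 * W.T @ residual / target.size  # gradient of MSE w.r.t. stim
        # a real system would also clip stim to safe charge/current limits here
    return stim
```

The trade-off in the post falls out of this structure: the optimizer loops per target, while an inverse network amortizes that loop into a single forward pass.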

27.09.2025 02:52 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Figure comparing predicted and true neural responses to electrical stimulation. Left panels show two example stimulation patterns (top), predicted neural responses by the forward neural network (middle), and the actual recorded responses (bottom). The predicted responses closely match the true responses. Right panels show bar plots comparing model performance across methods. The forward neural network (last bar) achieves the lowest error (MSE) and highest explained variance (RΒ²), significantly outperforming dictionary-based, linear, and 1-to-1 mapping approaches.

We trained a deep neural network (β€œforward model”) to predict neural responses from stimulation and baseline brain state.

πŸ’‘ Key insight: accounting for pre-stimulus activity drastically improved predictions across sessions.

This makes the model robust to day-to-day drift.
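A toy version of that ablation, with a linear stand-in for the deep forward model (all shapes, the linear form, and the noise level are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_elec = 400, 64
stim = rng.normal(size=(n_trials, n_elec))  # stimulation parameters per trial
pre = rng.normal(size=(n_trials, n_elec))   # pre-stimulus ("brain state") activity
true_W = rng.normal(size=(2 * n_elec, n_elec))
X = np.concatenate([stim, pre], axis=1)     # model input: stimulation + baseline
resp = X @ true_W + 0.1 * rng.normal(size=(n_trials, n_elec))

# forward model conditioned on pre-stimulus state vs. stimulation-only ablation
W_full, *_ = np.linalg.lstsq(X, resp, rcond=None)
W_stim, *_ = np.linalg.lstsq(stim, resp, rcond=None)

def r2(y, y_hat):
    """Fraction of variance explained."""
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean(axis=0)) ** 2)
```

When responses genuinely depend on baseline state, the stimulation-only model cannot explain that share of the variance, which is the drift-robustness argument in miniature.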

27.09.2025 02:52 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0
Diagram of the experimental setup for measuring electrically evoked neural activity. A stimulation pattern is chosen across electrodes on a Utah array (left). Selected electrodes deliver 167 ms trains of 50 pulses at 300 Hz (middle left), sent via stimulator and amplifier into the visual cortex of a participant (middle). Neural signals are recorded before and after stimulation across all channels, producing multi-unit activity traces (MUAe). The difference between pre- and post-stimulation activity (Ξ”MUAe) is computed (middle right) and visualized as a heatmap across electrodes, showing localized increases in neural responses (right).

Many in #BionicVision have tried to map stimulation β†’ perception, but cortical responses are nonlinear and drift day to day.

So we turned to 🧠 data: >6,000 stim-response pairs over 4 months in a blind volunteer, letting a model learn the rules from the data.
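The stim-response quantity here, mean multi-unit activity after stimulation minus before, can be sketched as follows (window lengths and sampling rate are illustrative, not the study's actual parameters):

```python
import numpy as np

def delta_muae(muae, stim_onset, fs, pre_win=0.1, post_win=0.1):
    """Per-channel change in multi-unit activity: mean MUAe in a window after
    stimulation onset minus mean MUAe in a window before it.
    muae: (channels, samples); stim_onset and windows in seconds; fs in Hz."""
    onset = int(stim_onset * fs)
    pre = muae[:, onset - int(pre_win * fs):onset].mean(axis=1)
    post = muae[:, onset:onset + int(post_win * fs)].mean(axis=1)
    return post - pre
```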

27.09.2025 02:52 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested in both computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.

πŸ‘οΈπŸ§  New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!

TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.

www.biorxiv.org/content/10.1...

27.09.2025 02:52 β€” πŸ‘ 93    πŸ” 25    πŸ’¬ 2    πŸ“Œ 6
NSF Graduate Research Fellowship Program (GRFP)

NSF GRFP is out 2.5 months late w/key changes

1. 2nd year graduate students not eligible.

2. "alignment with Administration priorities"

3. Unlike prior years, they DO NOT specify the expected number of awards... that is a BIG problem.

a brief 🧡 w/receipts

www.nsf.gov/funding/oppo...

27.09.2025 00:04 β€” πŸ‘ 94    πŸ” 79    πŸ’¬ 3    πŸ“Œ 6
Mouse vs. AI: A Neuroethological Benchmark for Visual Robustness and Neural Alignment Visual robustness under real-world conditions remains a critical bottleneck for modern reinforcement learning agents. In contrast, biological systems such as mice show remarkable resilience to environ...

🚨Our NeurIPS 2025 competition Mouse vs. AI is LIVE!

We combine a visual navigation task + large-scale mouse neural data to test what makes visual RL agents robust and brain-like.

Top teams: featured at NeurIPS + co-author our summary paper. Join the challenge!

Whitepaper: arxiv.org/abs/2509.14446

22.09.2025 23:13 β€” πŸ‘ 38    πŸ” 20    πŸ’¬ 3    πŸ“Œ 2
Thrilling progress in brain-computer interfaces from UC labs UC researchers and the patients they work with are showing the world what's possible when the human mind and advanced computers meet.

As federal research funding faces steep cuts, UC scientists are pushing brain-computer interfaces forward: restoring speech after ALS, easing Parkinson’s symptoms, and improving bionic vision with AI (that’s us πŸ‘‹ at @ucsantabarbara.bsky.social).

🧠 www.universityofcalifornia.edu/news/thrilli...

17.09.2025 17:59 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Curious though - many of the orgs leading this effort don’t seem to be on @bsky.app yet… Would love to see more #Blind, #Accessibility, and #DisabilityJustice voices here!

31.08.2025 00:49 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0