
Jenelle Feather

@jfeather.bsky.social

Flatiron Research Fellow #FlatironCCN. PhD from #mitbrainandcog. Incoming Asst Prof #CarnegieMellon in Fall 2025. I study how humans and computers hear and see.

1,249 Followers  |  511 Following  |  51 Posts  |  Joined: 26.09.2023

Latest posts by jfeather.bsky.social on Bluesky


🚨 #CCN2026 Proceedings submissions are open!
CCN 2026 again features an 8-page Proceedings track (alongside extended abstracts). Accepted papers will appear in CCN-Proceedings (CCN‑P) with DOIs on OpenReview.

28.01.2026 16:16 - 👍 33    🔁 22    💬 1    📌 6
Elizabeth Lee smiles at the camera.

Elizabeth Lee, a first-year Ph.D. student in Neural Computation, has been awarded CMU’s 2025 Sutherland-Merlino Fellowship. Her work bridges neuroscience and machine learning, and she’s passionate about advancing STEM access for underrepresented groups.
www.cmu.edu/mcs/news-eve...

30.09.2025 20:58 - 👍 7    🔁 3    💬 0    📌 0
YouTube video by PronunciationManual: How to Pronounce Chipotle

This remains my personal fave:

www.youtube.com/watch?v=3ADu...

23.09.2025 02:14 - 👍 1    🔁 0    💬 1    📌 0
Data on the Brain & Mind

📢 10 days left to submit to the Data on the Brain & Mind Workshop at #NeurIPS2025!

📝 Call for:
• Findings (4 or 8 pages)
• Tutorials

If you’re submitting to ICLR or NeurIPS, consider submitting here too, and highlight how to use a cog neuro dataset in our tutorial track!
🔗 data-brain-mind.github.io

25.08.2025 15:43 - 👍 8    🔁 5    💬 0    📌 0

So excited for CCN2026!!! 🧠🤔🤖🗽

15.08.2025 18:06 - 👍 8    🔁 0    💬 0    📌 0

arguably the most important component of AI for neuroscience:

data, and its usability

11.08.2025 11:28 - 👍 20    🔁 2    💬 1    📌 0

Join us at #NeurIPS2025 for our Data on the Brain & Mind workshop! We aim to connect machine learning researchers and neuroscientists/cognitive scientists, with a focus on emerging datasets.

More info: data-brain-mind.github.io

05.08.2025 00:21 - 👍 17    🔁 7    💬 0    📌 0
Computational Urban Ecology of New York City Rats: Urban rats are highly adaptable, thriving in the dynamic and often inhospitable conditions of modern cities. Despite substantial mitigation efforts, they remain an enduring presence in urban environme...

New preprint!



tl;dr: We ran around late at night to record wild rats in NYC and figured out how to quantify their behavior and environment. 🧵

w/ Dima Batenkov, @zamakany.bsky.social, Emily Mackevicius

www.biorxiv.org/content/10.1...

25.07.2025 15:57 - 👍 135    🔁 47    💬 6    📌 4

Announcing the new "Sensorimotor AI" Journal Club - please share/repost!

w/ Kaylene Stocking, Tommaso Salvatori, and @elisennesh.bsky.social

Sign up link: forms.gle/o5DXD4WMdhTg...

More details below 🧵 [1/5]

🧠🤖🧠📈

09.07.2025 22:31 - 👍 24    🔁 12    💬 1    📌 0
Variations in neuronal selectivity create efficient representational geometries for perception: Our visual capabilities depend on neural response properties in visual areas of our brains. Neurons exhibit a wide variety of selective response properties, but the reasons for this diversity are unkn...

In many brain areas, neuronal tuning is heterogeneous. But how does this diversity help behavior? We show how tuning diversity shapes representational geometry and boosts coding efficiency for perception in our new preprint: www.biorxiv.org/content/10.1...
(w/ @sueyeonchung.bsky.social & Tony Movshon)

29.06.2025 00:19 - 👍 75    🔁 20    💬 1    📌 3

Topics include but are not limited to:
• Optimal and adaptive stimulus selection for fitting, developing, testing or validating models
• Stimulus ensembles for model comparison
• Methods to generate stimuli with “naturalistic” properties
• Experimental paradigms and results using model-optimized stimuli

18.06.2025 20:52 - 👍 0    🔁 0    💬 0    📌 0

Consider submitting your recent work on stimulus synthesis and selection to our special issue at JOV!

18.06.2025 20:52 - 👍 2    🔁 1    💬 1    📌 0

What is the probability of an image? What do the highest and lowest probability images look like? Do natural images lie on a low-dimensional manifold?
In a new preprint with Zahra Kadkhodaie and @eerosim.bsky.social, we develop a novel energy-based model to answer these questions: 🧵

06.06.2025 22:11 - 👍 72    🔁 23    💬 1    📌 2
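
For context on the questions above: an energy-based model assigns each image x an energy E(x) and an unnormalized probability p(x) proportional to exp(-E(x)); the normalizing constant Z is generally intractable, but the relative probability of two images needs only their energy difference. A minimal sketch with a stand-in quadratic energy, not the learned model from the preprint:

```python
import numpy as np

# In an energy-based model, p(x) = exp(-E(x)) / Z, where Z normalizes over
# all images and is generally intractable.
def toy_energy(x):
    # Stand-in quadratic energy (equivalent to a Gaussian model); purely
    # illustrative, not the energy function from the preprint.
    return 0.5 * np.sum(x ** 2)

def log_prob_ratio(x_a, x_b):
    # Relative probability does not require Z:
    # log p(x_a) - log p(x_b) = E(x_b) - E(x_a)
    return toy_energy(x_b) - toy_energy(x_a)

rng = np.random.default_rng(0)
smooth = np.zeros(64)             # stand-in "high-probability" image
noisy = rng.standard_normal(64)   # stand-in "low-probability" image
print("log p(smooth) - log p(noisy) =", log_prob_ratio(smooth, noisy))
```

Because Z cancels, ranking images by probability under a learned energy-based model reduces to ranking them by energy.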

Just a few months until Cognitive Computational Neuroscience comes to Amsterdam! Check out our now-complete schedule for #CCN2025, with descriptions of each of the Generative Adversarial Collaborations (GACs), Keynotes-and-Tutorials (K&Ts), Community Events, Keynote Speakers, and social activities!

19.05.2025 11:08 - 👍 25    🔁 11    💬 0    📌 0

I’m happy to be at #VSS2025 and share what our lab has been up to this year!

I’m also honored to receive this year’s young investigator award and will give a short talk at the awards ceremony Monday

16.05.2025 18:12 - 👍 52    🔁 14    💬 3    📌 0
JOV Special Issue - Choose your stimuli wisely: Advances in stimulus synthesis and selection | JOV | ARVO Journals

The symposium also serves to kick off a special issue of JOV!

"Choose your stimuli wisely: Advances in stimulus synthesis and selection"
jov.arvojournals.org/ss/synthetic...
Paper Deadline: Dec 12th

For those not able to attend tomorrow, I will strive to post some of the highlights here 👀 👀 👀

15.05.2025 20:31 - 👍 4    🔁 0    💬 0    📌 0
VSS Symposia – Vision Sciences Society

Super excited for our #VSS2025 symposium tomorrow, "Model-optimized stimuli: more than just pretty pictures".
Join us to talk about designing and using synthetic stimuli for testing properties of visual perception!

May 16th @ 1-3PM in Talk Room #2

More info: www.visionsciences.org/symposia/?sy...

15.05.2025 20:31 - 👍 25    🔁 6    💬 1    📌 0

This is joint work with fantastic co-authors from @flatironinstitute.org Center for Computational Neuroscience: @lipshutz.bsky.social (co-first) @sarah-harvey.bsky.social @itsneuronal.bsky.social @eerosim.bsky.social

24.04.2025 05:12 - 👍 2    🔁 0    💬 0    📌 0

These examples demonstrate how our framework can be used to probe for informative differences in local sensitivities between complex models, and suggest how it could be used to compare model representations with human perception.

24.04.2025 05:12 - 👍 2    🔁 0    💬 1    📌 0

In a second example, we apply our method to a set of deep neural network models and reveal differences in the local geometry that arise due to architecture and training types, illustrating the method's potential for revealing interpretable differences between computational models.

24.04.2025 05:12 - 👍 0    🔁 0    💬 1    📌 0

As an example, we use this framework to compare a set of simple models of the early visual system, identifying a novel set of image distortions that allow immediate comparison of the models by visual inspection.

24.04.2025 05:12 - 👍 0    🔁 0    💬 1    📌 0

This provides an efficient method to generate stimulus distortions that discriminate image representations. These distortions can be used to test which model is closest to human perception.

24.04.2025 05:12 - 👍 0    🔁 0    💬 1    📌 0

We then extend this work to show that the metric may be used to optimally differentiate a set of *many* models, by finding a pair of “principal distortions” that maximize the variance of the models under this metric.

24.04.2025 05:12 - 👍 1    🔁 0    💬 1    📌 0
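
As an illustration of the "maximize the variance of the models under this metric" idea: given per-model Fisher information matrices F_i at a shared base image (computed, e.g., from each model's Jacobian as in the FIM sketch further down), one can search for a unit distortion direction e that maximizes the across-model variance of the local sensitivities e^T F_i e. The sketch below does this with projected gradient ascent on random stand-in FIMs; the matrices, dimensions, and optimizer are assumptions for illustration, not the procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64          # hypothetical distortion dimensionality
N_MODELS = 5

# Stand-in per-model Fisher information matrices at a shared base image
# (symmetric positive semi-definite), purely for illustration.
mats = rng.standard_normal((N_MODELS, D, D))
fims = mats @ mats.transpose(0, 2, 1) / D

def principal_distortion(fims, n_steps=2000, lr=0.1):
    """Projected gradient ascent on the across-model variance of e^T F_i e,
    keeping e on the unit sphere. Illustrative only."""
    e = rng.standard_normal(fims.shape[-1])
    e /= np.linalg.norm(e)
    f_mean = fims.mean(axis=0)
    for _ in range(n_steps):
        s = np.einsum('i,nij,j->n', e, fims, e)      # per-model sensitivities
        # Gradient of Var_n(s_n) with respect to e:
        grad = 4.0 / len(fims) * np.einsum('n,nij,j->i',
                                           s - s.mean(), fims - f_mean, e)
        e = e + lr * grad
        e /= np.linalg.norm(e)                       # project to unit sphere
    return e

e_star = principal_distortion(fims)
sens = np.einsum('i,nij,j->n', e_star, fims, e_star)
print("across-model variance along e*:", sens.var())
```

A second, complementary direction could be found the same way (for example by restricting the search to the orthogonal complement of the first), giving a pair of distortions as described in the post above.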

We use the FIM to define a metric on the local geometry of an image representation near a base image. This metric can be related to previous work investigating the sensitivities of one or two models.

24.04.2025 05:12 - 👍 1    🔁 0    💬 1    📌 0

We propose a framework for comparing a set of image representations in terms of their local geometries. We quantify the local geometry of a representation using the Fisher information matrix (FIM), a standard statistical tool for characterizing the sensitivity to local stimulus distortions.

24.04.2025 05:12 - 👍 1    🔁 0    💬 1    📌 0
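
A minimal sketch of the kind of FIM computation described above, assuming a deterministic differentiable representation f(x) with additive isotropic Gaussian response noise; under that assumption the FIM at a base image reduces to J^T J / sigma^2, where J is the Jacobian of f. The toy tanh model and the dimensions below are hypothetical stand-ins, not the models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D_PIX, D_RESP = 64, 32                       # hypothetical dimensions
W = rng.standard_normal((D_RESP, D_PIX)) / np.sqrt(D_PIX)

def jacobian(x):
    # Analytic Jacobian of the toy representation f(x) = tanh(W x):
    # J(x) = diag(1 - tanh(W x)^2) @ W
    return (1.0 - np.tanh(W @ x) ** 2)[:, None] * W

def fisher_information(x, noise_var=1.0):
    # With additive isotropic Gaussian response noise of variance noise_var,
    # the Fisher information matrix at base image x is J^T J / noise_var.
    J = jacobian(x)
    return J.T @ J / noise_var

base_image = rng.standard_normal(D_PIX)
F = fisher_information(base_image)

# The FIM defines a local metric: the sensitivity of the representation to a
# small distortion e of the base image is the quadratic form e^T F e.
e = rng.standard_normal(D_PIX)
e /= np.linalg.norm(e)
print("local sensitivity along e:", e @ F @ e)
```

The quadratic form e^T F e is the local metric referred to in the posts above: it measures how strongly the representation changes when the base image is perturbed along e.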

Recent work suggests that many models are converging to representations that are similar to each other and (maybe) to human perception. However, similarity is often measured using stimuli that are far apart in stimulus space. Even if global geometry is similar, the local geometry can be quite different.

24.04.2025 05:12 - 👍 0    🔁 0    💬 1    📌 0
Discriminating image representations with principal distortions: Image representations (artificial or biological) are often compared in terms of their global geometric structure; however, representations with similar global structure can have strikingly...

We are presenting our work “Discriminating image representations with principal distortions” at #ICLR2025 today (4/24) at 3pm! If you are interested in comparing model representations with other models or human perception, stop by poster #63. Highlights in 🧵
openreview.net/forum?id=ugX...

24.04.2025 05:12 - 👍 39    🔁 13    💬 1    📌 0

Applications close TODAY (April 14) for the 2025 Flatiron Institute Junior Theoretical Neuroscience Workshop.

All you need to apply is a CV and a 1-page abstract. 🧠🗽

14.04.2025 13:56 - 👍 4    🔁 3    💬 0    📌 0

📣 Grad students and postdocs in computational and theoretical neuroscience: please consider applying for the 2025 Flatiron Institute Junior Theoretical Neuroscience Workshop! All expenses are covered. Apply by April 14. jtnworkshop2025.flatironinstitute.org

09.04.2025 16:11 - 👍 21    🔁 16    💬 0    📌 0
