
Itamar Avitan

@avitanit.bsky.social

PhD candidate at Ben-Gurion University.

23 Followers  |  45 Following  |  11 Posts  |  Joined: 10.11.2025

Latest posts by avitanit.bsky.social on Bluesky

[Link preview] Human-like individual differences emerge from random weight initializations in neural networks
Much of AI research targets the behavior of an average human, a focus that traces to Turing's imitation game. Yet, no two human individuals behave exactly alike. In this study, we show that artificial...

No two humans behave exactly alike. But what about neural networks? We found early evidence that human-like individual differences in behavior emerge from networks trained with different initializations. Here's a peek at our results, to be presented at UniReps & DBM @NeurIPS. Full paper on the way!

26.10.2025 23:39 — 👍 11    🔁 3    💬 2    📌 1
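For illustration, a minimal sketch of the paradigm (toy data and scikit-learn MLPs, not the paper's actual setup): train identical networks that differ only in their random seed, then measure how often they disagree on held-out items.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy classification data standing in for a behavioral task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train, X_test = X[:1500], y[:1500], X[1500:]

# Identical architectures and data; only the random seed differs.
preds = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=seed)
    .fit(X_train, y_train)
    .predict(X_test)
    for seed in range(5)
]

# Pairwise disagreement across seeds: a crude index of the behavioral
# "individuality" induced purely by weight initialization.
disagreements = [np.mean(a != b)
                 for i, a in enumerate(preds) for b in preds[i + 1:]]
print("mean pairwise disagreement:", round(float(np.mean(disagreements)), 3))
```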
[Link preview] NeurIPS Poster: Model–Behavior Alignment under Flexible Evaluation: When the Best-Fitting Model Isn't the Right One (NeurIPS 2025)

Presenting our #NeurIPS2025 work on model–behavior alignment today.

Could we even recognize the "right" model of behavior under flexible evaluation?

Come chat about DNNs & human visual perception!
Hall C-E #2010
Friday (today!) 4:30–7:30 PM

neurips.cc/virtual/2025...

05.12.2025 18:48 — 👍 3    🔁 2    💬 0    📌 0
[Link preview] Model–Behavior Alignment under Flexible Evaluation: When the Best-Fitting Model Isn't the Right One
Linearly transforming stimulus representations of deep neural networks yields high-performing models of behavioral and neural responses to complex stimuli. But does the test accuracy of such predictio...

Kudos to our NeurIPS 2025 reviewers for thoughtful, human-generated reviews. I'll be presenting poster #2010 in San Diego on Fri, 5 Dec from 4:30–7:30 p.m. PT. Come say hi!
arXiv: arxiv.org/abs/2510.23321
Code and data: github.com/brainsandmachines/oddoneout_model_recovery

20.11.2025 14:05 — 👍 3    🔁 0    💬 0    📌 0
[Video thumbnail]

Our work reveals a sharp trade-off between predictive accuracy and model identifiability. Flexible mappings maximize predictivity, but blur the distinction between competing computational hypotheses.

20.11.2025 14:05 — 👍 2    🔁 1    💬 1    📌 0
[Post image]

Further analyses showed that linear probing was the culprit. The linear fit warps each model's original feature space, erasing its unique signature and making all aligned models converge toward a human-like representation.

20.11.2025 14:05 — 👍 3    🔁 0    💬 1    📌 0
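A toy illustration of this convergence effect (not the paper's analysis; the random features and shared target are stand-ins): once two models' features are linearly fitted to the same target, their aligned spaces can become nearly indistinguishable under linear centered kernel alignment (CKA). With more features than samples, the least-squares fit is exact, so both aligned spaces collapse onto the target.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between feature matrices of
    shape (n_samples, d); 1 means identical representational geometry."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
n, d, k = 200, 512, 32
A = rng.standard_normal((n, d))  # toy features of model A
B = rng.standard_normal((n, d))  # toy features of model B
T = rng.standard_normal((n, k))  # shared "human-like" target

# Fit each model's features to the same target by least squares.
W_A, *_ = np.linalg.lstsq(A, T, rcond=None)
W_B, *_ = np.linalg.lstsq(B, T, rcond=None)
print("CKA before alignment:", round(linear_cka(A, B), 3))
print("CKA after alignment: ", round(linear_cka(A @ W_A, B @ W_B), 3))
```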
[Post image]

The key dependent measure is how often the data-generating model actually achieves the highest prediction accuracy. The surprising result: even with massive datasets (millions of trials), the best-performing model is often not the right one.

20.11.2025 14:05 — 👍 1    🔁 0    💬 1    📌 0
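In code, this dependent measure is simply the fraction of simulations in which the data-generating model wins the comparison; a sketch with made-up toy records (not real results):

```python
import numpy as np

# Hypothetical per-simulation records: which candidate generated the
# data, and which candidate achieved the best prediction accuracy.
generator = np.array([3, 7, 7, 0, 12, 3, 19, 5])
winner = np.array([3, 1, 7, 0, 4, 9, 19, 5])

# Recovery rate: how often the generating model also wins. A value
# well below 1.0 means the best-fitting model is often not the right one.
recovery_rate = (generator == winner).mean()
print(f"recovery rate: {recovery_rate:.2f}")  # 0.62 on these toy values
```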
[Video thumbnail]

Each simulation worked like this: (1) pick one model from 20 candidate NNs and fit it to human responses; (2) sample a synthetic dataset from that model using NEW triplets; (3) test all 20 models on this generated data, measuring cross-validated prediction accuracy.

20.11.2025 14:05 — 👍 1    🔁 0    💬 1    📌 0
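A self-contained toy version of this recovery loop (random embeddings stand in for the candidate networks, and the human-fitting step (1) is reduced to picking a candidate; none of this is the released code's API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, d, n_models, n_trials = 50, 16, 20, 5000

# Toy stand-ins for the 20 candidate networks: random embeddings.
models = [rng.standard_normal((n_images, d)) for _ in range(n_models)]

def pair_logits(emb, triplets):
    """Logits over which pair -- (i,j), (i,k), or (j,k) -- is most
    similar; the image left out of the winning pair is the odd one."""
    i, j, k = triplets.T
    return np.stack([(emb[i] * emb[j]).sum(-1),
                     (emb[i] * emb[k]).sum(-1),
                     (emb[j] * emb[k]).sum(-1)], axis=-1)

def sample_choices(emb, triplets, rng):
    """Step (2): sample stochastic odd-one-out choices via softmax."""
    logits = pair_logits(emb, triplets)
    p = np.exp(logits - logits.max(-1, keepdims=True))
    p /= p.sum(-1, keepdims=True)
    return np.array([rng.choice(3, p=row) for row in p])

def accuracy(emb, triplets, choices):
    """Step (3): argmax prediction accuracy on the synthetic data."""
    return (pair_logits(emb, triplets).argmax(-1) == choices).mean()

# Step (1): pick a generator, then ask whether it beats the other
# 19 candidates on data it generated itself from new triplets.
gen = int(rng.integers(n_models))
triplets = np.array([rng.choice(n_images, size=3, replace=False)
                     for _ in range(n_trials)])
choices = sample_choices(models[gen], triplets, rng)
scores = [accuracy(m, triplets, choices) for m in models]
print("generator recovered:", int(np.argmax(scores)) == gen)
```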

We ran model recovery simulations using models fitted to the massive THINGS odd-one-out data shared by @martinhebart.bsky.social, @cibaker.bsky.social, et al. Each simulation tested whether a neural network model would "win" the model comparison if it had generated the behavioral data.

20.11.2025 14:05 — 👍 1    🔁 0    💬 1    📌 0
[Link preview] NeurIPS Poster: Model–Behavior Alignment under Flexible Evaluation: When the Best-Fitting Model Isn't the Right One (NeurIPS 2025)

In our new NeurIPS 2025 paper, we ask: does better predictive accuracy necessarily mean better mechanistic correspondence between neural networks and human representations? neurips.cc/virtual/2025...

20.11.2025 14:05 — 👍 3    🔁 0    💬 1    📌 0
[Link preview] Human alignment of neural network representations
Today's computer vision models achieve human or near-human level performance across a wide variety of vision tasks. However, their architectures, data, and learning algorithms differ in numerous ways ...

They also showed that nudging the NN representations toward human judgments, by linearly transforming the representation space itself, boosts cross-validated prediction accuracy almost to the reliability bound. arxiv.org/abs/2211.01201

20.11.2025 14:05 — 👍 2    🔁 0    💬 1    📌 0
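A minimal sketch of such a linear alignment step (a plain softmax-over-pair-similarities objective in PyTorch; `feats` and `triplets` are hypothetical placeholders, and this is not the cited paper's exact objective):

```python
import torch
import torch.nn.functional as F

def fit_linear_transform(feats, triplets, steps=500, lr=1e-2):
    """feats: frozen network embeddings, (n_images, d).
    triplets: human trials as index triples (i, j, k), where humans
    judged k the odd one out, i.e. (i, j) is the most similar pair."""
    n, d = feats.shape
    W = torch.eye(d, requires_grad=True)  # start from the identity map
    opt = torch.optim.Adam([W], lr=lr)
    i, j, k = triplets.T
    target = torch.zeros(len(triplets), dtype=torch.long)  # pair (i, j)
    for _ in range(steps):
        z = feats @ W
        # The model "chooses" the most similar pair; the third image
        # is its predicted odd one out.
        logits = torch.stack([(z[i] * z[j]).sum(-1),
                              (z[i] * z[k]).sum(-1),
                              (z[j] * z[k]).sum(-1)], dim=-1)
        loss = F.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return W.detach()

# Toy usage with random features and random trials:
feats = torch.randn(100, 64)
triplets = torch.randint(0, 100, (1000, 3))
W = fit_linear_transform(feats, triplets)
```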

@lukasmut.bsky.social, @lorenzlinhardt.bsky.social, et al. showed that neural network representations can be strong predictors of human odd-one-out judgments: the image humans select as "odd" among three is often the one whose activation pattern differs most from the other two.

20.11.2025 14:05 — 👍 3    🔁 0    💬 1    📌 0
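That decision rule is simple to state in code; a minimal sketch, assuming raw embeddings and dot-product similarity (`predict_odd_one_out` is an illustrative helper, not from the released code):

```python
import numpy as np

def predict_odd_one_out(embeddings):
    """Given a (3, d) array of network activations for a triplet,
    return the index of the predicted odd image: the one left out
    of the most similar pair."""
    sims = embeddings @ embeddings.T  # pairwise dot-product similarities
    pairs = [(0, 1), (0, 2), (1, 2)]
    most_similar = max(pairs, key=lambda p: sims[p])
    return ({0, 1, 2} - set(most_similar)).pop()

# Toy usage with random features for three images:
rng = np.random.default_rng(0)
print(predict_odd_one_out(rng.standard_normal((3, 512))))
```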

Excited to share my first paper: Model–Behavior Alignment under Flexible Evaluation: When the Best-Fitting Model Isn't the Right One (NeurIPS 2025). Link below.

20.11.2025 14:05 — 👍 17    🔁 4    💬 1    📌 2
