Happy to share that our paper "Mixture of Cognitive Reasoners: Modular Reasoning with Brain-Like Specialization" (aka MiCRo) has been accepted to #ICLR2026!! 🎉
See you in Rio 🇧🇷
@bkhmsi.bsky.social
PhD at EPFL. Ex @MetaAI, @SonyAI, @Microsoft. Egyptian 🇪🇬
I also wrote a blog post reflecting on why this project started, how it evolved, and why I believe we often underestimate the power of inspiration.
Blog: bkhmsi.medium.com/egyptian-res...
If this helps even one person see what's possible, it's worth it.
We also added a statistics page visualizing key aspects of the community: academia vs. industry, research areas, positions, etc.
Webpage: egyptians-in-cs.github.io#/en/stats
The website now has much better filtering by subfield, making it easier to explore different areas of Computer Science and discover researchers working on specific topics.
19.01.2026 20:16
One of the most powerful additions: an interactive map showing where Egyptian researchers are around the world
It highlights the Egyptian diaspora and how widely Egyptian researchers are contributing across the world.
The website now features 262 Egyptian researchers across all of Computer Science, from systems and theory to AI, security, HCI, and more.
What started as a short list became a much broader story about visibility and representation.
Three years ago, I built a website called Egyptians in AI Research. Today, that project has grown into Egyptian Researchers in Computer Science thanks to the help of @mo-mo2025.bsky.social!
I wrote a blog post on how it grew, and why it expanded beyond AI.
π egyptians-in-cs.github.io
🧵👇
Great to see our "From Language to Cognition" work featured in @mordecwhy.bsky.social's latest piece on language models and the brain. Glad to contribute to the conversation!
www.foommagazine.org/language-mod...
Missed the Re-Align hackathon at ICLR or CCN 2025, or want more? Re-Align is back at ICLR 2026! Beyond the paper track, we're launching a persistent shared-task challenge + challenge paper track. Can't wait to see your creative & critical takes on representational alignment!
08.01.2026 09:54
4/
Re-Align 2026 is made possible by an interdisciplinary team of co-organizers:
@bkhmsi.bsky.social, Brian Cheung, @dotadotadota.bsky.social, @eringrant.me, Stephanie Fu, @kushinm.bsky.social, @sucholutsky.bsky.social, and @siddsuresh97.bsky.social!
3/
Joining us at Re-Align 2026 is a fantastic lineup of invited speakers covering ML, neuroscience, and cognitive science:
David Bau, Arturo Deza, @judithfan.bsky.social, @alonaf.bsky.social, @phillipisola.bsky.social, and Danielle Perszyk!
2/
Building on last year's hackathon, we're launching a persistent shared-task challenge to support transparent and reproducible representational alignment research.
Stay in the loop:
GitHub: github.com/representational-alignment/challenge
Form: forms.gle/EUVCyE9gykQA...
Feb 26, 2026 (AoE)
Re-Align is back for its 4th edition at ICLR 2026!
We invite submissions on representational alignment, spanning ML, Neuroscience, CogSci, and related fields.
Tracks: Short (≤5p), Long (≤10p), Challenge (blog)
Deadline: Feb 5, 2026 for papers
representational-alignment.github.io/2026/
This year was hard, personally and globally, from ongoing visa issues that disrupted my life and may prevent me from achieving one of my dreams, to the state of the world itself.
Still grateful for what I achieved and for everyone who supported me.
Wishing us all a brighter year ahead. ✨
1/ How does mixing data from hundreds of languages affect LLM training?
In our new paper "Revisiting Multilingual Data Mixtures in Language Model Pretraining" we revisit core assumptions about multilinguality using 1.1B-3B models trained on up to 400 languages.
🧵👇
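To make the question concrete: a common way to set a multilingual pretraining mixture is temperature-based sampling, where each language's raw corpus share is raised to 1/T and renormalized so that higher temperatures upweight low-resource languages. This is a standard technique, not necessarily the method used in the paper, and the language counts below are made-up illustrative numbers.

```python
# Temperature-scaled language sampling for a multilingual pretraining mixture.
# T = 1 reproduces the raw corpus proportions; T > 1 flattens the distribution,
# giving low-resource languages a larger share of training tokens.

def mixture_weights(token_counts: dict[str, int], temperature: float = 3.0) -> dict[str, float]:
    total = sum(token_counts.values())
    # Raise each language's share p_l to the power 1/T ...
    scaled = {lang: (n / total) ** (1.0 / temperature) for lang, n in token_counts.items()}
    # ... then renormalize so the weights sum to 1.
    z = sum(scaled.values())
    return {lang: w / z for lang, w in scaled.items()}

# Hypothetical corpus sizes (tokens) for three languages.
counts = {"en": 900_000, "ar": 90_000, "sw": 10_000}
weights = mixture_weights(counts, temperature=3.0)
```

With T = 3, Swahili's sampling weight rises well above its 1% raw share while English drops below its 90% share, which is exactly the kind of trade-off a study of multilingual mixtures has to quantify.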
Looking forward to speaking at IndabaX Sudan on "Building Responsible and Ethical LLMs"!
Saturday, December 13th
2:00 PM (GMT+2)
Register here: docs.google.com/forms/d/e/1F...
See you all there! :)
Not attending NeurIPS this year, but very much looking to connect.
I'm seeking a PhD research internship next summer in AI for Science, especially where AI meets brain and cognitive sciences. 🧠
If youβre hiring, Iβd love to connect!
bkhmsi.github.io
I finally found time to update the Egyptians in AI Research website, apologies for the delay!
Super excited to share that we now feature 227 incredible Egyptian researchers!! 🤯
Link: bkhmsi.github.io/egyptians-in...
You can learn more about our work here: language-to-cognition.epfl.ch
Thanks to all my co-authors @gretatuckute.bsky.social, @davidtyt.bsky.social, @neurotaha.bsky.social and my advisors @abosselut.bsky.social and @mschrimpf.bsky.social!
On my way to #EMNLP2025 🇨🇳
I'll be presenting our work (Oral) on Nov 5, Special Theme session, Room A106-107 at 14:30.
Let's talk brains 🧠, machines 🤖, and everything in between :D
Looking forward to all the amazing discussions!
10/
Huge thanks to my incredible co-authors @cndesabbata.bsky.social, @gretatuckute.bsky.social, @eric-zemingchen.bsky.social
and my advisors @mschrimpf.bsky.social and @abosselut.bsky.social!
9/
Explore MiCRo:
Website: cognitive-reasoners.epfl.ch
Paper: arxiv.org/abs/2506.13331
HF Space (interactive): huggingface.co/spaces/bkhmsi/cognitive-reasoners
HF Models: huggingface.co/collections/bkhmsi/mixture-of-cognitive-reasoners-684709a0f9cdd7fa180f6678
8/
We now have a collection of 10 MiCRo models on HF that you can try out yourself!
HF Models: huggingface.co/collections/bkhmsi/mixture-of-cognitive-reasoners-684709a0f9cdd7fa180f6678
7/
We built an interactive HF Space where you can see how MiCRo routes tokens across specialized experts for any prompt, and even toggle experts on/off to see how behavior changes.
Try it here: huggingface.co/spaces/bkhms...
(Check the example prompts to get started!)
6/
We also wondered: if neuroscientists use functional localizers to map networks in the brain, could we do the same for MiCRoβs experts?
The answer: yes! The very same localizers successfully recovered the corresponding expert modules in our models!
5/
One result I was particularly excited about is the emergent hierarchy we found across MiCRo layers:
🔺 Earlier layers route tokens to Language experts.
🔻 Deeper layers shift toward domain-relevant experts.
This emergent hierarchy mirrors patterns observed in the human brain 🧠
4/
We find that MiCRo matches or outperforms baselines on reasoning tasks (e.g., GSM8K, BBH) and aligns better with human behavior (CogBench), while maintaining interpretability!!
3/
✨ Why it matters:
MiCRo bridges AI and neuroscience:
ML side: Modular architectures make LLMs more interpretable and controllable.
Cognitive side: Provides a testbed for probing how the relative contributions of different brain networks support complex behavior.
2/
🧩 Recap:
MiCRo takes a pretrained language model and post-trains it to develop distinct, brain-inspired modules aligned with four cognitive networks:
🗣️ Language
🔢 Logic / Multiple Demand
Social / Theory of Mind
World / Default Mode Network
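The core idea behind this kind of modular design, a router sending each token to one of four named expert modules, with the option to toggle experts off, can be sketched in a few lines. This is an illustrative top-1 mixture-of-experts layer, not the authors' actual MiCRo implementation; the class name, expert names, and dimensions below are all hypothetical.

```python
# Illustrative sketch of brain-inspired expert routing: a per-token router
# picks one of four "cognitive" expert MLPs, and an expert can be toggled
# off at inference by masking its routing logit.
import torch
import torch.nn as nn

EXPERTS = ["language", "logic", "social", "world"]

class CognitiveMoELayer(nn.Module):
    def __init__(self, d_model: int = 64, d_ff: int = 128):
        super().__init__()
        self.router = nn.Linear(d_model, len(EXPERTS))  # per-token routing logits
        self.experts = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
            )
            for name in EXPERTS
        })

    def forward(self, x: torch.Tensor, disabled: set[str] = frozenset()):
        # x: (batch, seq, d_model)
        logits = self.router(x)
        # Toggling an expert off = masking its logit before the argmax.
        for name in disabled:
            logits[..., EXPERTS.index(name)] = float("-inf")
        choice = logits.argmax(dim=-1)  # hard top-1: one expert per token
        out = torch.zeros_like(x)
        for i, name in enumerate(EXPERTS):
            mask = (choice == i).unsqueeze(-1)
            out = out + mask * self.experts[name](x)
        return out, choice

layer = CognitiveMoELayer()
x = torch.randn(1, 5, 64)
out, choice = layer(x, disabled={"social"})  # ablate the "social" expert
```

The hard top-1 routing here is the simplest variant; it is what makes per-token expert choices directly inspectable (e.g., in an interactive demo) and makes expert ablation a one-line mask.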
Excited to share a major update to our "Mixture of Cognitive Reasoners" (MiCRo) paper!
We ask: What benefits can we unlock by designing language models whose inner structure mirrors the brainβs functional specialization?
More below 🧠👇
cognitive-reasoners.epfl.ch