Looking forward to presenting this work next week at #ICLR2025! DM me if you are attending and want to grab a coffee to discuss these topics!
18.04.2025 18:55 • 20 likes · 4 reposts · 0 replies · 0 quotes
On December 5th, our ML theory group at Cohere For AI is hosting @mathildepapillon.bsky.social to discuss their recent review arxiv.org/abs/2407.09468 on geometric/topological/algebraic ML.
Join us online!
02.12.2024 13:14 • 13 likes · 1 repost · 0 replies · 2 quotes
I'm putting together a starter pack for researchers working on human-centered AI evaluation. Reply or DM me if you'd like to be added, or if you have suggestions! Thank you!
(It looks NLP-centric at the moment, but that's due to the current limits of my own knowledge.)
go.bsky.app/G3w9LpE
21.11.2024 15:56 • 36 likes · 10 reposts · 15 replies · 1 quote
I tried to find everyone who works in the area, but I certainly missed some folks, so please lmk...
go.bsky.app/BYkRryU
23.11.2024 05:11 • 53 likes · 18 reposts · 32 replies · 0 quotes
Does anyone know of any feeds (or similar) for student internship opportunities in ML/CV/NLP?
22.11.2024 07:19 • 44 likes · 11 reposts · 2 replies · 1 quote
I've found starter packs on NLP, vision, graphics, etc. But personally, I would love to know and hear from researchers working on vision-language. So let me know if you'd like to join this starter pack; I'd be happy to add you!
go.bsky.app/TENRRBb
19.11.2024 19:52 • 55 likes · 13 reposts · 42 replies · 2 quotes
How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge? In our new preprint, we look at the pretraining data and find evidence against this:
Procedural knowledge in pretraining drives LLM reasoning
🧵⬇️
20.11.2024 16:31 • 858 likes · 140 reposts · 36 replies · 24 quotes
LLMs tend to match problem-solving strategies based on textual similarity rather than truly understanding the underlying principles of mathematical problems.
Paper: Do Large Language Models Truly Grasp Mathematics? An Empirical Exploration From Cognitive Psychology
18.11.2024 21:29 • 47 likes · 7 reposts · 0 replies · 1 quote
A starter pack of people working on interpretability / explainability of all kinds, using theoretical and/or empirical approaches.
Reply or DM if you want to be added, and help me reach others!
go.bsky.app/DZv6TSS
14.11.2024 17:00 • 80 likes · 26 reposts · 34 replies · 0 quotes
If you're interested in mechanistic interpretability, I just found this starter pack and wanted to boost it (thanks for creating it @butanium.bsky.social!). Excited to have a mech interp community on Bluesky!
go.bsky.app/LisK3CP
19.11.2024 00:28 • 36 likes · 8 reposts · 3 replies · 2 quotes
I also work in this field (examples on my profile). Would love to be added!
19.11.2024 09:42 • 1 like · 0 reposts · 0 replies · 0 quotes
Bluesky Network Analyzer
Find accounts that you don't follow (yet) but are followed by lots of accounts that you do follow.
I forget who in my feed I got this from, but anyway, this network analyzer is crazy efficient. It gives you ideas for accounts to follow based on your own followees. I just added 50 accounts or so.
bsky-follow-finder.theo.io
18.11.2024 21:32 • 82 likes · 24 reposts · 9 replies · 7 quotes
there are many smart speakers and thinkers around AI/ML and/or NLP. but i find almost everything to be kinda predictable by now, minor stylistic variations on the same story. who are some *interesting* speakers i should listen to or read? i want things that may surprise or inspire me.
16.11.2024 20:41 • 96 likes · 12 reposts · 12 replies · 0 quotes
Any Latin Americans here working in Cognitive Science, very broadly construed? (Neuroscience, Psychology, Artificial Intelligence, Anthropology, Linguistics, Economics, Ethics, Philosophy, and moreβ¦)
I thought I'd create a starter pack, but I could only find a handful of us. Say hi?
17.11.2024 13:37 • 1 like · 5 reposts · 2 replies · 0 quotes
It is intuitive to observe some complex-looking model behavior (e.g., the classification of images of different animals using an abstract category) and infer an interesting capacity of the model (e.g., the ability to build rich representations that abstract away from particular animals).
17.11.2024 14:34 • 0 likes · 1 repost · 1 reply · 0 quotes
We found that the mechanisms behind the emergence of these representations are similar to those of LLMs, and can be found across a variety of vision transformers and layer types.
17.11.2024 14:06 • 1 like · 0 reposts · 0 replies · 0 quotes
[1/2] Position paper at #ICML2024: "An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience"
17.11.2024 14:06 • 2 likes · 0 reposts · 1 reply · 0 quotes
Hi Bluesky! 🦋 I'm a computer science PhD student with a background in cognitive neuroscience. Working at the intersection of these topics, my research focuses on reverse engineering the cognitive capacities of AI models 🧠💻
Some recent examples 👇
17.11.2024 14:06 • 23 likes · 3 reposts · 2 replies · 0 quotes
I made a starter pack with the people doing something related to Neurosymbolic AI that I could find.
Let me know if I missed you!
go.bsky.app/RMJ8q3i
11.11.2024 15:27 • 92 likes · 36 reposts · 16 replies · 2 quotes
New here? Interested in AI/ML? Check out these great starter packs!
AI: go.bsky.app/SipA7it
RL: go.bsky.app/3WPHcHg
Women in AI: go.bsky.app/LaGDpqg
NLP: go.bsky.app/SngwGeS
AI and news: go.bsky.app/5sFqVNS
You can also search all starter packs here: blueskydirectory.com/starter-pack...
09.11.2024 09:13 • 557 likes · 213 reposts · 67 replies · 55 quotes
Cognitive and perceptual psychologist, industrial designer, & electrical engineer. Assistant Professor of Industrial Design at University of Illinois Urbana-Champaign. I make neurally plausible bio-inspired computational process models of visual cognition.
PhD Fellow in AI Evals @UniCopenhagen.
Interested in AI Policy/ AI Ethics/ Responsible AI.
Community Lead @cohereforai.bsky.social
Site: ruchiradhar.github.io
#nlproc #llm #ai
PhD candidate at the Centre for Cognitive Science, TU Darmstadt.
Explanations for AI, sequential decision-making, problem solving.
PhD student in explainable AI for computer vision @visinf.bsky.social @tuda.bsky.social - Prev. intern AWS and @maxplanck.de
The largest workshop on analysing and interpreting neural networks for NLP.
BlackboxNLP will be held at EMNLP 2025 in Suzhou, China
blackboxnlp.github.io
PhD student @LIG | Causal abstraction, interpretability & LLMs
PhD student at @MPI_NL @Donders, working on multimodal semantic representations in 🖥️ and 👶🧠 https://tianaidong.github.io/
PhD candidate for Interpretable AI @ Fraunhofer HHI Berlin
International Conference on Learning Representations https://iclr.cc/
PhD Candidate @hpi.de, multimodal deep learning, vision-language models.
sarah.eslami.me
Assistant Professor of Machine Learning
Generative AI, Uncertainty Quantification, AI4Science
Amsterdam Machine Learning Lab, University of Amsterdam
https://naesseth.github.io
PhD Student | Works on Explainable AI | https://donatellagenovese.github.io/
#NLProc PhD Student & Research Associate at Bielefeld University
Working on: Question Answering over Linked Data, Semantic Web, Lexical Knowledge & Compositionality in AI
https://davidmschmidt.de
PhD Student @ https://selflearningsystems.uni-koeln.de/
Working on the intersection of (visual) neuroscience, orthographic processing, computer vision and machine learning
#Reading #CognitiveNeuroscience
Interested in the building blocks of intelligence: neural & computational mechanisms underlying how we rapidly learn, generalize; how our mental models help us experience & infer; curiosity and ideation
https://tarananigam.github.io/TaranaNigam/index.html
CS Masterβs @ NTHU | previously @ NCTU & NYMU in Cog. Neuro.
Researching architectural experience, social issues, and human-AI co-design through cog. comp. neurosci. & HCI - drawing inspiration from everyday.
https://www.notion.so/YC-s-Personal-Site-1d1f
3D Vision and Robotics - PostDoc @ Stanford
https://francisengelmann.github.io/
MS AI @ VU Amsterdam - Interested in Geometrical Deep Learning | Topological Deep Learning | Graphs