@beatrixmgn.bsky.social
PhD student in machine learning at DTU, Copenhagen. Especially interested in model representations.
Really appreciated Joshua Gans' postmortem on an experiment in vibe researching
joshuagans.substack.com/p/reflection...
Taste is still paramount, and the models are instruction-tuned for sycophancy to all hell
Job alert: Postdoc in Human-Computer Interaction with Explainable AI at University of Copenhagen
Copenhagen
Apply by Feb 1st
https://employment.ku.dk/faculty/?show=153139
The promise of AI chat assistants: they solve 90% of the problems users have (by looking up the docs and telling them)
My reality: I need to spend 10 minutes trying to reach a human to resolve an issue that actually requires customer support to look into
Around minute 8 I sign up with a competitor
5/5
@ema-ridopoco.bsky.social and @andreadittadi.bsky.social will be presenting a poster in San Diego, and Luigi and I will be presenting at EurIPS (eurips.cc) in Copenhagen, so come on by!
4/5
We also show that it is possible to define a metric between probability distributions and a measure of representational dissimilarity such that when the distributions are close under this metric, the representations are similar.
3/5
The two models agree on their prediction for the highest-likelihood label, but they disagree on the likelihood ranking of the remaining labels; while this has a negligible effect on the KL divergence, it makes the relation between their representations non-linear.
2/5
We prove that a small KL divergence between models is not enough to guarantee similar representations. Here is an example of how to construct two models with small KL divergence but whose representations are far from being linear transformations of each other.
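A toy numerical sketch of this point (my own illustration under assumed toy settings, not the construction from the paper): two softmax models that put almost all probability mass on the same top label but rank the tail labels differently end up with a tiny KL divergence, while their representations, here related by an elementwise non-linearity, are barely explained by any linear map. All dimensions, scales, and helper names below are illustrative assumptions.

```python
# Toy sketch (illustrative only, not the paper's construction): small KL between
# predictive distributions does not force the representations to be linearly related.
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 2000, 16, 10                  # samples, representation dim, labels

H1 = rng.normal(size=(n, d))            # representations of model 1
H2 = np.sin(3.0 * H1)                   # model 2: an elementwise non-linear transform

def predictive(H, seed, top_boost=6.0):
    """Softmax head that puts ~all mass on label 0; the tiny remaining mass is
    spread over the other labels by a small random readout, so the two models
    agree on the top label but rank the tail labels differently."""
    W = np.random.default_rng(seed).normal(scale=0.1, size=(H.shape[1], K))
    logits = H @ W
    logits[:, 0] += top_boost
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

P1, P2 = predictive(H1, seed=1), predictive(H2, seed=2)

# Average KL divergence between the two predictive distributions: tiny, because
# the disagreements only concern labels carrying almost no probability mass.
kl = np.mean(np.sum(P1 * (np.log(P1) - np.log(P2)), axis=1))

# The best linear map H1 -> H2 explains almost none of the variance: the
# representations are far from linear transformations of each other.
A, *_ = np.linalg.lstsq(H1, H2, rcond=None)
r2 = 1.0 - np.var(H2 - H1 @ A) / np.var(H2)

print(f"mean KL(P1 || P2): {kl:.4f}")   # small
print(f"linear-fit R^2:    {r2:.3f}")   # close to 0
```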
1/5
We study when and why representations learned by different neural networks are similar from the perspective of identifiability theory, which suggests that a measure of representational similarity should be invariant to transformations that leave the model distribution unchanged.
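A minimal sketch of the kind of transformation meant here (my own illustration, not code from the paper): linearly mixing the representation and absorbing the change into the softmax readout leaves the model distribution untouched, so a distribution-grounded similarity measure should treat the two representations as equivalent. The matrices and dimensions below are arbitrary assumptions.

```python
# Sketch: an invertible linear map of the representation, with the readout
# adjusted accordingly, leaves the softmax distribution unchanged.
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 100, 8, 5                     # samples, representation dim, labels

H = rng.normal(size=(n, d))             # representations of "model 1"
W = rng.normal(size=(d, K))             # its softmax readout

A = rng.normal(size=(d, d))             # an (almost surely) invertible linear map
H2 = H @ A.T                            # transformed representations: h2 = A h
W2 = np.linalg.solve(A.T, W)            # adjusted readout so that A.T @ W2 == W

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

P1 = softmax(H @ W)                     # distribution of the original model
P2 = softmax(H2 @ W2)                   # distribution after transforming h and W

print(np.max(np.abs(P1 - P2)))          # ~1e-12: identical model distribution
print(np.max(np.abs(H2 - H)))           # large: the representations themselves differ
```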
I am happy to announce that our article "When Does Closeness in Distribution Imply Representational Similarity? An Identifiability Perspective" has been accepted at NeurIPS 2025! arxiv.org/abs/2506.037...
Details below.
Andrej Karpathy's take on AI coding agents feels grounded. The industry's chasing full autonomy when models still hallucinate too much.
Agents that churn out a thousand lines of code leave you either blindly trusting them or slogging through reviews. These tools should embrace their fallibility.
So if you did not have the opportunity to come by at ACL in Vienna, here is your second chance! :D
Hope to see you there! eurips.cc/ellis.
Good news everyone!
I'll be presenting the paper I did with Marco and Iuri, "Prediction Hubs are Context-Informed Frequent tokens in LLMs", at the ELLIS UnConference on December 2nd in Copenhagen. arxiv.org/abs/2502.10201
How can we make AI explanations provably correct, not just convincing?
Join us for the Theory of Explainable Machine Learning Workshop, part of the ELLIS UnConference Copenhagen on Dec 2, co-located with #EurIPS.
Call for contributions open until Oct 15 (AoE)
eurips.cc/ellis
Applications for the @ellis.eu PhD Program 2025 are now open!
Join Europe's leading network in AI & Machine Learning research.
Choose among three collaboration pathways: Academic, Industry, or Interdisciplinary.
Apply by Oct 31, 2025
ellis.eu/phd-program
EurIPS registration is now open! Get one of the 1,500 tickets available. 500 tickets are discounted for students.
In Copenhagen on Dec 2-7, 2025.
Entrance to the ELLIS UnConference is free with a #EurIPS ticket.
Get your ticket now: https://eurips.cc/
@euripsconf.bsky.social
Job alert: Several fully-funded open Post-Doc positions for 'AI for Scientific Discovery in Physics' at @unituebingen.bsky.social
Tübingen
Apply by Oct 28th (unless filled before)
ellis.eu/jobs/ai-for-...
Without any theory, machine learning has evolved a universal theory of data representation. Can we post-hoc rationalize it with math?
Congratulations to everyone who got their @neuripsconf.bsky.social papers accepted!
At #EurIPS we are looking forward to welcoming presentations of all accepted NeurIPS papers, including a new "Salon des Refusés" track for papers which were rejected due to space constraints!
- Fully funded PhD fellowship on Explainable NLU: apply by 31 October 2025, start in Spring 2026: candidate.hr-manager.net/ApplicationI...
- Open-topic PhD positions: express your interest through ELLIS by 31 October 2025, start in Autumn 2026: ellis.eu/news/ellis-p...
#NLProc #XAI
I wrote about the tragic death of Adam Raine and the venal negligence of "AI Safety." www.argmin.net/p/the-banal-...
Interested in a #PhD in machine learning or #AI? The ELLIS PhD Program connects top students with leading researchers across Europe. The application portal opens on Oct 1st. Curious? Join our info session on the same day. Get all the info below.
I've been using Kagi for over a year as my default search engine. It's excellent.
Hi! Anyone remember when I was saying that it might be functionally impossible to operate as a trans academic in the US in ~3-5 months, say, 3-5 months ago?
Direct your research. Forge lifelong connections. Apply for the 2026 Complexity Postdoctoral Fellowships. SFI is looking for recent PhD graduates with quantitative and computational skills interested in theoretical collaborative research.
Deadline: Oct 1, 2025
Apply: santafe.edu/sfifellowship
The EPFL NLP lab is looking to hire a postdoctoral researcher on the topic of designing, training, and evaluating multilingual LLMs:
docs.google.com/document/d/1...
Come join our dynamic group in beautiful Lausanne!
Thank you to everyone who came to talk to us at ACL!
Very happy that so many people are interested in our work :D
If you haven't had a look yet: this work is relevant if you want to compare representations in language models, and it can be read here: arxiv.org/abs/2502.10201
Job alert: Postdoc in Machine Learning (basic research) at @istaresearch.bsky.social in the group of Christoph Lampert (@mlcv-at-ista.bsky.social)
Klosterneuburg
Apply asap
More info: cvml.ista.ac.at/Postdoc-ML.h...
Open PhD positions in Denmark! daracademy.dk/fellowship/f...
If you want to apply to work with me and Johannes Bjerva at @aau.dk Copenhagen, I'll be at @ic2s2.bsky.social this week and @aclmeeting.bsky.social next week! DM me if you'd like to meet :)
EurIPS is coming! Mark your calendar for Dec. 2-7, 2025 in Copenhagen.
EurIPS is a community-organized conference where you can present accepted NeurIPS 2025 papers. It is endorsed by @neuripsconf.bsky.social and @nordicair.bsky.social and co-developed by @ellis.eu.
eurips.cc
1/
New paper at #ICML2025!
Identifying Latent Metric Structures in Deep Latent Variable Models
We solve part of the identifiability puzzle in generative models using geometry.