@coallaoh.bsky.social
Professor in Scalable Trustworthy AI @ University of Tübingen | Advisor at Parameter Lab & ResearchTrend.AI https://seongjoonoh.com | https://scalabletrustworthyai.github.io/ | https://researchtrend.ai/
Next:
- Principles for leadership
- Updated goals
These are lessons I've crystallised from reading, conversations, and life experiences. Take what resonates. Leave what doesn't.
Principles for relationships: github.com/coallaoh/Pri...
- Why criticism never works.
- How to handle disputes.
- The inner child in everyone, old and young, that runs much of our emotional life.
- And more.
Principles for learning: github.com/coallaoh/Pri...
- Why most real-world challenges require top-down learning that schools never teach.
- How to break impossible goals into daily tasks.
- The irony of AI making learning harder, not easier.
- And more.
🍎 Updated the Principles repository github.com/coallaoh/Pri... after a long break.
19.12.2025 05:56 — 👍 4 🔁 0 💬 1 📌 1
[Image: overall diagram about contextual privacy & LRMs]
🫗 An LLM's "private" reasoning may leak your sensitive data!
🎉 Excited to share our paper "Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers" was accepted at #EMNLP main!
1/2
Paper thumbnail.
🔎Does Conversational SEO actually work? Our new benchmark has an answer!
Excited to announce our new paper: C-SEO Bench: Does Conversational SEO Work?
🌐 RTAI: researchtrend.ai/papers/2506....
📄 Paper: arxiv.org/abs/2506.11097
💻 Code: github.com/parameterlab...
📊 Data: huggingface.co/datasets/par...
📅 March 28th is an exciting day for scientific networking! 🚀
Join our Connect sessions on VLM, FedML, and LLMAG—top researchers will present their latest findings!
🔗 Zoom Link: us06web.zoom.us/j/8409727334...
#ResearchTrend #AI #Networking #VLM #FedML #LLMAG
ResearchTrend.AI is hiring a data engineer - Python, Airflow, and Postgres skills are required (experience with LLMs would be awesome). Please apply to recruit@parameterlab.de or DM me!
14.02.2025 18:43 — 👍 4 🔁 0 💬 0 📌 0
Today's featured community is a unique one: Optimal Transport. It sits at the intersection of several machine learning subfields.
Follow the community to receive updates: researchtrend.ai/communities/OT
Audio LLM community is now available on ResearchTrend.AI!
researchtrend.ai/communities/...
Check out our cool video :)
AI in education has gained significant momentum since the beginning of the "LLM era" (2023).
Check out the papers at researchtrend.ai/communities/....
🚀 We launched the Graph Neural Networks (GNN) community on ResearchTrend.AI!
Explore papers: researchtrend.ai/communities/...
GNNs extend neural networks to graph data. The field boomed (2017-2020), plateaued (2020-21), and is now declining.
Is this success, hype, or limits? 🤔 Share your thoughts!
Paper: arxiv.org/abs/2409.16797
Code: github.com/AlexanderRub...
OpenReview: openreview.net/forum?id=BQE...
Thanks to Alex for his great efforts and work ethic, and to @damienteney.bsky.social and @lucascimeca.bsky.social for their continued help with this paper. We'll humbly address the criticisms to improve it further for future opportunities.
23.01.2025 22:21 — 👍 5 🔁 1 💬 1 📌 0
We were a bit unlucky with the reviewers - one voted for acceptance, while the other two remained silent during the discussion phase. What matters, though, is knowing this is solid work and the method works. That’s how we survive the review process.
- Outcome: Scaled ensemble diversification to ImageNet level, achieving improved OOD generalisation and detection.
23.01.2025 22:21 — 👍 0 🔁 0 💬 1 📌 0
Rejected at #ICLR2025:
Scalable Ensemble Diversification for OOD Generalisation and Detection
- Current approach: Ensemble diversification relies on a separate source of OOD samples for training models to diversify outputs.
- Ours: Use samples from the training set itself to diversify ensembles.
If you can't wait for the arXiv version, check out the ICLR forum: openreview.net/forum?id=ByC...
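A minimal numpy sketch of the core idea, using samples from the training batch itself rather than a separate OOD set. The toy linear members and the agreement penalty below are my illustration, not the paper's exact objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Two toy "ensemble members": random linear 3-class classifiers
# evaluated on the same training batch (no separate OOD data needed).
X = rng.normal(size=(32, 8))          # training samples
W1, W2 = rng.normal(size=(2, 8, 3))   # weights of the two members

p1, p2 = softmax(X @ W1), softmax(X @ W2)

# Agreement penalty: mean inner product of the predictive distributions.
# Minimising this term alongside the task loss pushes the members to
# disagree on training samples, diversifying the ensemble.
agreement = float(np.mean(np.sum(p1 * p2, axis=1)))
print(f"agreement penalty: {agreement:.4f}")
```

In practice this penalty would be added (with a weight) to the usual cross-entropy loss during training; the sketch only shows how the diversification signal is computed from training data alone.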
PS: This paper had 6 reviewers, unanimously voting for acceptance eventually (score 6+). Such luck is rare for me 😅 I'm glad that the hard work paid off, @auselis.bsky.social. Let's arXiv it!
Another #ICLR2025 paper:
Intermediate Layer Classifiers for OOD generalization
- Common approach: Use penultimate-layer features of pre-trained models for downstream tasks.
- Our recommendation: Explore lower-layer features. You'll likely find better layers for OOD generalisation.
Thanks also to the other co-authors: Alexander Rubinstein and Ehsan Abbasnejad.
paper: arxiv.org/abs/2403.07968
code: github.com/aktsonthalia...
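A minimal numpy sketch of the recommendation: probe every layer's features, not just the penultimate ones. The tiny random MLP and least-squares probe below are my stand-ins for a pre-trained backbone and a linear classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# A toy 3-layer MLP standing in for a frozen pre-trained backbone.
Ws = [rng.normal(size=(16, 32)),
      rng.normal(size=(32, 32)),
      rng.normal(size=(32, 32))]

def features(X):
    """Return the activations of every layer, not only the penultimate one."""
    acts, h = [], X
    for W in Ws:
        h = relu(h @ W)
        acts.append(h)
    return acts

# Fit a least-squares linear probe on each layer's features and compare
# the fit; with real data one would compare held-out OOD accuracy instead.
X = rng.normal(size=(64, 16))
y = rng.normal(size=(64, 1))
for i, F in enumerate(features(X)):
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    mse = float(np.mean((F @ w - y) ** 2))
    print(f"layer {i}: train MSE {mse:.3f}")
```

The point of the post is that the best layer for OOD generalisation is often not the last one, so sweeping the probe over layers, as above, is worth the extra loop.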
I.e., already at a reasonable width (no WideResNet needed), solutions form a star domain.
Side note: We developed a method for finding the "star model", a special solution connected to all other solutions. I couldn't resist naming it "NeuralStarLink" but fortunately the first author Ankit held me back :)
Finally accepted! #ICLR2025
Do Deep Neural Network Solutions Form a Star Domain?
- Known: as DNN width → ∞, solutions become convex modulo permutations of neurons.
- What's new: as width grows, solutions **first form a star domain** and only later a convex domain, modulo permutations.
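For readers unfamiliar with the term, a minimal sketch of the definition behind the claim (notation mine, not the paper's):

```latex
% A set S \subseteq \mathbb{R}^n is a star domain if some "star model"
% s^* sees every other solution along a straight segment inside S:
\exists\, s^{*} \in S \;\; \forall s \in S,\ \forall \lambda \in [0,1]:\quad
(1-\lambda)\, s^{*} + \lambda\, s \in S .
```

A convex set is the special case where *every* point can play the role of s^*, which is why a star domain is the weaker, earlier-emerging structure.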
✨ Coming soon: Email digests for the authors and organisations you follow. Stay tuned for more updates!
Let’s make 2025 a year of learning and staying connected! 🚀
3/3
📩 How to get started:
- New users: Sign up here: researchtrend.ai/auth/signup and select “I agree to receive personalised daily email digests featuring the latest arXiv papers.”
- Existing users: Update your preferences in your profile: researchtrend.ai/profile.
2/3
🎉 Happy New Year!
We’re excited to announce the launch of our email digest feature on ResearchTrend.AI! 🚀
You can now receive daily updates from the research communities you follow. Stay informed!
1/3
Here’s a podcast I did with Danish journalist Lone Frank during Folkemødet on Bornholm last summer. The opening blurb is in Danish, and the rest is in English. We talk about my move from the US to DK, and the Pioneer Centre for AI’s unique approach to research.
29.12.2024 08:47 — 👍 18 🔁 2 💬 0 📌 0
💡 Follow these fast-evolving domains and join their discussions on researchtrend.ai ! 🚀 5/5
29.12.2024 05:44 — 👍 2 🔁 0 💬 0 📌 0
📊 Language Models for Tabular Data (LMTD)
While deep learning has conquered vision and language, tabular data remains a challenge. Let's see what happens in 2025. LMTD is gaining traction as researchers push the boundaries to unlock its full potential.
researchtrend.ai/communities/... 4/5
🤖 Large-Language Model Agents (LLMAG)
2025 is shaping up to be a transformative year for LLM agents. The surge in interest reflects their growing role in automation, reasoning, and decision-making across domains.
researchtrend.ai/communities/... 3/5
🔍 Vision-Language Models (VLM)
CLIP models and their variants continue to shine! With widespread applications in multimodal AI, VLM remains one of the largest and most active research communities.
researchtrend.ai/communities/... 2/5