Endure: Mind, Body, and the Curiously Elastic Limits of Human Performance by Alex Hutchinson.
Good theory to read before learning anything from any Dummies-series book.
Interesting, I did not know that arXiv requires peer-reviewed papers.
arxiv.org/abs/2601.17036
Czech news picked it up fast: cc.cz/jaroslav-bec...
29.01.2026 16:41
Our Pulse News App, our first app, is live on iOS today.
We are also announcing our star investors.
See the announcement video for our plans.
If you are an LLM guru in research or product building, join us to make those plans a reality.
youtu.be/cA8-XJeoZAQ?...
Thank you to all coauthors for pushing the article to acceptance!
29.01.2026 15:42
In general, using LLM annotations for self-improvement is a great future direction.
29.01.2026 15:41
Annotation using LLMs is a pretty under-researched topic.
I would be interested in how it performs on coding tasks and whether it helps during CoT for coding tasks.
Well done @zdenekkasner.bsky.social et al!
LLMs as Span Annotators: A Comparative Study of LLMs and Humans is accepted to multilingual-multicultural-evaluation.github.io
See paper arxiv.org/abs/2504.08697
What do you plan to read on vacation?
29.01.2026 15:28
Dr.LLM: Adaptive Layer Routing for Efficient Inference
Avoiding computation is a pretty obvious way to be more efficient.
The question is how to maintain quality. MoE with a router is an efficient approach, but it suffers from exposure bias and fixed thresholds. A routing sketch follows the link.
arxiv.org/abs/2510.12773
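The paper's routers are trained offline, but the core mechanism is easy to picture. Here is a minimal, hypothetical sketch (class name, router shape, and the fixed threshold are mine, not the paper's code) of a transformer block wrapped with a tiny router that decides whether to execute or skip it:

```python
import torch
import torch.nn as nn

class RoutedBlock(nn.Module):
    """Hypothetical per-layer routing in the spirit of Dr.LLM:
    a tiny router looks at the block's input and decides whether
    the block is worth executing for each batch element."""
    def __init__(self, block: nn.Module, d_model: int, threshold: float = 0.5):
        super().__init__()
        self.block = block
        self.router = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, 1))
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Score the mean-pooled hidden state: (batch, seq, d) -> (batch, 1).
        score = torch.sigmoid(self.router(x.mean(dim=1)))
        if self.training:
            # Soft gating keeps the skip decision differentiable.
            return x + score.unsqueeze(1) * (self.block(x) - x)
        # Hard skip at inference saves the block's compute entirely.
        keep = score.squeeze(-1) > self.threshold
        out = x.clone()
        if keep.any():
            out[keep] = self.block(x[keep])
        return out
```

The fixed threshold here is exactly the kind of knob such routers get criticized for; it only serves to make the skip mechanics concrete.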
So what is it?
29.01.2026 14:00
What do you want to do with the deeper layers?
29.01.2026 12:23
Do you think the future is MoE or dense models?
29.01.2026 12:21
Qwen3 Embedder is amazing
but it is tricky to use.
Left padding confused a lot of early adopters.
However, it is clearly the future: reusing a pre-trained LLM for everything,
including vector search, where BERT-like models used to rule.
1/N
arxiv.org/abs/2506.05176
huggingface.co/Qwen/Qwen3-E...
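The padding trap in one minimal sketch (assuming the Qwen/Qwen3-Embedding-0.6B checkpoint with plain transformers; the model card additionally recommends instruction-prefixed queries and an appended EOS token, omitted here for brevity). With last-token pooling, right padding would put position -1 on a pad token, so the batch must be padded on the left:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Qwen/Qwen3-Embedding-0.6B"
# Left padding is the crucial bit: it keeps the real last token at position -1.
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
model = AutoModel.from_pretrained(model_id).eval()

texts = ["what is vector search?", "BERT-like models ruled retrieval for years"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq, dim)

emb = hidden[:, -1]                               # last-token pooling
emb = torch.nn.functional.normalize(emb, dim=-1)  # cosine-ready vectors
print(emb.shape)
```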
So would you like to prune them?
29.01.2026 12:13
The no-free-lunch idea is here again.
I saw it in TTS models, where guided attention worked very well. The idea is simple: for speech conversion, guide the attention to focus on a narrow context; it learns faster.
Looking forward to a deep dive on this one; I expect similarities.
arxiv.org/abs/2601.15165
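For reference, the guided attention loss from DC-TTS (Tachibana et al., 2017) is tiny: build a weight matrix that is near zero on the diagonal and grows off it, then penalize attention mass that lands far from the diagonal, since text and speech are roughly monotonically aligned. A minimal sketch (function name and shapes are mine):

```python
import torch

def guided_attention_loss(attn: torch.Tensor, g: float = 0.2) -> torch.Tensor:
    """attn: (batch, T_dec, T_enc) attention weights.
    W[n, t] = 1 - exp(-(t/T - n/N)^2 / (2 g^2)) is ~0 on the diagonal and
    ~1 far from it, so only off-diagonal attention gets penalized."""
    _, N, T = attn.shape
    n = torch.arange(N, device=attn.device).float() / max(N - 1, 1)
    t = torch.arange(T, device=attn.device).float() / max(T - 1, 1)
    W = 1.0 - torch.exp(-((t[None, :] - n[:, None]) ** 2) / (2 * g * g))
    return (attn * W[None]).mean()
```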
It can be used in RAG, so it makes a lot of sense for fetching relevant documents in clawbot.
29.01.2026 12:09
Last-token pooling compresses the embeddings too much.
However, it is an engineering marvel.
Tricks like the spherical averaging of the last checkpoints are awesome.
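Spherical averaging here means merging checkpoints with slerp rather than plain linear weight averaging, so the merged point stays on the arc between the two weight vectors instead of cutting through the inside of the sphere. A minimal two-checkpoint sketch (my simplification, interpolating each flattened tensor independently; not the exact recipe from the report):

```python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    w0n = w0 / (w0.norm() + eps)
    w1n = w1 / (w1.norm() + eps)
    omega = torch.acos((w0n * w1n).sum().clamp(-1 + eps, 1 - eps))
    so = torch.sin(omega).clamp(min=eps)
    return (torch.sin((1 - t) * omega) / so) * w0 + (torch.sin(t * omega) / so) * w1

# Usage sketch: merge two checkpoints halfway along the arc.
ckpt_a = {"w": torch.randn(4, 4)}
ckpt_b = {"w": torch.randn(4, 4)}
merged = {k: slerp(ckpt_a[k].flatten(), ckpt_b[k].flatten(), 0.5).reshape(ckpt_a[k].shape)
          for k in ckpt_a}
```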
The video is on LinkedIn. Check it out, it turned out fun: www.linkedin.com/posts/bottle...
07.11.2025 22:04
WE ARE LOOKING FOR YOU! (yes, you!)
NoCap test deadline: 11/11/2025
$3K / $2K / $1K prizes
Our new team members shared tips & tricks that helped them get through our test!
TRY IT NOW! github.com/BottleCapAI/...
The right words at the right time with the right style for the venue. Proud of our president 🇨🇿
youtu.be/d3wT84egi-g?...
Me: Scaling algorithms is just much more fun than burning electricity on more and more GPUs.
22.09.2025 13:41
Official BottleCapAI: we believe AI shouldn't cost tens of millions to train.
We are now opening a challenge for those who want to help with that!
$3K / $2K / $1K prizes
Deadline: 11/11/2025
1 GPU. Your ideas.
Join the NoCap Test:
github.com/BottleCapAI/...
Take part in the 3rd Terminology shared task @WMT!
This year:
- 5 language pairs: EN->{ES, RU, DE, ZH},
- 2 tracks: sentence-level and doc-level translation,
- authentic data from 2 domains: finance and IT!
www2.statmt.org/wmt25/termin...
Don't miss the opportunity - we only run it once every two years!
I just found an awesome channel on YouTube.
It has 77k subscribers with just 7 videos,
because the videos are just awesome!
I wish I could explain stuff that simply!
Does anybody know how long it takes to prepare such a video?
www.youtube.com/@algorithmic...
Thanks to all my awesome collaborators!
- Vilém @zouharvi.bsky.social
- Patrícia @patuchen.bsky.social
- Ivan @ivankartac.bsky.social
- Kristýna Onderková
- Ondřej P. @oplatek.bsky.social
- Dimitra @dimitrag.bsky.social
- Saad @saad.me.uk
- Ondřej D. @tuetschek.bsky.social
- Simone Balloccu
How do LLMs compare to human crowdworkers in annotating text spans?
And how can span annotation help us with evaluating texts?
Find out in our new paper: llm-span-annotators.github.io
arXiv: arxiv.org/abs/2504.08697
We've been making the media rounds!
@hajicjan.bsky.social talked about the new OpenEuroLLM project on Czech TV's Studio 6: www.ceskatelevize.cz/porady/10969...
@tuetschek.bsky.social discussed #LLMs on Czech Radio: radiozurnal.rozhlas.cz/proc-umela-i...
Congrats to Šárka Zikánová on her promotion to Associate Professor! Her research explores syntax, information structure, and discourse relations in Czech and beyond. She contributed to key linguistic corpora and now focuses on psycholinguistic studies of discourse. ufal.mff.cuni.cz/sarka-zikanova
25.02.2025 10:44