@benjaminwarner.dev.bsky.social
R&D at answer.ai
Reports of AI eating entry-level jobs are greatly exaggerated.
My guess is that current and near-future LLMs are more likely to increase the demand for programmers than to decrease it (Jevons Paradox).
There isn't a canonical version, but there are retrieval models from GTE and Nomic which might work for your task.
GTE: huggingface.co/Alibaba-NLP/...
Nomic: huggingface.co/nomic-ai/mod...
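If it helps, here is a minimal retrieval sketch using sentence-transformers. The model ID below is an assumed placeholder (the links above are truncated), and some of these checkpoints expect task prefixes such as "search_query: ", so check the model card of whichever one you pick.

from sentence_transformers import SentenceTransformer, util

# Assumed placeholder ID; substitute the GTE or Nomic checkpoint you choose.
model = SentenceTransformer("nomic-ai/modernbert-embed-base")

docs = [
    "ModernBERT is a long-context encoder released in December 2024.",
    "The Honda Civic is a compact car known for reliability.",
]
query = "which encoder handles long documents?"

# Encode, then rank documents by cosine similarity to the query.
doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)[0]
print(docs[int(scores.argmax())])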
For more details, including our simple training method, see Benjamin Clavié's Twitter announcement, our model, blog post, and paper.
Twitter: x.com/bclavie/stat...
Model: huggingface.co/answerdotai/...
Blog: www.answer.ai/posts/2025-0...
Paper: arxiv.org/abs/2502.03793
Can all encoders be instruction-tuned? Can we replicate ModernBERT's results with an older model like RoBERTa or a peer model like GTE-en-MLM?
No. And it's not close.
When we finetune ModernBERT-Large-Instruct on task-specific datasets, the generative MLM head is better than or nearly equal to standard classification heads.
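For intuition, here is a minimal sketch of what fine-tuning through the generative MLM head can look like: the label is verbalized as a token behind [MASK] and only that position contributes to the loss. This is an illustration of the general idea with a hypothetical sentiment example, not the exact recipe from the paper.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "answerdotai/ModernBERT-Large-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Hypothetical training example: the verbalized label fills the [MASK] slot.
prompt = "You will be given a review. Predict the sentiment. REVIEW: Great movie! SENTIMENT: [unused0] [MASK]"
label_id = tokenizer(" positive", add_special_tokens=False)["input_ids"][0]  # assumes a single-token label

inputs = tokenizer(prompt, return_tensors="pt")
labels = torch.full_like(inputs["input_ids"], -100)                 # ignore every position...
labels[inputs["input_ids"] == tokenizer.mask_token_id] = label_id   # ...except the [MASK] slot

loss = model(**inputs, labels=labels).loss  # standard MLM loss, restricted to the label token
loss.backward()  # plug into your usual optimizer or Trainer loop from here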
After instruction tuning on Flan, ModernBERT-Large-Instruct outperforms similarly sized LLMs on MMLU & MMLU-Pro, and achieves ~90 percent of Llama 3.2 1B's performance with ~65 percent fewer parameters.
With @bclavie.bsky.social and @ncoop57.bsky.social, we tried to answer two questions:
- Can an instruction-tuned ModernBERT zero-shot tasks using the MLM-head?
- Could we then fine-tune instruction-tuned ModernBERT to complete any task?
Detailed answers: arxiv.org/abs/2502.03793
from transformers import pipeline

model_name = "answerdotai/ModernBERT-Large-Instruct"
fill_mask = pipeline("fill-mask", model=model_name, tokenizer=model_name)

text = """You will be given a question and options. Select the right answer.
QUESTION: If (G, .) is a group such that (ab)^-1 = a^-1b^-1, for all a, b in G, then G is a/an
CHOICES:
- A: commutative semi group
- B: abelian group
- C: non-abelian group
- D: None of these
ANSWER: [unused0] [MASK]"""

results = fill_mask(text)
answer = results[0]["token_str"].strip()
print(f"Predicted answer: {answer}")  # Answer: B
One of the questions we debated while training ModernBERT was whether a modern, well-trained encoder could unlock zero-shot reasoning using only its generative head.
Spoilers: the answer is yes.
o3-mini is really good at writing internal documentation - feed it a codebase, get back a detailed explanation of how specific aspects of it work simonwillison.net/2025/Feb/5/o...
If you want to quickly catch up on all the open modeling things (DeepSeek, ModernBERT, etc.), this was a great overview, by @natolambert.bsky.social.
I somehow got into an argument last week with someone who was insisting that all models are industrial black boxes... and I wish I'd had this on hand.
You can find the models on Hugging Face here:
- gte-modernbert-base: huggingface.co/Alibaba-NLP/...
- gte-reranker-modernbert-base: huggingface.co/Alibaba-NLP/...
In addition to being the best retrieval model under 300M params on MTEB (without extra work), and top 10 under 1B, here's a fun tidbit from Alibaba's GTE ModernBERT model card:
gte-modernbert-base beats gte-qwen1.5-7b on LoCo long-context retrieval with nearly 7B fewer parameters.
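For anyone who wants to try the pair together, here's a rough retrieve-then-rerank sketch with sentence-transformers. The full repo IDs are assumed from the truncated links above, so double-check them against the model cards.

from sentence_transformers import CrossEncoder, SentenceTransformer, util

# Repo IDs assumed from the truncated links above; verify on Hugging Face.
retriever = SentenceTransformer("Alibaba-NLP/gte-modernbert-base")
reranker = CrossEncoder("Alibaba-NLP/gte-reranker-modernbert-base")

query = "what is ModernBERT?"
docs = [
    "ModernBERT is a drop-in replacement for BERT with long-context support.",
    "The Ferrari SF-23 is a Formula One car.",
    "BERT was released by Google in 2018.",
]

# Stage 1: dense retrieval, rank documents by cosine similarity.
scores = util.cos_sim(retriever.encode(query), retriever.encode(docs))[0]
candidates = [docs[int(i)] for i in scores.argsort(descending=True)[:2]]

# Stage 2: rerank the retrieved candidates with the cross-encoder.
rerank_scores = reranker.predict([(query, doc) for doc in candidates])
print(candidates[int(rerank_scores.argmax())])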
The newest extremely strong embedding model based on ModernBERT-base is out: `cde-small-v2`. Both faster and stronger than its predecessor, this one tops the MTEB leaderboard for its tiny size!
Details in 🧵
ModernBERT-embed-base is awesome because it lets you use ModernBERT-base for various tasks out of the box.
But the large variant of ModernBERT is also awesome...
So today, @lightonai.bsky.social is releasing ModernBERT-embed-large, the larger and more capable iteration of ModernBERT-embed!
What's ModernBERT? It's a drop-in replacement for existing BERT models, but smarter, faster, and supports longer context.
Check out our announcement post for more details: huggingface.co/blog/modernb...
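Since it goes through the standard Auto classes, swapping an existing BERT checkpoint for ModernBERT is usually just a model-name change. A minimal sketch (needs transformers v4.48.0 or later):

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# model_name = "bert-base-uncased"          # the old encoder
model_name = "answerdotai/ModernBERT-base"  # the drop-in replacement

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# The rest of the pipeline is unchanged, but inputs can now be up to 8192 tokens.
batch = tokenizer(["ModernBERT accepts much longer inputs than the original BERT."], return_tensors="pt")
logits = model(**batch).logits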
Transformers v4.48.0: ModernBERT, Aria, TimmWrapper, ColPali, Falcon3, Bamba, VitPose, DinoV2 w/ Registers, Emu3, Cohere v2, TextNet, DiffLlama, PixtralLarge, Moonshine
ModernBERT is officially released in Transformers v4.48.0. You no longer need to install from git to use it.
If you are plugging ModernBERT into an existing encoder finetuning pipeline, try increasing the learning rate. We've found that ModernBERT tends to prefer a higher LR than older models.
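As a rough illustration (the numbers are placeholders to sweep, not recommended values from the release): if your old encoder pipeline fine-tuned at ~2e-5, try a noticeably higher rate for ModernBERT.

from transformers import TrainingArguments

# Placeholder hyperparameters; sweep the learning rate for your own task.
args = TrainingArguments(
    output_dir="modernbert-finetune",
    learning_rate=8e-5,  # higher than the ~2e-5 typically used for older BERT-style models
    num_train_epochs=3,
    per_device_train_batch_size=32,
    warmup_ratio=0.1,
    weight_decay=0.01,
)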
*Actually, that's good compared to the 4090's PCIe 4 without NVLink
The good: 32GB
The bad: $2,000
The Ugly*: PCIe 5 without NVLink
Basically, a frontier model like OpenAI's O1 is like a Ferrari SF-23. It's an obvious triumph of engineering, designed to win races, and that's why we talk about it. But it takes a special pit crew just to change the tires and you can't buy one for yourself. In contrast, a BERT model is like a Honda Civic. It's also an engineering triumph, but more subtly, since it is engineered to be affordable, fuel-efficient, reliable, and extremely useful. And that's why they're absolutely everywhere.
Via @simonwillison.net's excellent blog, I found this great quote about AI models, from @benjaminwarner.dev et al. www.answer.ai/posts/2024-1...
It seems to me that AI will be most relevant in people's lives because the Honda Civic is ubiquitous, not so much because everyone is driving a Ferrari.
That didn't take long! Nomic AI has finetuned the new ModernBERT-base encoder model into a strong embedding model for search, classification, clustering and more!
Details in 🧵
ModernBERT is a "foundation model" so you'll either need to finetune it for entailment/NLI or wait for someone else to finetune it. I suspect it would be good at NLI once finetuned.
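To make that concrete, here is a minimal sketch of such an NLI finetune with the standard Trainer API. The dataset choice (MultiNLI) and hyperparameters are illustrative assumptions, not a recipe from the ModernBERT release.

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# MultiNLI: premise/hypothesis pairs labeled entailment, neutral, or contradiction.
dataset = load_dataset("nyu-mll/multi_nli")

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="modernbert-nli", learning_rate=5e-5,
                           num_train_epochs=1, per_device_train_batch_size=32),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation_matched"],
    tokenizer=tokenizer,
)
trainer.train()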
We evaluated ModernBERT on MLDR with ColBERT-style retrieval using that code. That process was smaller scale than a full ColBERT finetune, which would need additional contrastive training, likely multiple teacher models, etc., as detailed here by @bclavie.bsky.social: www.answer.ai/posts/2024-0...
Thanks. ModernBERT is a base model. It'll need additional contrastive pretraining to really shine as a retrieval model, but our early results in the paper look promising. Hopefully there will be multiple open-source retrieval-tuned models to choose from early next year, including ColBERT finetunes.
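For readers unfamiliar with that step: contrastive pretraining for retrieval pairs queries with positive passages and uses the other passages in the batch as negatives. A generic sentence-transformers sketch of the idea (toy data and hyperparameters are placeholders, not the recipe behind any of the models above):

from sentence_transformers import InputExample, SentenceTransformer, losses, models
from torch.utils.data import DataLoader

# Wrap the base encoder with mean pooling to get sentence embeddings.
word_embedding = models.Transformer("answerdotai/ModernBERT-base", max_seq_length=512)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word_embedding, pooling])

# (query, positive passage) pairs; other in-batch passages act as negatives.
train_examples = [
    InputExample(texts=["what is modernbert?", "ModernBERT is a long-context encoder."]),
    InputExample(texts=["reliable compact car", "The Honda Civic is a compact car."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# MultipleNegativesRankingLoss implements the in-batch negatives objective.
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)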
Thanks for the kind words. We tried to fit as much information as possible within our page limit, and the appendix is comprehensive.
As far as the name goes, all I'll say is be careful not to use an overly strong code name.
(early results in our paper)
Thanks. It'll need additional contrastive pretraining to really shine as a retrieval model, but our early results look promising. Hopefully there will be multiple open-source retrieval-tuned models to choose from early next year.
PS: BlueSky needs to make their really long account tags not count against the character limit.
I'm looking forward to seeing what you all will build with a modern encoder.
A big thanks to Iacopo Poli and @lightonai.bsky.social for sponsoring the compute to train ModernBERT, @bclavie.bsky.social for organizing the ModernBERT project, and to everyone who offered assistance and advice along the way. Also h/t to Johno Whitaker for the illustrations.
Thanks to my two co-leads, @nohtow.bsky.social and @bclavie.bsky.social, and the rest of our stacked author cast: @orionweller.bsky.social, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, @tomaarsen.com, @ncoop57.bsky.social, Griffin Adams, @howard.fm, and Iacopo Poli.