@akshitab.bsky.social
Research Engineer at Ai2 https://akshitab.github.io/
Announcing Olmo 3, a leading fully open LM suite built for reasoning, chat, & tool use, and an open model flow: not just the final weights, but the entire training journey.
Best fully open 32B reasoning model & best 32B base model. 🧵
Such an interesting paper!
31.10.2025 18:41

The Cancer AI Alliance (CAIA) is already prototyping Asta DataVoyager in a federated, multi-institution setup for cancer studies, keeping clinical data local and secure.
Read more about CAIA here: buff.ly/ACpxLNT
Introducing FlexOlmo, a new paradigm for language model training that enables the co-development of AI through data collaboration. 🧵
09.07.2025 16:02

Announcing OLMo 2 32B: the first fully open model to beat GPT-3.5 & GPT-4o mini on a suite of popular, multi-skill benchmarks.
Comparable to the best open-weight models, but at a fraction of the training compute. When you have a good recipe, ✨ magical things happen when you scale it up!
I caught myself wanting to respond similarly to Claude and then told myself that it would be wasteful inference. But now I also mentally thank it each time, because what if I lose that instinct with humans... I'm already impatient with smart speakers.
12.02.2025 20:00

They made me do video, but for a good reason!
We are launching an iOS app: it runs OLMoE locally 📱 We're gonna see more on-device AI in 2025, and wanted to offer a simple way to prototype with it
App: apps.apple.com/us/app/ai2-o...
Code: github.com/allenai/OLMo...
Blog: allenai.org/blog/olmoe-app
kicking off 2025 with our OLMo 2 tech report while payin homage to the sequelest of sequels 🫡
2 OLMo 2 Furious 🔥 is everythin we learned since OLMo 1, with deep dives into:
• stable pretrain recipe
• lr anneal 🤝 data curricula 🤝 soups (souping sketch below)
• tulu post-train recipe
• compute infra setup
🧵
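Since the list above calls out soups: here is a minimal, generic sketch of checkpoint souping, i.e. uniformly averaging the weights of several checkpoints that share an architecture. This is not the OLMo training code; PyTorch is assumed and the checkpoint paths are hypothetical.

```python
# Minimal sketch of checkpoint "souping": uniform weight averaging over checkpoints
# that share an architecture. Generic illustration; the paths below are hypothetical.
import torch

def soup_checkpoints(paths):
    """Average the parameters of several saved state dicts into one state dict."""
    soup, count = None, 0
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if soup is None:
            soup = {name: tensor.clone().float() for name, tensor in state.items()}
        else:
            for name, tensor in state.items():
                soup[name] += tensor.float()
        count += 1
    return {name: tensor / count for name, tensor in soup.items()}

# Hypothetical usage: average three anneal-phase checkpoints into a single model.
# souped = soup_checkpoints(["anneal_a.pt", "anneal_b.pt", "anneal_c.pt"])
# torch.save(souped, "souped.pt")
```

Uniform averaging is the simplest variant; weighted or greedy souping (only keeping a checkpoint in the soup if it improves a validation metric) follows the same pattern.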
Want to predict the task performance of LMs before pretraining them?
We develop task scaling laws and model ladders, which predict the accuracy of OLMo 2 7B & 13B models on individual tasks to within 2 points of absolute error. The cost is 1% of the compute used to pretrain them.
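For intuition, here is a hedged sketch of how such a two-step prediction can be set up (my own simplification, not the paper's code): fit a saturating power law from training compute to task loss on small "ladder" runs, fit a sigmoid from task loss to accuracy, then chain the two to extrapolate to a larger budget. The functional forms and all toy numbers below are illustrative assumptions.

```python
# Hedged sketch of a two-step task scaling law: compute -> task loss -> accuracy.
# Functional forms and all toy numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

# Toy "ladder" measurements; compute is expressed relative to 1e20 training FLOPs.
rel_compute = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
task_loss = np.array([1.32, 1.22, 1.12, 1.05, 0.99, 0.93])
accuracy = np.array([0.40, 0.45, 0.52, 0.57, 0.61, 0.66])

# Step 1: saturating power law from compute to task loss, L(C) = a * C^-b + e.
def loss_from_compute(c, a, b, e):
    return a * np.power(c, -b) + e

loss_params, _ = curve_fit(loss_from_compute, rel_compute, task_loss, p0=[1.0, 0.2, 0.5])

# Step 2: sigmoid from task loss to accuracy, bounded by chance (0.25) and 1.0.
def acc_from_loss(l, k, l0):
    return 0.25 + 0.75 / (1.0 + np.exp(k * (l - l0)))

acc_params, _ = curve_fit(acc_from_loss, task_loss, accuracy, p0=[4.0, 1.0])

# Chain the two fits to predict accuracy at a larger, not-yet-trained budget.
target = 100.0  # i.e. 1e22 FLOPs on this toy scale
predicted = acc_from_loss(loss_from_compute(target, *loss_params), *acc_params)
print(f"predicted accuracy at {target * 1e20:.0e} FLOPs: {predicted:.3f}")
```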
The OLMo 2 models sit at the Pareto frontier of training FLOPs vs model average performance.
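To make the Pareto-frontier framing concrete, a small generic sketch: a model sits on the frontier if no other model matches or beats its average score using no more training FLOPs. The model names and numbers are made up for illustration.

```python
# Generic sketch: which models sit on the Pareto frontier of training FLOPs
# (lower is better) vs. average benchmark score (higher is better)?
# All names and numbers are made up for illustration.
models = {
    "model_a": (4.0e23, 61.2),  # (training FLOPs, average score)
    "model_b": (9.0e23, 63.0),
    "model_c": (2.5e23, 58.5),
    "model_d": (8.0e23, 60.1),
}

def pareto_frontier(points):
    """Keep every model that no other model dominates (cheaper-or-equal AND at-least-as-good)."""
    frontier = []
    for name, (flops, score) in points.items():
        dominated = any(
            o_flops <= flops and o_score >= score and (o_flops, o_score) != (flops, score)
            for o_name, (o_flops, o_score) in points.items()
            if o_name != name
        )
        if not dominated:
            frontier.append(name)
    return sorted(frontier, key=lambda n: points[n][0])

print(pareto_frontier(models))  # ['model_c', 'model_a', 'model_b']; model_d is dominated by model_a
```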
Meet OLMo 2, the best fully open language model to date, including a family of 7B and 13B models trained on up to 5T tokens. OLMo 2 outperforms other fully open models and competes with open-weight models like Llama 3.1 8B. As always, we released our data, code, recipes and more.
26.11.2024 20:51

I use GoodNotes
26.11.2024 18:53

🙋‍♀️
22.11.2024 01:24

release day release day 🥳 OLMo 1B + 7B out today and 65B soon...
OLMo accelerates the study of LMs. We release *everything*, from our toolkit for creating data (Dolma) to training/inference code
blog blog.allenai.org/olmo-open-la...
olmo paper allenai.org/olmo/olmo-pa...
dolma paper allenai.org/olmo/dolma-p...
09.01.2024 03:42

Perplexity macro-averaged over the domains within each of the 18 top-level data sources in Paloma, using baselines with pretraining controls including decontamination. Evaluating on one monolithic corpus, such as C4, does not tell the complete story of model fit. Paloma lets us see when trends differ from one distribution of language to another. For instance, the 3 baselines trained on only Common Crawl data (C4, mC4-en, Falcon-RefinedWeb) exhibit high perplexity, sometimes with non-monotonic scaling over tokens seen, on specific evaluation sources such as The Pile, Dolma, and Dolma-100-Programming-Languages.
LMs are used to process text from many topics, styles, dialects, etc., but how well do they do?
Evaluating perplexity on just one corpus like C4 doesn't tell the whole story
We introduce Paloma, a benchmark of 585 domains from NY Times to r/depression on Reddit.
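As a concrete illustration of the macro-averaged perplexity described in the figure text above: compute perplexity separately for each domain, then take an unweighted mean over the domains within a source, so small domains are not drowned out by large ones. A minimal sketch using Hugging Face transformers; the model name, truncation length, and the (source, domain, text) input format are my assumptions, not Paloma's actual evaluation code.

```python
# Sketch of macro-averaged perplexity: per-domain perplexity first, then an unweighted
# mean over domains within each source. Model name and input format are assumptions.
import math
from collections import defaultdict

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "allenai/OLMo-2-1124-7B"  # any causal LM on the Hub would do here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def doc_nll(text, max_length=1024):
    """Total negative log-likelihood and target-token count for one document."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean NLL over shifted targets; scale back up to a total.
    n_targets = enc["input_ids"].shape[1] - 1
    return out.loss.item() * n_targets, n_targets

def macro_perplexity(docs):
    """docs: iterable of (source, domain, text). Returns {source: macro-averaged perplexity}."""
    nll, toks = defaultdict(float), defaultdict(int)
    for source, domain, text in docs:
        d_nll, d_toks = doc_nll(text)
        nll[(source, domain)] += d_nll
        toks[(source, domain)] += d_toks
    # Perplexity per domain, then an unweighted (macro) mean over domains per source.
    per_domain = {key: math.exp(nll[key] / toks[key]) for key in nll}
    by_source = defaultdict(list)
    for (source, _domain), ppl in per_domain.items():
        by_source[source].append(ppl)
    return {source: sum(ppls) / len(ppls) for source, ppls in by_source.items()}
```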