@ianmagnusson.bsky.social
Science of language models @uwnlp.bsky.social and @ai2.bsky.social with @PangWeiKoh and @nlpnoah.bsky.social. https://ianmagnusson.github.io
Thanks Lucy!! ✨
15.04.2025 22:04
And amazing colleagues @davidheineman.com, Jena Hwang, @soldaini.net, @akshitab.bsky.social, @liujch1998.bsky.social, @mechanicaldirk.bsky.social, @oyvind-t.bsky.social, @nlpnoah.bsky.social, Pang Wei Koh, @jessedodge.bsky.social. Wouldn't have been possible without all of them.
15.04.2025 19:36
I'm so grateful for all the hard work and good cheer of my co-first authors @taidnguyen.bsky.social and @benbogin.bsky.social ... 🧵
15.04.2025 19:36
Science relies on shared artifacts collected for the common good.
So we asked: what's missing in open language modeling?
DataDecide charts the cosmos of pretraining, across scales and corpora, at a resolution beyond any public suite of models that has come before.
Today we're unveiling OLMoTrace, a tool that enables everyone to understand the outputs of LLMs by connecting them back to their training data.
We do this at unprecedented scale and in real time: finding matching text between model outputs and 4 trillion training tokens within seconds. ✨
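For intuition, here is a minimal sketch of the core idea of verbatim span matching, assuming a tiny in-memory corpus, whitespace tokenization, and a hypothetical function name. OLMoTrace itself searches an index over trillions of tokens; this is purely illustrative, not its implementation.

```python
# Toy illustration of finding maximal verbatim matches between a model
# output and a training corpus. Naive substring search on a tiny string;
# the real system uses an index over the full training data.

def maximal_matching_spans(output: str, corpus: str, min_words: int = 4):
    """Return maximal word spans of `output` that appear verbatim in `corpus`."""
    words = output.split()
    spans = []
    i = 0
    while i < len(words):
        # Greedily extend the span starting at word i while it still
        # occurs verbatim in the corpus.
        best_end = None
        end = i + min_words
        while end <= len(words) and " ".join(words[i:end]) in corpus:
            best_end = end
            end += 1
        if best_end is not None:
            spans.append(" ".join(words[i:best_end]))
            i = best_end  # skip past the matched span
        else:
            i += 1
    return spans

corpus = "the quick brown fox jumps over the lazy dog near the river bank"
output = "I saw the quick brown fox jumps over a fence yesterday"
print(maximal_matching_spans(output, corpus))
# ['the quick brown fox jumps over']
```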
I too am on the job market!
I'm searching for faculty positions/postdocs in multilingual/multicultural NLP, vision+language models, and eval for genAI!
I'll be at #NeurIPS2024 presenting our work on meta-evaluation for text-to-image faithfulness! Let's chat there!
Papers in 🧵, see more: saxon.me
the science of LMs should be fully open ✨
today @akshitab.bsky.social @natolambert.bsky.social and I are giving our #neurips2024 tutorial on language model development.
everything from data to training to adaptation. published or not, no secrets
tues, 12/10, 9:30am PT
neurips.cc/virtual/2024...
Excited to present MediQ at #NeurIPS !
Stop by my poster: East Exhibit Hall A-C #4805
Thu, Dec 12 | 11am–2pm
tinyurl.com/mediq2024
Love to chat about anything: reasoning, synthetic data, multi-agent interaction, multilingual NLP! Message me if you want to chat!
Want to predict the task performance of LMs before pretraining them?
We develop task scaling laws and model ladders, which predict the accuracy of OLMo 2 7B and 13B models on individual tasks to within 2 points of absolute error. The cost is 1% of the compute used to pretrain them.
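As a rough illustration of what a ladder-style prediction looks like, here is a sketch that fits a saturating curve to made-up small-model accuracies and extrapolates to a larger compute budget. The functional form, numbers, and names below are assumptions for illustration, not the parameterization used in the paper.

```python
# Minimal sketch of the general "model ladder" idea: fit a curve to task
# accuracy measured on small models, then extrapolate to a larger target
# model's compute budget. Data points here are invented.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ladder results: (training FLOPs, task accuracy) for small models.
flops = np.array([1e19, 3e19, 1e20, 3e20, 1e21])
acc   = np.array([0.31, 0.36, 0.43, 0.50, 0.57])

def sigmoid_in_log_compute(c, a, b, x0, k):
    """Accuracy as a saturating function of log-compute (one common functional form)."""
    return a + b / (1.0 + np.exp(-k * (np.log10(c) - x0)))

params, _ = curve_fit(sigmoid_in_log_compute, flops, acc,
                      p0=[0.25, 0.7, 21.0, 1.0], maxfev=20000)

target_flops = 1e23  # e.g. the budget of a much larger pretraining run
predicted = sigmoid_in_log_compute(target_flops, *params)
print(f"predicted accuracy at {target_flops:.0e} FLOPs: {predicted:.3f}")
```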
Excited to be at #NeurIPS next week in 🇨🇦! Please reach out if you want to chat about LM post-training (Tülu!), data curation, or anything else :)
I'll be around all week, with two papers you should go check out (see image or next tweet):
Touching down in Vancouver for #NeurIPS2024!
I'll be presenting our "Consent in Crisis" work on the 11th: arxiv.org/abs/2407.14933
Reach out to catch up or chat about:
- Training data / methods
- AI uses & impacts
- Multilingual scaling
And also check out our updated paper arxiv.org/abs/2312.10523
10.12.2024 04:00
Drop by our poster presentation Friday (12/13) at 4:30-7:30pm neurips.cc/virtual/2024...
10.12.2024 03:58
Come chat with me at #NeurIPS2024 and learn how to use Paloma to evaluate perplexity over hundreds of domains! ✨We have stickers too✨
10.12.2024 03:54
Building or customizing your own LLM? You'll want to curate training data for it, but how do you know what makes the data good?
You can try out recipes and iterate on ✨vibes✨, but we can't actually test all possible combos of tweaks... right??
WRONG! arxiv.org/abs/2410.15661 (1/n) 🧵
Collaboration with @akshitab.bsky.social, @valentinhofmann.bsky.social, @soldaini.net, @ananyahjha93.bsky.social, Oyvind Tafjord, Dustin Schwenk, Pete Walsh, @yanai.bsky.social, @kylelo.bsky.social, Dirk Groeneveld, Iz Beltagy, Hanna Hajishirzi, Noah Smith, Kyle Richardson, and Jesse Dodge
20.12.2023 20:41
We invite submissions at github.com/allenai/ai2-.... Submissions can opt in to controls, or mark limitations to comparability. More than being a one-dimensional leaderboard, Paloma orchestrates fine-grained results for a greater density of comparisons across the research community.
20.12.2023 20:33
Further decomposing perplexity, we find that some vocabulary strings get worse as models scale (see examples).
Again, not always bad, but Paloma reports the average loss of each vocabulary string, surfacing strings that behave differently in some domains.
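A minimal sketch of the bookkeeping this implies, with made-up tokens and losses: aggregate per-token losses by each token's surface string, so every vocabulary string gets an average loss that can be compared across domains or model scales. The function name and inputs are hypothetical.

```python
# Per-vocabulary-string average loss from token-level losses.
from collections import defaultdict

def average_loss_per_string(token_strings, token_losses):
    """Mean loss for each distinct vocabulary string observed in the eval data."""
    totals, counts = defaultdict(float), defaultdict(int)
    for tok, loss in zip(token_strings, token_losses):
        totals[tok] += loss
        counts[tok] += 1
    return {tok: totals[tok] / counts[tok] for tok in totals}

tokens = ["the", "llama", "the", "perplexity", "llama"]
losses = [0.9, 4.2, 1.1, 6.5, 3.8]
print(average_loss_per_string(tokens, losses))
# {'the': 1.0, 'llama': 4.0, 'perplexity': 6.5}
```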
We also show that performance improves in almost all domains as models are scaled, but domains improve unequally.
Differences in improvement, such as these examples, can indicate divergence, stagnation, or saturation: not all bad, but worth investigating!
We pretrain six 1B baselines on popular corpora.
With these we find that Common-Crawl-only pretraining has an inconsistent fit to many domains:
1. C4 and mC4 baselines have erratically worse fit than the median model
2. C4, mC4, and Falcon baselines sometimes show non-monotonic perplexity in Fig 1
Along with the datasets we curate, we build eval corpora from held-out Dolma data that sample:
- top 100 subreddits
- top 100 programming languages
Different research may require other domains, but Paloma enables research on 100s of domains from existing metadata (a rough sampling sketch below).
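Here is a rough sketch of this kind of metadata-stratified sampling, assuming documents are stored as dicts with a metadata field like "subreddit". Field names, parameters, and the corpus format are assumptions for illustration, not Paloma's actual pipeline.

```python
# Build domain-stratified eval corpora: keep the 100 most frequent values of
# a metadata field and draw a fixed sample of documents from each.
import random
from collections import Counter, defaultdict

def stratified_eval_corpus(docs, field="subreddit", top_k=100, per_domain=50, seed=0):
    """docs: iterable of dicts like {"text": ..., "subreddit": ...}."""
    rng = random.Random(seed)
    by_domain = defaultdict(list)
    for doc in docs:
        by_domain[doc[field]].append(doc)
    # Keep only the top_k most common domains by document count.
    counts = Counter({d: len(v) for d, v in by_domain.items()})
    top_domains = [d for d, _ in counts.most_common(top_k)]
    return {d: rng.sample(by_domain[d], min(per_domain, len(by_domain[d])))
            for d in top_domains}
```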
We introduce guidelines and implement controls for LM experiments:
1. Remove contaminated pretraining data (see the sketch after this list)
2. Fix train order
3. Subsample eval data based on metric variance
4. Fix the vocabulary unless you study changing it
5. Standardize eval format
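As a rough illustration of control 1, here is a sketch of n-gram-overlap decontamination: drop any pretraining paragraph that shares a long n-gram with the evaluation data. The n-gram length and matching granularity are arbitrary choices for the sketch, not Paloma's exact settings.

```python
# Crude n-gram-overlap decontamination of pretraining data against eval data.

def ngrams(text, n=13):
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def decontaminate(pretrain_paragraphs, eval_texts, n=13):
    """Return pretraining paragraphs with no n-gram overlap against eval data."""
    eval_ngrams = set()
    for text in eval_texts:
        eval_ngrams |= ngrams(text, n)
    return [p for p in pretrain_paragraphs if ngrams(p, n).isdisjoint(eval_ngrams)]
```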
Paloma benchmark results are organized by comparability of:
- controls like benchmark decontamination
- measures of cost (parameter and training token count)
Find out more:
- arXiv (arxiv.org/pdf/2312.105...)
- 🤗 data and models (huggingface.co/collections/...)
Perplexity macro-averaged over the domains within each of the 18 top-level data sources in Paloma, using baselines with pretraining controls including decontamination. Evaluating on one monolithic corpus, such as C4, does not tell the complete story of model fit. Paloma lets us see when trends differ from one distribution of language to another. For instance, the 3 baselines trained on only Common Crawl data (C4, mC4-en, Falcon-RefinedWeb) exhibit high perplexity, sometimes with non-monotonic scaling over tokens seen, on specific evaluation sources such as The Pile, Dolma, and Dolma-100-Programming-Languages.
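A minimal sketch of one way to compute such a macro average, assuming per-domain lists of token-level negative log-likelihoods: take each domain's perplexity as the exponential of its mean token loss, then average those per-domain perplexities with equal weight so small domains count as much as large ones. The data format and values here are hypothetical.

```python
# Macro-averaged perplexity over domains within one data source.
import math

def domain_perplexity(token_losses):
    return math.exp(sum(token_losses) / len(token_losses))

def macro_avg_perplexity(losses_by_domain):
    """losses_by_domain: {domain_name: [per-token NLL, ...], ...}"""
    ppls = [domain_perplexity(losses) for losses in losses_by_domain.values()]
    return sum(ppls) / len(ppls)

source = {
    "r/depression": [3.1, 2.8, 3.4],
    "nytimes.com":  [2.2, 2.0, 2.4],
}
print(round(macro_avg_perplexity(source), 2))  # ≈ 15.61
```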
LMs are used to process text from many topics, styles, dialects, etc., but how well do they do?
Evaluating perplexity on just one corpus like C4 doesn't tell the whole story.
We introduce Paloma, a benchmark of 585 domains from NY Times to r/depression on Reddit.