[4/4] Collaborate efficiently with reproducible experiment setups using fairseq2. Identify root causes swiftly and share lessons learned with the community. Create your own benchmarks and contribute!
17.06.2025 11:01
[3/4] Beyond TensorBoard and WandB, fairseq2 supports the PyTorch profiler (set trainer.profile and common.profilers.torch.enabled=True) to inspect potential infra issues. Dive deep into your training runs with various profilers and metric recorders.
17.06.2025 11:01
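For readers curious what that option hooks into: the post says fairseq2's profiling support is built on the PyTorch profiler. Below is a minimal, self-contained sketch of the underlying torch.profiler pattern; the train_step function, step counts, and trace directory are placeholders for illustration, not fairseq2 APIs.

```python
import torch
from torch.profiler import ProfilerActivity, profile, schedule, tensorboard_trace_handler

# Placeholder training step; in fairseq2 the trainer drives this loop.
def train_step() -> None:
    x = torch.randn(512, 512, requires_grad=True)
    (x @ x).sum().backward()

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)

# Profile a handful of steps and write a trace that TensorBoard can display.
with profile(
    activities=activities,
    schedule=schedule(wait=1, warmup=1, active=3),
    on_trace_ready=tensorboard_trace_handler("./tb_traces"),
) as prof:
    for _ in range(5):
        train_step()
        prof.step()
```

The resulting trace can then be inspected with TensorBoard's profiler plugin to hunt down infra bottlenecks.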
[2/4] For example, the tokens/s metric makes it easy to compute Model FLOP Utilization (MFU), a great measure of resource-utilization efficiency. We achieve up to 48% MFU on 8 GPUs and maintain 37.6% across 4 nodes (32 GPUs). Experience effective and efficient distributed training!
17.06.2025 11:01
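To make the tokens/s-to-MFU link concrete, here is a small generic helper using the standard ~6 x parameter-count FLOPs-per-token approximation for dense decoder-only transformers. The model size, throughput, and per-GPU peak FLOPs in the example are illustrative placeholders, not numbers taken from the post.

```python
def model_flop_utilization(
    tokens_per_sec: float,
    num_params: float,
    num_gpus: int,
    peak_flops_per_gpu: float,
) -> float:
    """Rough MFU estimate: achieved training FLOPs / theoretical peak.

    Uses the common ~6 * num_params FLOPs-per-token approximation for a
    dense decoder-only transformer (forward + backward pass).
    """
    achieved_flops_per_sec = tokens_per_sec * 6 * num_params
    peak_flops_per_sec = num_gpus * peak_flops_per_gpu
    return achieved_flops_per_sec / peak_flops_per_sec


# Illustrative placeholder numbers: an 8B-parameter model training at
# 25,000 tokens/s on 8 GPUs rated at 312 TFLOPS (BF16) each.
print(f"MFU: {model_flop_utilization(25_000, 8e9, 8, 312e12):.1%}")
```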
[1/4] fairseq2: your go-to tool for reliable benchmarking and diagnosing infra issues! With native metric logging, monitor training performance in real time and ensure great visibility. #AI #MachineLearning #fairseq2
17.06.2025 11:01
Transform your LLM post-training with fairseq2! We turn complex post-training into a breeze, so you can make fairseq2 your paper machine!
Feel free to check our tutorials out:
- SFT: facebookresearch.github.io/fairseq2/sta...
- DPO: facebookresearch.github.io/fairseq2/sta...
10.03.2025 14:41
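For anyone new to the preference-tuning side, the DPO recipe linked above optimizes the standard Direct Preference Optimization objective: a log-sigmoid margin between policy and reference log-probabilities of chosen versus rejected responses. A generic PyTorch sketch of that loss follows; it illustrates the objective only and is not fairseq2's implementation, and the tensor names are placeholders.

```python
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log p_theta(chosen | prompt)
    policy_rejected_logps: torch.Tensor,  # log p_theta(rejected | prompt)
    ref_chosen_logps: torch.Tensor,       # same quantities under the frozen reference model
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    """Direct Preference Optimization loss (Rafailov et al., 2023)."""
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```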
This project was made possible by the excellent open-source LLM training library @fairseq2.bsky.social; I highly recommend giving it a look! It made both SFT and DPO a piece of cake.
25.02.2025 21:58
Nothing explains it better than a vivid example:
19.02.2025 14:02
End-to-End Fine-Tuning - fairseq2 Documentation
Big news for LLM researchers! #fairseq2 now has native support in #vLLM. Deploy your fine-tuned language models with vLLM in just one command for lightning-fast performance. Ready to accelerate your research like at FAIR? Check this out: facebookresearch.github.io/fairseq2/sta...
19.02.2025 09:19
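Assuming the fine-tuned checkpoint has been exported in a format vLLM can load (the linked fairseq2 docs cover the details), it can be queried through vLLM's standard entry points. A minimal sketch using vLLM's offline Python API; the checkpoint path and prompt are placeholders:

```python
from vllm import LLM, SamplingParams

# Placeholder path to a fine-tuned checkpoint exported from fairseq2.
llm = LLM(model="/path/to/finetuned-checkpoint")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarize fairseq2 in one sentence."], params)
print(outputs[0].outputs[0].text)
```

For the one-command serving path mentioned in the post, the same checkpoint can also be exposed through vLLM's OpenAI-compatible server CLI (exact invocation depends on your vLLM version).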
A gallery of open-source projects and papers powered by #fairseq2!
Seamless Communication and Large Concept Models are two vivid examples that showcase the potential of what we are building.
More exciting FAIR research built on fairseq2 is on the way!
14.02.2025 10:18
Check out our docs for more details: facebookresearch.github.io/fairseq2/stabl…
12.02.2025 12:32
Hello world! We're thrilled to announce the v0.4 release of fairseq2, an open-source library from FAIR powering many projects at Meta. pip install fairseq2 and explore our trainer API, instruction & preference finetuning (up to 70B), and native vLLM integration.
12.02.2025 12:31
NLP PhD @ USC
Previously at AI2, Harvard
mattf1n.github.io
AI Research Scientist intern at FAIR Meta & PhD candidate at UPC Barcelona. Working on Multilingual and Multimodal Translation.
Postdoc at Meta FAIR, Comp Neuro PhD @McGill / Mila. Looking at the representation in brains and machines. https://dongyanl1n.github.io/
Research Scientist, Meta AI (FAIR).
PhD from McGill University + Mila
I study Multimodal LLMs, Vision-Language Alignment, LLM Interpretability & I'm passionate about ML Reproducibility (@reproml.org)
https://koustuvsinha.com/
Research Scientist at FAIR, Meta. My opinions are my own.
FAIR Researcher @metaai.bsky.social Previously Mila-Quebec, Microsoft Research, Adobe Research, IIT Roorkee
official Bluesky account (check username)
Bugs, feature requests, feedback: support@bsky.app