End-to-End Fine-Tuning - fairseq2 Documentation
Big news for LLM researchers! #fairseq2 now has native support in #vLLM. Deploy your fine-tuned language models with vLLM in just one command for lightning-fast performance. Ready to accelerate your research the way FAIR does? Check this out: facebookresearch.github.io/fairseq2/sta...
19.02.2025 09:19
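To make the "one command" deployment concrete, here is a minimal sketch using vLLM's public Python API. The checkpoint path and prompt are placeholders, and nothing in this snippet is fairseq2-specific; it assumes the fine-tuned model has been exported to a vLLM-compatible (Hugging Face-style) checkpoint.

```python
# Minimal sketch of running a fine-tuned model with vLLM's Python API.
# The model path below is hypothetical; any vLLM-compatible checkpoint works.
from vllm import LLM, SamplingParams

llm = LLM(model="path/to/finetuned-model")  # placeholder checkpoint path
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain what fairseq2 is in one sentence."], params)
for output in outputs:
    print(output.outputs[0].text)
```

The same checkpoint can also be exposed as an OpenAI-compatible endpoint with vLLM's single-command server, `vllm serve <path>`, which is likely what the "one command" in the post refers to.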
A gallery of open-source projects and papers powered by #fairseq2!
Seamless Communication and Large Concept Models are two vivid examples that showcase the potential of what we are building.
More exciting FAIR research built on fairseq2 is on the way!
14.02.2025 10:18
Excited to announce our first "true" release of fairseq2! In v0.4, our main focus has been language model fine-tuning and preference optimization at scales up to 70B parameters. These recipes are already widely used by many FAIR researchers and can easily be extended with new loss algorithms.
12.02.2025 12:57
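As an illustration of the kind of loss algorithm a preference-optimization recipe can be extended with, here is a generic DPO-style loss in plain PyTorch. This is a textbook sketch of Direct Preference Optimization, not fairseq2's actual recipe API; all function and parameter names are placeholders.

```python
# Generic DPO-style preference-optimization loss in plain PyTorch.
# A textbook sketch, not fairseq2's recipe API; names are placeholders.
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # summed log-probs of chosen responses
    policy_rejected_logps: torch.Tensor,  # summed log-probs of rejected responses
    ref_chosen_logps: torch.Tensor,       # same, under the frozen reference model
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,                    # strength of the implicit KL penalty
) -> torch.Tensor:
    # Log-ratios of policy vs. reference for each response.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Push the chosen log-ratio above the rejected one.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```

Swapping in a different algorithm (e.g. a hinge-style or length-normalized variant) amounts to changing only the final line, which is the sense in which such recipes are easy to extend with new losses.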