Documentation: Stable, Nightly | Install: Linux, macOS, Windows, From Source | Contribute: Guidelines
fairseq2 is a sequence modeling toolkit that allows researchers to train custom models for content generation tasks.
Many FAIR teams utilize fairseq2 for a diverse set of projects, ranging from language model preference optimization to pretraining video diffusion models.
fairseq2 is a start-from-scratch project that can be considered a reboot of the original fairseq to provide a clean, modular API. Notably, it differs from its predecessor in its design philosophy, moving from a monolithic framework to an extensible, much less intrusive architecture allowing researchers to independently own their project code base.
As fairseq2 is a completely new project rather than an incremental update to the original fairseq, we intentionally avoided labeling it fairseq version 2, reflecting its distinct and separate identity.
- February 2025: Instruction finetuning and preference optimization recipes with support for DPO, CPO, SimPO, and ORPO. Supports tensor parallelism and 70B+ scales.
- First-party recipes for language model instruction finetuning and preference optimization
- Multi-GPU, multi-node training using DDP, FSDP, and tensor parallelism. Supports 70B+ models.
- Native support for vLLM along with built-in sampling and beam search sequence generators
- Extensible via the setuptools extension mechanism; register new models, optimizers, LR schedulers, and trainer units without forking or branching the library
- Modern PyTorch tooling; uses torch.compile, PyTorch FSDP, and other recent features
- Streaming-based, high throughput data pipeline API written in C++ with support for speech and (soon) video decoding
- Programmatic asset cards for version controlled access to models, datasets, and tokenizers
- Flexible but deterministic configuration based on the built-in structured configuration API
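As a rough illustration of the structured-configuration idea above, a recipe's options can be expressed as typed dataclasses, so every run is fully specified and reproducible. Note that the class and field names here are hypothetical, not fairseq2's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured, typed recipe configuration.
# fairseq2's real config classes differ; this only illustrates the idea
# of nesting typed sub-configs with deterministic defaults.
@dataclass
class OptimizerConfig:
    lr: float = 1e-4
    betas: tuple[float, float] = (0.9, 0.98)

@dataclass
class RecipeConfig:
    model: str = "llama3_8b"  # placeholder model name
    max_steps: int = 1000
    optimizer: OptimizerConfig = field(default_factory=OptimizerConfig)

# Overriding one field leaves every other default untouched.
config = RecipeConfig(max_steps=5000)
print(config.optimizer.lr)  # → 0.0001
```

Because every field is typed and has an explicit default, two runs built from the same config are guaranteed to see identical settings.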
Visit our documentation website to learn more about fairseq2.
As of today, the following models are available in fairseq2 for use in training and evaluation recipes:
- LLaMA 1 to 3.3
- Mistral 7B
- NLLB-200
- S2T Transformer + Conformer
- V-JEPA
- w2v-BERT
- wav2vec 2.0
- wav2vec 2.0 ASR
fairseq2 is also used by various external projects.
fairseq2 depends on libsndfile, which can be installed via the system package manager on most Linux distributions. For Ubuntu-based systems, run:
sudo apt install libsndfile1
Similarly, on Fedora, run:
sudo dnf install libsndfile
For other Linux distributions, please consult your distribution's documentation on how to install packages.
To install fairseq2 on Linux x86-64, run:
pip install fairseq2
This command will install a version of fairseq2 that is compatible with the version of PyTorch hosted on PyPI.
At this time, we do not offer a pre-built package for ARM-based systems such as Raspberry Pi or NVIDIA Jetson. Please refer to Install From Source to learn how to build and install fairseq2 on those systems.
Besides PyPI, fairseq2 also has pre-built packages available for different PyTorch and CUDA versions hosted on FAIR's package repository. The following matrix shows the supported combinations.
fairseq2 | PyTorch | Python | Variant* | Arch
---|---|---|---|---
HEAD | 2.6.0 | >=3.10, <=3.12 | cpu, cu118, cu124 | x86_64
HEAD | 2.5.0, 2.5.1 | >=3.10, <=3.12 | cpu, cu118, cu121, cu124 | x86_64
HEAD | 2.4.0, 2.4.1 | >=3.10, <=3.12 | cpu, cu118, cu121, cu124 | x86_64
0.4 | 2.6.0 | >=3.10, <=3.12 | cpu, cu118, cu124 | x86_64
0.4 | 2.5.0, 2.5.1 | >=3.10, <=3.12 | cpu, cu118, cu121, cu124 | x86_64
0.4 | 2.4.0, 2.4.1 | >=3.10, <=3.12 | cpu, cu118, cu121, cu124 | x86_64
* cuXYZ refers to CUDA XY.Z (e.g. cu118 means CUDA 11.8)
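The PyTorch version and variant together determine the package index URL. The following sketch builds that URL from the pattern used by the install commands in this section; the version strings are placeholders you would substitute with your own combination from the matrix above:

```python
# Build the extra-index URL for a given PyTorch version and variant.
# The URL pattern matches the install commands in this section; the
# values below are placeholders for your own combination.
pt_version = "2.6.0"  # installed PyTorch version
variant = "cu124"     # cpu, cu118, cu121, or cu124

url = f"https://fair.pkg.atmeta.com/fairseq2/whl/pt{pt_version}/{variant}"
print(url)  # → https://fair.pkg.atmeta.com/fairseq2/whl/pt2.6.0/cu124
```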
To install a specific combination, first follow the installation instructions on pytorch.org for the desired PyTorch version, and then use the following command (shown for PyTorch 2.6.0 and variant cu124):
pip install fairseq2 \
  --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/pt2.6.0/cu124
Warning
fairseq2 relies on the C++ API of PyTorch which has no API/ABI compatibility between releases. This means you have to install the fairseq2 variant that exactly matches your PyTorch version. Otherwise, you might experience issues like immediate process crashes or spurious segfaults. For the same reason, if you upgrade your PyTorch version, you must also upgrade your fairseq2 installation.
For Linux, we also host nightly builds on FAIR's package repository. The supported variants are identical to the ones listed in Variants above. Once you have installed the desired PyTorch version, you can use the following command to install the corresponding nightly package (shown for PyTorch 2.6.0 and variant cu124):
pip install fairseq2 \
  --pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/nightly/pt2.6.0/cu124
fairseq2 depends on libsndfile, which can be installed via Homebrew:
brew install libsndfile
To install fairseq2 on ARM64-based (i.e. Apple silicon) Mac computers, run:
pip install fairseq2
This command will install a version of fairseq2 that is compatible with the version of PyTorch hosted on PyPI.
At this time, we do not offer a pre-built package for Intel-based Mac computers. Please refer to Install From Source to learn how to build and install fairseq2 on Intel machines.
Besides PyPI, fairseq2 also has pre-built packages available for different PyTorch versions hosted on FAIR's package repository. The following matrix shows the supported combinations.
fairseq2 | PyTorch | Python | Arch
---|---|---|---
0.4 | 2.6.0 | >=3.10, <=3.12 | arm64
To install a specific combination, first follow the installation instructions on pytorch.org for the desired PyTorch version, and then use the following command (shown for PyTorch 2.6.0):
pip install fairseq2 \
  --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/pt2.6.0/cpu
Warning
fairseq2 relies on the C++ API of PyTorch which has no API/ABI compatibility between releases. This means you have to install the fairseq2 variant that exactly matches your PyTorch version. Otherwise, you might experience issues like immediate process crashes or spurious segfaults. For the same reason, if you upgrade your PyTorch version, you must also upgrade your fairseq2 installation.
For macOS, we also host nightly builds on FAIR's package repository. The supported variants are identical to the ones listed in Variants above. Once you have installed the desired PyTorch version, you can use the following command to install the corresponding nightly package (shown for PyTorch 2.6.0):
pip install fairseq2 \
  --pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/nightly/pt2.6.0/cpu
fairseq2 does not have native support for Windows and there are no plans to support it in the foreseeable future. However, you can use fairseq2 via the Windows Subsystem for Linux (a.k.a. WSL) along with full CUDA support introduced in WSL 2. Please follow the instructions in the Installing on Linux section for a WSL-based installation.
See here.
We always welcome contributions to fairseq2! Please refer to Contribution Guidelines to learn how to format, test, and submit your work.
If you use fairseq2 in your research and wish to refer to it, please use the following BibTeX entry.
@software{balioglu2023fairseq2,
author = {Can Balioglu and Martin Gleize and Artyom Kozhevnikov and Ilia Kulikov and Tuan Tran and Julien Yao},
title = {fairseq2},
url = {http://github.com/facebookresearch/fairseq2},
year = {2023},
}
This project is MIT licensed, as found in the LICENSE file.