PyTorch now has native support for distributed tensors (DTensor), which might be a better way to do TP than Megatron's MPU.
Merge pull request #15 from huggingface/brrr-nanotron-sync (commit 2be81dc): Helping make brrr depend on nanotron