Llama3 conversion from Megatron DCP checkpoints to HF [NeMo 1.0] #11345
What does this PR do ?
Improves the docs and fixes the tool that converts a NeMo/Megatron DCP checkpoint to a HF checkpoint
Collection: NLP
Changelog
scripts/checkpoint_converters/convert_llama_nemo_to_hf.py
Usage
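A typical invocation of the converter might look like the following; the paths are placeholders and the flag names should be checked against the script's argument parser (`--help`) for your NeMo version:

```bash
# Hypothetical example; verify flags against the script's --help for your NeMo version.
python scripts/checkpoint_converters/convert_llama_nemo_to_hf.py \
    --input_name_or_path /path/to/llama3-8b.nemo \
    --output_path /path/to/pytorch_model.bin \
    --hf_input_path /path/to/llama3-8b-hf \
    --hf_output_path /path/to/hf_output_dir
```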
Hi!
There have been some discussions regarding NeMo/Megatron checkpoint conversions in both projects [1], [2], [3], [4]. I would like to share some insights on this issue, as I've been able to successfully convert checkpoints to HF.
First, my dependencies:
- fc7905d (I know it's a month old, but it's the version we are working with)
- bf74129
- 7d576ed (it's a bit old, but it's the one used in the NeMo/Dockerfile.ci image)
- nvcr.io/nvidia/pytorch:24.05-py3
Some comments after converting a NeMo/Megatron Llama3.1-8B checkpoint stored with the TORCH distributed backend:
The official documentation points to `examples/nlp/language_modeling/megatron_lm_ckpt_to_nemo.py`, but it doesn't work. Instead I could use a very similar script stored under the same folder, `examples/nlp/language_modeling/megatron_ckpt_to_nemo.py`. If I'm not wrong, the docs state "You can convert your GPT-style model checkpoints trained with Megatron-LM into the NeMo Framework", but when using NeMo with `model.dist_ckpt_format: torch_dist` we are indeed storing Megatron-LM checkpoints, right? So this step is essential to convert to `.nemo` before converting to HF.
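For reference, an invocation along these lines worked for me; the checkpoint name, paths, and parallelism sizes below are placeholders, and the exact flags should be checked against the script's argument parser for your NeMo version:

```bash
# Hypothetical example: convert a torch_dist (DCP) Megatron checkpoint to .nemo.
# Flag names follow the script's docstring but may differ across NeMo versions.
torchrun --nproc_per_node=1 \
    examples/nlp/language_modeling/megatron_ckpt_to_nemo.py \
    --checkpoint_folder /path/to/checkpoints \
    --checkpoint_name megatron_llama3_8b--step=1000-last \
    --nemo_file_path /path/to/llama3-8b.nemo \
    --model_type gpt \
    --tensor_model_parallel_size 1 \
    --pipeline_model_parallel_size 1 \
    --gpus_per_node 1
```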
I replaced the `model.state_dict()` calls with `model.model[0].state_dict()`, and the keys `[f'model. ...']` with `[f'...']` (i.e., dropped the leading `model.` prefix from the state-dict key lookups).
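A self-contained toy sketch of those two edits; the wrapper class and key names here are illustrative stand-ins, not the converter's actual objects:

```python
import torch

# Toy stand-in for the wrapped Megatron model; illustrative only, not the
# converter's real model class.
class Wrapper(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Megatron-style models keep their module(s) in a list under .model
        self.model = torch.nn.ModuleList([torch.nn.Linear(4, 4)])

model = Wrapper()

# 1) Take the state dict of the first wrapped module instead of the top-level one.
state_dict = model.model[0].state_dict()      # was: model.state_dict()

# 2) Build the lookup key without the leading "model." prefix.
key = 'weight'
param = state_dict[f'{key}']                  # was: state_dict[f'model.{key}']
print(tuple(param.shape))                     # (4, 4)
```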
I also had to set `model_config.sequence_parallel = 0`, as it raised an error if the model was trained with TP (we are also overriding TP = 1) in the `hparams.yaml` file (I can push these changes too).
IMO it would be nice to clarify how to convert checkpoints to HF, as it's the most popular platform for sharing checkpoints with the community.
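A minimal sketch of the kind of override described above, assuming the converter loads its model config into an OmegaConf object from `hparams.yaml`; the config construction here is a hand-built stand-in for that:

```python
from omegaconf import OmegaConf

# Stand-in for the config the converter would load from hparams.yaml / the .nemo file.
model_config = OmegaConf.create({
    'sequence_parallel': True,
    'tensor_model_parallel_size': 4,
})

# Overrides described above: disable sequence parallelism and force TP = 1 so the
# checkpoint can be loaded for single-device conversion.
model_config.sequence_parallel = 0
model_config.tensor_model_parallel_size = 1

print(OmegaConf.to_yaml(model_config))
```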
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines contain specific people who can review PRs to various areas.
Additional Information
cc @MaximumEntropy, @ericharper, @ekmb, @yzhang123, @VahidooX, @vladgets, @okuchaiev