Describe the bug
When using Megatron-Core v0.9.0 with CUDA Graphs enabled, NaN gradients are encountered during the backward computation. This issue does not occur when CUDA Graphs are disabled.
To Reproduce
To reproduce this issue, follow these steps:
1. Ensure that Megatron-Core v0.9.0 is installed and set up correctly in your environment.
2. Configure the `TransformerConfig` by setting `enable_cuda_graph` to `True` (see the sketch after this list).
3. Train a model or run a training script that involves backward computation.
4. Observe the gradients during training; NaN values appear in the backward pass.
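A minimal configuration sketch, assuming Megatron-Core v0.9.0 where `TransformerConfig` exposes an `enable_cuda_graph` field; the layer, hidden-size, and head values below are placeholders rather than the values from the failing run:

```python
from megatron.core.transformer.transformer_config import TransformerConfig

# Placeholder model sizes; only enable_cuda_graph is relevant to the bug.
config = TransformerConfig(
    num_layers=2,
    hidden_size=1024,
    num_attention_heads=16,
    enable_cuda_graph=True,  # with this left at False, no NaN gradients are observed
)
```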
Expected behavior
The expected behavior is for the model to train normally without encountering NaN gradients, even with CUDA Graphs enabled. The use of CUDA Graphs should not affect the correctness of the gradient computations.
Stack trace/logs
[rank5]: Traceback (most recent call last):
[rank5]: File "/workspace/Megatron-LM/pretrain_gpt.py", line 265, in <module>
[rank5]: pretrain(
[rank5]: File "/workspace/Megatron-LM/megatron/training/training.py", line 360, in pretrain
[rank5]: iteration, num_floating_point_operations_so_far = train(
[rank5]: File "/workspace/Megatron-LM/megatron/training/training.py", line 1262, in train
[rank5]: train_step(forward_step_func,
[rank5]: File "/workspace/Megatron-LM/megatron/training/training.py", line 730, in train_step
[rank5]: losses_reduced = forward_backward_func(
[rank5]: File "/workspace/Megatron-LM/megatron/core/pipeline_parallel/schedules.py", line 492, in forward_backward_no_pipelining
[rank5]: config.finalize_model_grads_func(
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/finalize_model_grads.py", line 112, in finalize_model_grads
[rank5]: model_chunk.finish_grad_sync()
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/distributed_data_parallel.py", line 422, in finish_grad_sync
[rank5]: bucket_group.finish_grad_sync()
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/param_and_grad_buffer.py", line 302, in finish_grad_sync
[rank5]: self.start_grad_sync()
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/param_and_grad_buffer.py", line 244, in start_grad_sync
[rank5]: self.check_for_nan_in_grad()
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/param_and_grad_buffer.py", line 148, in check_for_nan_in_grad
[rank5]: assert not norm_is_nan, (
[rank5]: AssertionError: Rank 5: found NaN in local grad norm in backward pass before data-parallel communication collective. Device: 5, node: infra-train-3-ali-0
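For context, the assertion fires when the local gradient norm is NaN before the data-parallel communication collective. A generic PyTorch-style sketch of that kind of check (an illustration only, not Megatron-Core's actual implementation):

```python
import torch

def check_for_nan_in_grad(params):
    # Compute the local gradient norm across all parameter gradients and
    # assert it is not NaN before any data-parallel reduction runs.
    grads = [p.grad for p in params if p.grad is not None]
    norm = torch.norm(torch.stack([torch.norm(g) for g in grads]))
    assert not torch.isnan(norm), (
        "found NaN in local grad norm in backward pass "
        "before data-parallel communication collective"
    )
```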
Environment (please complete the following information)