[Tutorial][PTD] Deprecate Training Transformer models using Distributed Data Parallel and Pipeline Parallelism and redirect the page to parallelism APIs #3379

Annotations

1 warning

pytorch_tutorial_build_worker (5, 15, linux.g5.4xlarge.nvidia.gpu)

Succeeded Nov 5, 2024 in 19m 19s