RuntimeError: CUDA out of memory. #12

Open
OpenSource-fan opened this issue Apr 4, 2023 · 3 comments

@OpenSource-fan

I am fine-tuning llama-7b on 8 V100 32GB GPUs, but it runs out of CUDA memory.

RuntimeError: CUDA out of memory. Tried to allocate 688.00 MiB (GPU 6; 31.75 GiB total capacity; 29.87 GiB already allocated; 41.94 MiB free; 30.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[2023-04-04 17:43:59,766] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1840
[2023-04-04 17:44:02,370] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1841
[2023-04-04 17:44:05,213] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1842
[2023-04-04 17:44:07,817] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1843
[2023-04-04 17:44:10,420] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1844
[2023-04-04 17:44:13,023] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1845
[2023-04-04 17:44:15,585] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1846
[2023-04-04 17:44:15,586] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1847
[2023-04-04 17:44:18,429] [ERROR] [launch.py:324:sigkill_handler] ['/home/aiscuser/.conda/envs/llamax/bin/python3.1', '-u', 'train.py', '--local_rank=7', '--model_name_or_path', './llama-7b-hf', '--data_path', '../data/alpaca_data.json', '--output_dir', 'output/', '--num_train_epochs', '3', '--per_device_train_batch_size', '64', '--per_device_eval_batch_size', '1', '--gradient_accumulation_steps', '1', '--evaluation_strategy', 'no', '--save_strategy', 'steps', '--save_steps', '100', '--save_total_limit', '2', '--learning_rate', '2e-5', '--warmup_steps', '2', '--logging_steps', '2', '--lr_scheduler_type', 'cosine', '--report_to', 'tensorboard', '--gradient_checkpointing', 'True', '--deepspeed', 'configs/deepspeed_config.json', '--fp16', 'True'] exits with return code = 1
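For reference, the allocator option that the error message itself suggests can be set like this. This is only a minimal sketch: the 128 MiB value is an assumption, and it only mitigates fragmentation; it will not help if the batch genuinely does not fit in memory.

    import os

    # Must take effect before the first CUDA allocation, e.g. at the very top
    # of train.py or exported in the shell that launches deepspeed.
    # 128 MiB is an assumed starting point, not a tested recommendation.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"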

watch -n 1 nvidia-smi


    Tue Apr  4 17:48:15 2023
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 510.85.02    Driver Version: 510.85.02    CUDA Version: 11.6     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  Tesla V100-SXM2...  On   | 00000001:00:00.0 Off |                    0 |
    | N/A   42C    P0    67W / 300W |   4491MiB / 32768MiB |     32%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla V100-SXM2...  On   | 00000002:00:00.0 Off |                    0 |
    | N/A   44C    P0    57W / 300W |   2482MiB / 32768MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   2  Tesla V100-SXM2...  On   | 00000003:00:00.0 Off |                    0 |
    | N/A   40C    P0    54W / 300W |   2502MiB / 32768MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   3  Tesla V100-SXM2...  On   | 00000004:00:00.0 Off |                    0 |
    | N/A   41C    P0    56W / 300W |   2482MiB / 32768MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   4  Tesla V100-SXM2...  On   | 00000005:00:00.0 Off |                    0 |
    | N/A   39C    P0    54W / 300W |   2522MiB / 32768MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   5  Tesla V100-SXM2...  On   | 00000006:00:00.0 Off |                    0 |
    | N/A   43C    P0    59W / 300W |   2522MiB / 32768MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   6  Tesla V100-SXM2...  On   | 00000007:00:00.0 Off |                    0 |
    | N/A   40C    P0    55W / 300W |   2502MiB / 32768MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   7  Tesla V100-SXM2...  On   | 00000008:00:00.0 Off |                    0 |
    | N/A   43C    P0    53W / 300W |   2522MiB / 32768MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
@OpenSource-fan
Author

OpenSource-fan commented Apr 4, 2023

Setting --per_device_train_batch_size 4 worked. However, memory usage varies a lot across GPUs, especially on GPU 6:

Every 1.0s: nvidia-smi                                                                     node-0: Tue Apr  4 18:19:26 2023

Tue Apr  4 18:19:27 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.85.02    Driver Version: 510.85.02    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000001:00:00.0 Off |                    0 |
| N/A   44C    P0    76W / 300W |  17145MiB / 32768MiB |     36%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00000002:00:00.0 Off |                    0 |
| N/A   47C    P0    77W / 300W |  17566MiB / 32768MiB |     57%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  On   | 00000003:00:00.0 Off |                    0 |
| N/A   43C    P0    71W / 300W |  12458MiB / 32768MiB |     83%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2...  On   | 00000004:00:00.0 Off |                    0 |
| N/A   44C    P0    72W / 300W |  11226MiB / 32768MiB |     33%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   4  Tesla V100-SXM2...  On   | 00000005:00:00.0 Off |                    0 |
| N/A   42C    P0    76W / 300W |  13112MiB / 32768MiB |     32%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   5  Tesla V100-SXM2...  On   | 00000006:00:00.0 Off |                    0 |
| N/A   46C    P0    68W / 300W |  10724MiB / 32768MiB |     62%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   6  Tesla V100-SXM2...  On   | 00000007:00:00.0 Off |                    0 |
| N/A   44C    P0    75W / 300W |  26108MiB / 32768MiB |     30%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   7  Tesla V100-SXM2...  On   | 00000008:00:00.0 Off |                    0 |
| N/A   47C    P0    69W / 300W |  20388MiB / 32768MiB |     57%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

The log suggests fine-tuning will take considerably more than one hour:

{'loss': 1.3567, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 1.6182, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 1.5182, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 1.6655, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 1.5583, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.4035, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.5064, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.5115, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.6298, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 1.539, 'learning_rate': 1e-05, 'epoch': 0.01}
{'loss': 1.4479, 'learning_rate': 1.9999997924406317e-05, 'epoch': 0.01}
{'loss': 1.2644, 'learning_rate': 1.9999981319662e-05, 'epoch': 0.01}
{'loss': 1.2153, 'learning_rate': 1.9999948110200944e-05, 'epoch': 0.02}
{'loss': 1.1531, 'learning_rate': 1.9999898296078282e-05, 'epoch': 0.02}
{'loss': 1.2623, 'learning_rate': 1.999983187737674e-05, 'epoch': 0.02}
{'loss': 1.1446, 'learning_rate': 1.99997488542066e-05, 'epoch': 0.02}
{'loss': 1.1687, 'learning_rate': 1.999964922670572e-05, 'epoch': 0.02}
{'loss': 1.1339, 'learning_rate': 1.9999532995039525e-05, 'epoch': 0.02}
{'loss': 1.2593, 'learning_rate': 1.999940015940102e-05, 'epoch': 0.02}
{'loss': 1.0747, 'learning_rate': 1.9999250720010775e-05, 'epoch': 0.02}
{'loss': 1.184, 'learning_rate': 1.9999084677116928e-05, 'epoch': 0.03}
  1%|          | 42/4878 [22:52<44:47:46, 33.35s/it]

Is this correct?
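A quick sanity check of that step count, assuming roughly 52,000 Alpaca samples and the per-device batch size 4, 8 GPUs, and 3 epochs from the command line above:

    import math

    # Assumptions taken from the launch command and the Alpaca dataset size.
    samples = 52_000
    per_device_batch = 4
    gpus = 8
    epochs = 3

    steps_per_epoch = math.ceil(samples / (per_device_batch * gpus))
    total_steps = steps_per_epoch * epochs
    print(steps_per_epoch, total_steps)  # 1625 per epoch, 4875 in total
    # Close to the 4878 steps reported by the progress bar above.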

@AetherCortex
Owner

One epoch takes about one hour, and the total time depends on the number of epochs. The Alpaca dataset comprises roughly 52,000 training samples, so with a per-device batch size of 64 on 8 GPUs, one epoch is approximately 100 batches (52,000 / (64 × 8)). Based on the nvidia-smi status you provided, the time taken to run 42 batches is consistent with our expectations.
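Spelling the same arithmetic out (the per-step time at batch size 64 is only an assumption here, since that configuration went OOM above):

    # Owner's estimate with the original settings: per-device batch 64 on 8 GPUs.
    samples = 52_000
    per_device_batch = 64
    gpus = 8

    steps_per_epoch = samples / (per_device_batch * gpus)
    print(round(steps_per_epoch))  # ~102 optimizer steps per epoch
    # At an assumed per-step time in the tens of seconds, one epoch is on the
    # order of an hour, which is where the "one hour per epoch" figure comes from.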

@OpenSource-fan
Author

OpenSource-fan commented Apr 5, 2023

In fact, it goes OOM when I set batch size = 64. 😭😭😭
It works when I set batch size = 4, and it will take about 45 h to train 3 epochs.
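For what it's worth, the ~45 h figure is consistent with the progress bar shown earlier (4878 steps at ~33.35 s/it):

    # Cross-check of the wall-clock estimate against the reported progress bar.
    total_steps = 4878
    seconds_per_step = 33.35

    total_hours = total_steps * seconds_per_step / 3600
    print(round(total_hours, 1))  # ~45.2 hours for 3 epochs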
