[Bug]: deepseek-coder-v2-lite-instruct; Exception in worker VllmWorkerProcess while processing method initialize_cache: [Errno 2] No such file or directory: '/root/.triton/cache/de758c429c9ff1f18930bbd9c3004506/fused_moe_kernel.json.tmp.pid_1528_587007', Traceback (most recent call last): #6276

Closed
fengyang95 opened this issue Jul 10, 2024 · 10 comments
Labels: bug, stale

Comments

@fengyang95

Your current environment

Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.31

Python version: 3.9.2 (default, Feb 28 2021, 17:03:44)  [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA L40
GPU 1: NVIDIA L40
GPU 2: NVIDIA L40
GPU 3: NVIDIA L40

Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   52 bits physical, 57 bits virtual
CPU(s):                          180
On-line CPU(s) list:             0-179
Thread(s) per core:              2
Core(s) per socket:              45
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           143
Model name:                      Intel(R) Xeon(R) Platinum 8457C
Stepping:                        8
CPU MHz:                         2599.520
BogoMIPS:                        5199.04
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       4.2 MiB
L1i cache:                       2.8 MiB
L2 cache:                        180 MiB
L3 cache:                        195 MiB
NUMA node0 CPU(s):               0-89
NUMA node1 CPU(s):               90-179
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Mitigation; TSX disabled
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities

Versions of relevant libraries:
[pip3] byted-torch==2.1.0.post2
[pip3] numpy==1.26.2
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] torchaudio==2.1.0+cu121
[pip3] torchvision==0.18.0
[pip3] transformers==4.42.3
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    NODE    SYS     SYS     1,4-89  0               N/A
GPU1    NODE     X      NODE    SYS     SYS     1,4-89  0               N/A
GPU2    NODE    NODE     X      SYS     SYS     1,4-89  0               N/A
GPU3    SYS     SYS     SYS      X      SYS     91,94-179       1               N/A
NIC0    SYS     SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0

🐛 Describe the bug

When running inference on 4 L40 GPUs, the following error occurs intermittently:

 ERROR 07-10 09:23:56 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method initialize_cache: [Errno 2] No such file or directory: '/root/.triton/cache/de758c429c9ff1f18930bbd9c3004506/fused_moe_kernel.json.tmp.pid_1528_587007', Traceback (most recent call last):
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/worker/worker.py", line 214, in initialize_cache
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     self._warm_up_model()
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/worker/worker.py", line 230, in _warm_up_model
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     self.model_runner.capture_model(self.gpu_cache)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/worker/model_runner.py", line 1109, in capture_model
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     graph_runner.capture(**capture_inputs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/worker/model_runner.py", line 1327, in capture
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     self.model(
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 482, in forward
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     hidden_states = self.model(input_ids, positions, kv_caches,
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 449, in forward
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     hidden_states, residual = layer(positions, hidden_states,
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 407, in forward
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     hidden_states = self.mlp(hidden_states)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 164, in forward
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     final_hidden_states = fused_experts(
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 506, in fused_experts
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     invoke_fused_moe_kernel(intermediate_cache2,
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     fused_moe_kernel[grid](
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/triton/runtime/jit.py", line 167, in <lambda>
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/triton/runtime/jit.py", line 416, in run
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     self.cache[device][key] = compile(
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/triton/compiler/compiler.py", line 202, in compile
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return CompiledKernel(so_path, metadata_group.get(metadata_filename))
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/triton/compiler/compiler.py", line 230, in __init__
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     self.asm = {
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.9/dist-packages/triton/compiler/compiler.py", line 231, in <dictcomp>
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     file.suffix[1:]: file.read_bytes() if file.suffix[1:] == driver.binary_ext else file.read_text()
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/lib/python3.9/pathlib.py", line 1255, in read_text
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     with self.open(mode='r', encoding=encoding, errors=errors) as f:
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/lib/python3.9/pathlib.py", line 1241, in open
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return io.open(self, mode, buffering, encoding, errors, newline,
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]   File "/usr/lib/python3.9/pathlib.py", line 1109, in _opener
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226]     return self._accessor.open(self, flags, mode)
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226] FileNotFoundError: [Errno 2] No such file or directory: '/root/.triton/cache/de758c429c9ff1f18930bbd9c3004506/fused_moe_kernel.json.tmp.pid_1528_587007'
(VllmWorkerProcess pid=1527) ERROR 07-10 09:23:56 multiproc_worker_utils.py:226] 
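
The `.tmp.pid_…` suffix on the missing file points at a race in Triton's on-disk kernel cache: with tensor parallelism, several worker processes compile the same `fused_moe_kernel` concurrently under `/root/.triton/cache`, and one process can rename or remove the temporary file while another still expects it. One possible workaround, a minimal sketch assuming your Triton build honors the `TRITON_CACHE_DIR` environment variable and that placing a `sitecustomize.py` on `PYTHONPATH` is acceptable in your deployment (`TRITON_CACHE_ROOT` below is an illustrative name, not a real Triton setting), is to give every worker process its own cache directory:

```python
# sitecustomize.py -- Python imports this module automatically at interpreter
# startup when it is on PYTHONPATH, so the setting applies to every spawned
# vLLM worker process, not just the parent.
import os

# Illustrative root; any writable per-host directory works.
cache_root = os.environ.get("TRITON_CACHE_ROOT", "/tmp/triton-cache")

# One cache directory per process: concurrent workers no longer compete
# over the same fused_moe_kernel.json.tmp.* files.
os.environ.setdefault("TRITON_CACHE_DIR", f"{cache_root}/pid_{os.getpid()}")
```

The trade-off is that each worker recompiles its own kernels instead of sharing a warm cache, so startup is slower but the rename race disappears.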
@fengyang95 fengyang95 added the bug Something isn't working label Jul 10, 2024
@Grey4sh

Grey4sh commented Jul 10, 2024

I encountered the same problem when running inference with DeepSeek-Coder-V2 on 8 A100s with the latest vLLM Docker image.

@liuyang8643

same problem

@liuyang8643

Solved by reinstalling Triton from source.

@FlyCarrot

Maybe use triton==2.2.0.
For me, it solved this problem.

@Grey4sh

Grey4sh commented Jul 10, 2024

> Maybe use triton==2.2.0. For me, it solved this problem.

It works for me.

@jeejeelee
Collaborator

FYI: #6140

@jdf-prog

Also encountered this problem; triton==2.2.0 solves it. But it seems there is a version mismatch with torch; I don't know if that will cause other problems.
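
For context on the mismatch mentioned above: the environment report lists torch==2.3.0 alongside triton==2.3.0, and the torch 2.3.0 Linux wheels pin that Triton version as a dependency, so downgrading to triton==2.2.0 may make pip report a conflict even if the combination works at runtime. A quick sanity check (nothing here is vLLM-specific):

```python
import torch
import triton

print("torch :", torch.__version__)   # e.g. 2.3.0+cu121, per the report above
print("triton:", triton.__version__)  # 2.2.0 after the suggested downgrade
```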

@jeejeelee
Collaborator

@jdf-prog #6140 has addressed this issue; you can update the vLLM version and try it out.


This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

@github-actions github-actions bot added the stale label Oct 25, 2024

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

@github-actions github-actions bot closed this as not planned Nov 25, 2024