Add llama.cpp backend #43

test_cli_cuda_vllm_single_gpu.yaml
on: pull_request

run_cli_cuda_pytorch_tests (5m 19s)

Annotations
2 errors

run_cli_cuda_pytorch_tests
Canceling since a higher priority waiting request for 'CLI CUDA vLLM Single-GPU Tests-231' exists
run_cli_cuda_pytorch_tests
The operation was canceled.
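
Both annotations point to a concurrency cancellation rather than a genuine test failure: GitHub Actions prints "Canceling since a higher priority waiting request for '<group>' exists" when a newer run enters the same concurrency group with cancel-in-progress enabled (typically a fresh push to the same pull request), and every in-flight step of the preempted run then ends with "The operation was canceled." Below is a minimal sketch of the kind of concurrency block that produces this behavior; the group expression, runner labels, and test command are illustrative assumptions, not taken from the actual test_cli_cuda_vllm_single_gpu.yaml.

# Hypothetical workflow excerpt; the real file may differ.
name: CLI CUDA vLLM Single-GPU Tests

on:
  pull_request:

# Runs in the same group preempt each other. Keying the group by workflow
# name plus PR number would yield a group name like
# 'CLI CUDA vLLM Single-GPU Tests-231', matching the cancellation message.
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  run_cli_cuda_pytorch_tests:
    runs-on: [self-hosted, single-gpu]
    steps:
      - uses: actions/checkout@v4
      - name: Run CLI CUDA tests
        # Assumed test invocation for illustration only.
        run: pytest -x -k "cli and cuda"

Under such a configuration, pushing a new commit to the PR while a run is still in progress is enough to reproduce both errors shown above.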