Add llama.cpp backend #43

Triggered via pull request on July 24, 2024, 10:44
Status: Cancelled
Total duration: 5m 30s

Jobs
run_cli_cuda_pytorch_tests (5m 19s)

Annotations (2 errors)

run_cli_cuda_pytorch_tests: Canceling since a higher priority waiting request for 'CLI CUDA vLLM Single-GPU Tests-231' exists
run_cli_cuda_pytorch_tests: The operation was canceled.
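
The first error is GitHub Actions' standard message when a newer run enters the same concurrency group and cancel-in-progress is enabled: the older run is stopped so only the latest request for that group keeps running. The group name reported above ('CLI CUDA vLLM Single-GPU Tests-231') suggests a group built from the workflow name plus the pull request number, but the actual workflow file is not shown here; the snippet below is only a minimal sketch of a concurrency block that would produce this kind of cancellation, with the group expression being an assumption.

    concurrency:
      # Hypothetical group: workflow name plus PR number (or ref on non-PR events),
      # matching the 'CLI CUDA vLLM Single-GPU Tests-231' pattern seen in the error.
      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
      # Cancel an in-progress run when a newer one for the same group is queued.
      cancel-in-progress: true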