
Add llama.cpp backend #34

Triggered via pull request July 24, 2024 10:44
Status: Cancelled
Total duration: 5m 29s
Artifacts
cli_cuda_tensorrt_llm_single_gpu_tests (2m 57s)

Annotations

2 errors
cli_cuda_tensorrt_llm_single_gpu_tests
Canceling since a higher priority waiting request for 'CLI CUDA TensorRT-LLM Single-GPU Tests-231' exists
cli_cuda_tensorrt_llm_single_gpu_tests
The operation was canceled.
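
Both annotations describe the same event: GitHub Actions cancelled this run because a newer run was queued in the same concurrency group ('CLI CUDA TensorRT-LLM Single-GPU Tests-231'), so the job was stopped after 2m 57s rather than failing on its own. This behavior comes from a workflow-level concurrency block with cancel-in-progress enabled. A minimal sketch of such a block follows; the group expression is an illustrative assumption, not copied from the repository's actual workflow file:

concurrency:
  # Assumed group key: at most one active run per workflow and PR/ref.
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  # A newer run queued for the same group cancels this one, producing the
  # "Canceling since a higher priority waiting request ... exists" annotation above.
  cancel-in-progress: true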