Add llama.cpp backend #34
Workflow: test_cli_cuda_tensorrt_llm_single_gpu.yaml
Trigger: on: pull_request
Job: cli_cuda_tensorrt_llm_single_gpu_tests (2m 57s)
Annotations (2 errors):
- cli_cuda_tensorrt_llm_single_gpu_tests: Canceling since a higher priority waiting request for 'CLI CUDA TensorRT-LLM Single-GPU Tests-231' exists
- cli_cuda_tensorrt_llm_single_gpu_tests: The operation was canceled.
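The "Canceling since a higher priority waiting request ... exists" annotation is the message GitHub Actions emits when a workflow-level `concurrency` group with `cancel-in-progress: true` cancels an older run after a newer commit is pushed to the same pull request. A minimal sketch of a workflow that would produce this behavior is below; the group expression, runner labels, and steps are assumptions for illustration, not the repository's actual configuration:

```yaml
name: CLI CUDA TensorRT-LLM Single-GPU Tests

on: pull_request

# Assumed concurrency setup: one in-flight run per workflow + PR number.
# A new push to the PR creates a "higher priority waiting request" for the
# same group, which cancels the run currently in progress.
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  cli_cuda_tensorrt_llm_single_gpu_tests:
    # Hypothetical runner labels; the real workflow may target different hardware.
    runs-on: [self-hosted, single-gpu]
    steps:
      - uses: actions/checkout@v4
      # ... test steps elided
```

Under this setup the two annotations above are expected, not a test failure: the older run was superseded mid-execution, so its job reports "The operation was canceled."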