Add llama.cpp backend #363
Workflow: test_cli_rocm_pytorch_single_gpu.yaml (triggered on: pull_request)
Job: run_cli_rocm_pytorch_single_gpu_tests — duration: 0s
Annotations: 1 error
run_cli_rocm_pytorch_single_gpu_tests: Canceling since a higher priority waiting request for 'CLI ROCm Pytorch Single-GPU Tests-231' exists
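This cancellation message is what GitHub Actions emits when a `concurrency` group with `cancel-in-progress: true` supersedes an in-flight run (here, a newer push to PR #363 preempted run 231). A minimal sketch of a workflow config that produces this behavior — the group expression, runner labels, and test step are assumptions, not the repository's actual file:

```yaml
# Hypothetical excerpt of test_cli_rocm_pytorch_single_gpu.yaml.
name: CLI ROCm Pytorch Single-GPU Tests

on: pull_request

# Newer runs for the same PR cancel older queued/in-progress ones,
# yielding "Canceling since a higher priority waiting request ... exists".
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  run_cli_rocm_pytorch_single_gpu_tests:
    runs-on: [self-hosted, amd-gpu, single-gpu]  # runner labels are assumptions
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/  # illustrative test command
```

With this config, a canceled run is expected behavior rather than a test failure: only the most recent commit on the pull request is tested.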