Add llama.cpp backend #365
Triggered via pull request on July 25, 2024, 05:32
Status: Cancelled
Total duration: 1h 2m 36s
Artifacts: –
Workflow: test_cli_rocm_pytorch_single_gpu.yaml (on: pull_request)
Job: run_cli_rocm_pytorch_single_gpu_tests (0s)
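For context, a minimal sketch of what a workflow file like test_cli_rocm_pytorch_single_gpu.yaml might contain. Only the trigger (`pull_request`) and the job name (`run_cli_rocm_pytorch_single_gpu_tests`) come from the run summary above; the workflow name, runner labels, and steps are assumptions, not the repository's actual configuration.

```yaml
# Hypothetical sketch of test_cli_rocm_pytorch_single_gpu.yaml.
# Trigger and job name are taken from the run summary; everything
# else (runner labels, checkout/test steps) is assumed.
name: CLI ROCm PyTorch Single-GPU Tests

on: pull_request

jobs:
  run_cli_rocm_pytorch_single_gpu_tests:
    # Assumed: a self-hosted runner with a single AMD GPU
    runs-on: [self-hosted, single-gpu, amd-gpu]
    steps:
      - uses: actions/checkout@v4
      - name: Run single-GPU ROCm PyTorch CLI tests
        # Assumed test selector; the real invocation may differ
        run: pytest -x -k "cli and rocm and pytorch and single_gpu"
```

A job duration of 0s with a cancelled status typically means the run was stopped before the job's steps began executing, which is consistent with the manual cancellation recorded below.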
Annotations: 1 error
run_cli_rocm_pytorch_single_gpu_tests: The run was canceled by @baptistecolle.