Add llama.cpp backend #363

Triggered via pull request on July 24, 2024 10:44
Status: Cancelled
Total duration: 5m 16s
Artifacts
run_cli_rocm_pytorch_single_gpu_tests (0s)

Annotations

1 error
run_cli_rocm_pytorch_single_gpu_tests
Canceling since a higher priority waiting request for 'CLI ROCm Pytorch Single-GPU Tests-231' exists
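This annotation is the message GitHub Actions emits when a run is cancelled by a newer run in the same concurrency group, typically via a `concurrency` block with `cancel-in-progress` enabled. A minimal sketch of the kind of configuration that produces it follows; the group key and trigger below are assumptions for illustration, not taken from this repository's actual workflow file.

```yaml
# Hypothetical sketch: a workflow-level concurrency block.
# When a new run starts in the same group (here keyed on the workflow
# name plus the branch/PR ref), the older in-flight run is cancelled,
# producing "Canceling since a higher priority waiting request ... exists".
name: CLI ROCm Pytorch Single-GPU Tests

on:
  pull_request:  # assumed trigger, matching "Triggered via pull request"

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  run_cli_rocm_pytorch_single_gpu_tests:
    runs-on: self-hosted  # assumption; ROCm runners are typically self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: echo "run tests here"
```

With this setup, pushing a new commit to the same PR cancels the previous run, which is why the job above shows a 0s duration and a Cancelled status rather than a test failure.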