Add llama.cpp backend #531
Workflow: test_cli_cpu_onnxruntime.yaml
Trigger: on: pull_request
Job: run_cli_cpu_onnxruntime_tests
Duration: 5m 25s
Annotations
2 errors
run_cli_cpu_onnxruntime_tests
Canceling since a higher priority waiting request for 'CLI CPU OnnxRuntime Tests-231' exists

run_cli_cpu_onnxruntime_tests
The operation was canceled.
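Both annotations come from GitHub Actions concurrency control rather than from the test suite itself: when a newer run is queued for the same concurrency group, the in-progress run is cancelled. Below is a minimal sketch of the kind of concurrency block that produces this behavior, assuming the group is keyed on the workflow name and the pull request number; the actual workflow file in the repository may use a different group expression and test command.

```yaml
name: CLI CPU OnnxRuntime Tests

on:
  pull_request:

# Assumed concurrency settings: a new push to the same PR queues a
# higher-priority run and cancels the one already in progress, which
# produces the "Canceling since a higher priority waiting request ... exists"
# and "The operation was canceled." annotations above.
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  run_cli_cpu_onnxruntime_tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Run CLI CPU ONNX Runtime tests
        # Hypothetical test selector; the real invocation may differ.
        run: pytest -x -k "cli and cpu and onnxruntime"
```

With this configuration, cancelled runs like the one above are expected whenever the PR is updated while tests are still running, and are not a test failure in themselves.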