Add streaming-llm using llama2 on CPU #168
Workflow: llm_example_tests.yml (trigger: on: pull_request)
Job | Duration
---|---
llm-cpp-build / check-linux-amx-artifact | 2s
llm-cpp-build / check-linux-avx512-artifact | 2s
llm-cpp-build / check-linux-avxvnni-artifact | 3s
llm-cpp-build / check-windows-avx-artifact | 4s
llm-cpp-build / check-windows-avx2-artifact | 2s
llm-cpp-build / check-windows-avx2-vnni-artifact | 4s
Matrix: llm-example-test
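For orientation, below is a minimal sketch of what llm_example_tests.yml might look like, reconstructed only from the trigger, job names, and matrix entry shown above. The reusable-workflow filename, runner labels, and the test script path are assumptions, not the repository's actual configuration.

```yaml
# Hypothetical sketch of llm_example_tests.yml, inferred from the run summary.
# Filenames, runner labels, and step contents are assumptions.
name: LLM Example Tests

on: pull_request

jobs:
  llm-cpp-build:
    # Calls a reusable build workflow; its check-<os>-<isa>-artifact jobs
    # show up above as "llm-cpp-build / check-...-artifact".
    uses: ./.github/workflows/llm-binary-build.yml  # assumed filename

  llm-example-test:
    needs: llm-cpp-build
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.9"]
        instruction: ["AVX512"]  # the failing run above is (3.9, AVX512)
    runs-on: [self-hosted, "${{ matrix.instruction }}"]  # assumed labels
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Run LLM example tests (including the new streaming-llm example)
        run: bash run-example-tests.sh  # assumed test entry point
```

Under this sketch, the (3.9, AVX512) matrix entry would be scheduled onto the self-hosted AVX512 runner named in the error annotations below, which is why losing that runner fails the job.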
Annotations
2 errors
llm-example-test (3.9, AVX512)
The runner has received a shutdown signal. This can happen when the runner service is stopped, or a manually started runner is canceled.
llm-example-test (3.9, AVX512)
The self-hosted runner: github-runner-avx512-696cb8df78-6rt2f lost communication with the server. Verify the machine is running and has a healthy network connection. Anything in your workflow that terminates the runner process, starves it for CPU/Memory, or blocks its network access can cause this error.
Artifacts
Produced during runtime
Name | Size | Status
---|---|---
linux-amx | 4.89 MB | Expired
linux-avx | 1.75 MB | Expired
linux-avx2 | 1.7 MB | Expired
linux-avx512 | 3.27 MB | Expired
linux-avxvnni | 7.27 MB | Expired
windows-avx | 1.7 MB | Expired
windows-avx2 | 2.57 MB | Expired
windows-avx2-vnni | 3.52 MB | Expired