
Add streaming-llm using llama2 on CPU #160
Triggered via pull request: October 26, 2023 02:33
Status: Cancelled
Total duration: 18m 47s
Artifacts: 8

llm_example_tests.yml

on: pull_request
llm-cpp-build / check-linux-amx-artifact: 2s
llm-cpp-build / check-linux-avx512-artifact: 3s
llm-cpp-build / check-linux-avxvnni-artifact: 5s
llm-cpp-build / check-windows-avx-artifact: 2s
llm-cpp-build / check-windows-avx2-artifact: 9s
llm-cpp-build / check-windows-avx2-vnni-artifact: 3s
llm-cpp-build / linux-build-amx: 2m 39s
llm-cpp-build / linux-build-avx512: 1m 29s
llm-cpp-build / linux-build-avxvnni: 1m 6s
llm-cpp-build / windows-build-avx: 1m 6s
llm-cpp-build / windows-build-avx2: 1m 6s
llm-cpp-build / windows-build-avx2-vnni: 1m 51s
Matrix: llm-example-test
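
The job label "llm-example-test (3.9, AVX512)" in the annotations suggests the matrix spans a Python version axis and an instruction-set axis matching the build variants above. A hypothetical reconstruction of that matrix in llm_example_tests.yml (the axis names, values, and runner are assumptions, not taken from the actual workflow):

```yaml
# Hypothetical sketch of the llm-example-test matrix; axis names and
# values are inferred from job labels, not from the real workflow file.
llm-example-test:
  needs: llm-cpp-build
  strategy:
    matrix:
      python-version: ["3.9"]
      instruction: [AMX, AVX512, AVXVNNI]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}
```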

Annotations

1 error and 1 warning

Error: llm-example-test (3.9, AVX512)
The operation was canceled.

Warning: llm-example-test (3.9, AVX512)
The following actions use node12, which is deprecated, and will be forced to run on node16: actions/checkout@v2, actions/setup-python@v2. For more info: https://github.blog/changelog/2023-06-13-github-actions-all-actions-will-run-on-node16-instead-of-node12-by-default/
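
The deprecation warning is typically resolved by bumping the flagged actions to current major releases. A minimal sketch of the affected steps in llm_example_tests.yml, assuming the job just checks out the repo and sets up Python 3.9 (the surrounding job structure is an assumption):

```yaml
# Sketch: replace the node12-based v2 actions flagged in the warning.
steps:
  - uses: actions/checkout@v4       # was actions/checkout@v2
  - uses: actions/setup-python@v4   # was actions/setup-python@v2
    with:
      python-version: "3.9"
```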

Artifacts

Produced during runtime
Name: Size (Status)
linux-amx: 4.89 MB (Expired)
linux-avx: 1.75 MB (Expired)
linux-avx2: 1.7 MB (Expired)
linux-avx512: 3.27 MB (Expired)
linux-avxvnni: 7.27 MB (Expired)
windows-avx: 1.7 MB (Expired)
windows-avx2: 2.57 MB (Expired)
windows-avx2-vnni: 3.52 MB (Expired)