replace bigdl-llm with ipex-llm
liu-shaojun committed Mar 26, 2024 · 1 parent c563b41 · commit d91e489
Showing 11 changed files with 101 additions and 2,571 deletions.
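
The hunks below are a mechanical rename of the package and its helper scripts. For illustration, a bulk edit along the following lines would reproduce the substitutions shown here — a sketch assuming GNU sed in a git checkout, not the commands actually used; the 2,571 deleted lines come from removed files, not from this rename:

    # Hypothetical reconstruction of the rename (not taken from the commit):
    # rewrite the package name, the display name, and the old repo path.
    git grep -lE 'BigDL|bigdl-llm' -- .github \
      | xargs sed -i \
          -e 's/BigDL-LLM/IPEX-LLM/g' \
          -e 's/bigdl-llm/ipex-llm/g' \
          -e 's#intel-analytics/BigDL#intel-analytics/ipex-llm#g'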
4 changes: 2 additions & 2 deletions .github/actions/llm/convert-test/action.yml
@@ -1,5 +1,5 @@
-name: "BigDL-LLM convert tests"
-description: "BigDL-LLM convert test, including downloading original models"
+name: "IPEX-LLM convert tests"
+description: "IPEX-LLM convert test, including downloading original models"

 runs:
   using: "composite"
4 changes: 2 additions & 2 deletions .github/actions/llm/example-test/action.yml
@@ -1,5 +1,5 @@
-name: 'BigDL-LLM example tests'
-description: 'BigDL-LLM example tests'
+name: 'IPEX-LLM example tests'
+description: 'IPEX-LLM example tests'

 runs:
   using: "composite"
4 changes: 2 additions & 2 deletions .github/actions/llm/setup-llm-env/action.yml
@@ -1,5 +1,5 @@
-name: "Setup BigDL-LLM Env"
-description: "BigDL-LLM installation"
+name: "Setup IPEX-LLM Env"
+description: "IPEX-LLM installation"
 inputs:
   extra-dependency:
     description: "Name of extra dependencies filled in brackets"
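
For context, workflows later in this diff consume the renamed composite action like this (excerpted from the llm_unit_tests hunk below; the extra-dependency input supplies the bracketed pip extra named in the description above):

      - name: Install IPEX-LLM for xpu
        uses: ./.github/actions/llm/setup-llm-env
        with:
          extra-dependency: "xpu_${{ matrix.pytorch-version }}"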
2 changes: 1 addition & 1 deletion .github/workflows/llm-harness-evaluation.yml
@@ -276,7 +276,7 @@ jobs:
       - name: Download FP16 results
         shell: bash
         run: |
-          wget https://raw.githubusercontent.com/intel-analytics/BigDL/main/python/llm/test/benchmark/harness/fp16.csv -O $ACC_FOLDER/../fp16.csv
+          wget https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/test/benchmark/harness/fp16.csv -O $ACC_FOLDER/../fp16.csv
           ls $ACC_FOLDER/..

       - name: Write to CSV
2 changes: 1 addition & 1 deletion .github/workflows/llm-nightly-test.yml
@@ -83,7 +83,7 @@ jobs:
       - name: Download llm binary
         uses: ./.github/actions/llm/download-llm-binary

-      - name: Install BigDL-LLM
+      - name: Install IPEX-LLM
         uses: ./.github/actions/llm/setup-llm-env

       - name: Download original models & convert
2 changes: 1 addition & 1 deletion .github/workflows/llm-ppl-evaluation.yml
@@ -259,7 +259,7 @@ jobs:
       - name: Download fp16.results
         shell: bash
         run: |
-          wget https://raw.githubusercontent.com/intel-analytics/BigDL/main/python/llm/test/benchmark/perplexity/fp16.csv -O $ACC_FOLDER/../fp16.csv
+          wget https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/test/benchmark/perplexity/fp16.csv -O $ACC_FOLDER/../fp16.csv
           ls $ACC_FOLDER/..

       - name: Write to CSV
6 changes: 3 additions & 3 deletions .github/workflows/llm-whisper-evaluation.yml
@@ -111,7 +111,7 @@ jobs:
           python -m pip install --upgrade librosa
           python -m pip install --upgrade jiwer

-      # please uncomment it and comment the "Install BigDL-LLM from Pypi" part for PR tests
+      # please uncomment it and comment the "Install IPEX-LLM from Pypi" part for PR tests
       - name: Download llm binary
         uses: ./.github/actions/llm/download-llm-binary

@@ -120,10 +120,10 @@
         with:
           extra-dependency: "xpu_2.1"

-      # - name: Install BigDL-LLM from Pypi
+      # - name: Install IPEX-LLM from Pypi
       #   shell: bash
       #   run: |
-      #     pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
+      #     pip install --pre --upgrade ipex-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu

       # - name: Test installed xpu version
       #   shell: bash
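
The commented-out "Test installed xpu version" step is collapsed in this diff; a minimal sketch of what such a check could look like, assuming the PyPI xpu build ships intel-extension-for-pytorch (the real step body is not shown in this changeset):

      # Hypothetical sketch — not the actual step body from the workflow.
      # - name: Test installed xpu version
      #   shell: bash
      #   run: |
      #     source /opt/intel/oneapi/setvars.sh
      #     python -c "import torch, intel_extension_for_pytorch; print(torch.xpu.is_available())"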
4 changes: 2 additions & 2 deletions .github/workflows/llm_tests_for_stable_version_on_spr.yml
@@ -67,7 +67,7 @@ jobs:
           cd python/llm/dev/benchmark/all-in-one
           export http_proxy=${HTTP_PROXY}
           export https_proxy=${HTTPS_PROXY}
-          source bigdl-llm-init -t
+          source ipex-llm-init -t
           export OMP_NUM_THREADS=48
           # hide time info
           sed -i 's/str(end - st)/"xxxxxx"/g' run.py
@@ -125,7 +125,7 @@ jobs:
           cd python/llm/dev/benchmark/all-in-one
           export http_proxy=${HTTP_PROXY}
           export https_proxy=${HTTPS_PROXY}
-          source bigdl-llm-init -t
+          source ipex-llm-init -t
           export OMP_NUM_THREADS=48
           # hide time info
           sed -i 's/str(end - st)/"xxxxxx"/g' run-stress-test.py
6 changes: 3 additions & 3 deletions .github/workflows/llm_unit_tests.yml
@@ -273,7 +273,7 @@ jobs:
       - name: Download llm binary
         uses: ./.github/actions/llm/download-llm-binary

-      - name: Install BigDL-LLM for xpu
+      - name: Install IPEX-LLM for xpu
         uses: ./.github/actions/llm/setup-llm-env
         with:
           extra-dependency: "xpu_${{ matrix.pytorch-version }}"
@@ -392,10 +392,10 @@ jobs:
           pip install llama-index-readers-file llama-index-vector-stores-postgres llama-index-embeddings-huggingface
           # Specific oneapi position on arc ut test machines
           if [[ '${{ matrix.pytorch-version }}' == '2.1' ]]; then
-            pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
+            pip install --pre --upgrade ipex-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
             source /opt/intel/oneapi/setvars.sh
           elif [[ '${{ matrix.pytorch-version }}' == '2.0' ]]; then
-            pip install --pre --upgrade bigdl-llm[xpu_2.0] -f https://developer.intel.com/ipex-whl-stable-xpu
+            pip install --pre --upgrade ipex-llm[xpu_2.0] -f https://developer.intel.com/ipex-whl-stable-xpu
             source /home/arda/intel/oneapi/setvars.sh
           fi
           bash python/llm/test/run-llm-llamaindex-tests-gpu.sh