Separate workflows and add quality check workflow #54

Merged · 12 commits · Sep 11, 2023
36 changes: 36 additions & 0 deletions .github/workflows/check_quality.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
name: Quality checks

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  run_cpu_tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8

      - name: Install quality requirements
        run: |
          pip install --upgrade pip
          pip install -e .[quality]

      - name: Check style with black
        run: |
          black --check .

      - name: Check style with ruff
        run: |
          ruff .
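The `concurrency` block relies on GitHub Actions' `||` operator: `github.event.pull_request.number` is empty on plain pushes, so the group name falls back to the ref. A minimal Python sketch of that fallback (function and names are illustrative, not part of the PR):

```python
def concurrency_group(workflow: str, pr_number=None, ref: str = "refs/heads/main") -> str:
    # `pr_number or ref` mirrors the Actions expression
    # ${{ github.event.pull_request.number || github.ref }}: each PR gets its
    # own group, so a new commit cancels only the in-flight run for that PR.
    return f"{workflow}-{pr_number or ref}"

# Two pushes to the same PR share a group (second run cancels the first)...
assert concurrency_group("Quality checks", 54) == "Quality checks-54"
# ...while a push to main groups by ref instead.
assert concurrency_group("Quality checks") == "Quality checks-refs/heads/main"
```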
32 changes: 32 additions & 0 deletions .github/workflows/test_cpu_neural_compressor.yaml
@@ -0,0 +1,32 @@
name: Intel Neural Compressor CPU Tests

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  run_cpu_tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8

      - name: Install Intel Neural Compressor CPU requirements
        run: |
          pip install --upgrade pip
          pip install -e .[test,neural-compressor]

      - name: Run Intel Neural Compressor CPU tests
        run: |
          pytest -k "cpu_neural_compressor"
@@ -1,4 +1,4 @@
-name: CPU Unit Tests
+name: OnnxRuntime CPU Tests

 on:
   push:
@@ -14,19 +14,19 @@ jobs:
   run_cpu_tests:
     runs-on: ubuntu-latest
     steps:
-      - name: Checkout code
+      - name: Checkout
         uses: actions/checkout@v2

       - name: Set up Python 3.8
         uses: actions/setup-python@v2
         with:
           python-version: 3.8

-      - name: Install CPU requirements
+      - name: Install requirements
         run: |
           pip install --upgrade pip
-          pip install -r cpu_requirements.txt
-          pip install -e .[test]
+          pip install -e .[test,onnxruntime,diffusers]

-      - name: Run CPU tests
-        run: pytest -k "cpu"
+      - name: Run tests
+        run: |
+          pytest -k "cpu_onnxruntime"
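The `pip install -e .[test,onnxruntime,diffusers]` lines resolve the bracketed names against the project's extras. The real extras are defined in the repository's setup file and are not shown in this diff; the mapping below is a hypothetical sketch of the shape such a definition takes:

```python
# Hypothetical extras mapping (illustrative only; the actual one lives in
# optimum-benchmark's setup file and may list different requirements).
EXTRAS = {
    "quality": ["black", "ruff"],
    "test": ["pytest"],
    "onnxruntime": ["onnxruntime"],
    "openvino": ["openvino"],
    "diffusers": ["diffusers"],
}

def resolve_extras(spec: str) -> list:
    # "test,onnxruntime,diffusers" -> flat list of requirements
    names = [name.strip() for name in spec.split(",")]
    return [req for name in names for req in EXTRAS[name]]

assert resolve_extras("test,onnxruntime,diffusers") == ["pytest", "onnxruntime", "diffusers"]
```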
32 changes: 32 additions & 0 deletions .github/workflows/test_cpu_openvino.yaml
@@ -0,0 +1,32 @@
name: OpenVINO CPU Tests

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  run_cpu_tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8

      - name: Install requirements
        run: |
          pip install --upgrade pip
          pip install -e .[test,openvino,diffusers]

      - name: Run tests
        run: |
          pytest -k "cpu_openvino"
32 changes: 32 additions & 0 deletions .github/workflows/test_cpu_pytorch.yaml
@@ -0,0 +1,32 @@
name: Pytorch CPU tests

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  run_cpu_tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8

      - name: Install requirements
        run: |
          pip install --upgrade pip
          pip install -e .[test,diffusers]

      - name: Run tests
        run: |
          pytest -k "cpu_pytorch"
@@ -1,4 +1,4 @@
-name: GPU Unit Tests
+name: OnnxRuntime CUDA Inference Tests

 on:
   pull_request:
@@ -12,7 +12,7 @@ jobs:
   build-and-test:
     runs-on: self-hosted
     steps:
-      - name: Restore docker ownership
+      - name: Restore files ownership
         run: docker run
           --rm
           --entrypoint /bin/bash
@@ -23,18 +23,15 @@
           ubuntu
           -c 'chown -R ${HOST_UID}:${HOST_GID} /workspace/optimum-benchmark'

-      - name: Checkout code
+      - name: Checkout
         uses: actions/checkout@v2

-      - name: Build GPU Docker image
-        run: docker build --no-cache --build-arg CACHEBUST=$(date +%s) -f docker/gpu.dockerfile -t optimum-benchmark-gpu .

-      - name: Run GPU tests
+      - name: Run tests
         run: docker run
           --rm
           --entrypoint /bin/bash
           --gpus '"device=0,1"'
           --volume $(pwd):/workspace/optimum-benchmark
           --workdir /workspace/optimum-benchmark
           optimum-benchmark-gpu
-          -c "pip install -e .[test] && pytest -k '(cuda or tensorrt) and not onnxruntime_training' -x"
+          -c "pip install -e .[test,diffusers] && pytest -k '(cuda or tensorrt) and onnxruntime and not training' -x"
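The new `-k` expression `(cuda or tensorrt) and onnxruntime and not training` selects tests by keyword. Roughly speaking (ignoring pytest's case handling and keyword sources beyond the test name), `-k` evaluates the expression with each keyword replaced by a containment check, as in this sketch:

```python
import re

def k_matches(expression: str, test_name: str) -> bool:
    """Rough emulation of pytest's -k selection against a test name."""
    def replace(match):
        word = match.group(0)
        if word in ("and", "or", "not"):
            return word                 # keep boolean operators
        return str(word in test_name)   # keyword -> substring check
    predicate = re.sub(r"[A-Za-z_][A-Za-z0-9_]*", replace, expression)
    return eval(predicate)  # predicate is now pure True/False logic

expr = "(cuda or tensorrt) and onnxruntime and not training"
assert k_matches(expr, "test_cuda_onnxruntime_inference")
assert not k_matches(expr, "test_cuda_onnxruntime_training")
assert not k_matches(expr, "test_cuda_pytorch_inference")
```

This is why the old expression's `not onnxruntime_training` could be simplified to `onnxruntime and not training` once the suites were split per backend.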
@@ -1,4 +1,4 @@
-name: OnnxRuntime Training Unit Tests
+name: OnnxRuntime CUDA Training Tests

 on:
   pull_request:
@@ -12,7 +12,7 @@ jobs:
   build-and-test:
     runs-on: self-hosted
     steps:
-      - name: Restore docker ownership
+      - name: Restore files ownership
         run: docker run
           --rm
           --entrypoint /bin/bash
@@ -23,18 +23,15 @@
           ubuntu
           -c 'chown -R ${HOST_UID}:${HOST_GID} /workspace/optimum-benchmark'

-      - name: Checkout code
+      - name: Checkout
         uses: actions/checkout@v2

-      - name: Build OnnxRuntime Training Docker image
-        run: docker build --no-cache --build-arg CACHEBUST=$(date +%s) -f docker/ort_training.dockerfile -t optimum-benchmark-ort-training .

-      - name: Run OnnxRuntime Training tests
+      - name: Run tests
         run: docker run
           --rm
           --entrypoint /bin/bash
           --gpus '"device=0,1"'
           --volume $(pwd):/workspace/optimum-benchmark
           --workdir /workspace/optimum-benchmark
-          optimum-benchmark-ort-training
-          -c "pip install -e .[test] && pytest -k 'cuda_onnxruntime_training' -x"
+          onnxruntime-training
+          -c "pip install -e .[test,peft] && pytest -k 'onnxruntime_training' -x"
37 changes: 37 additions & 0 deletions .github/workflows/test_cuda_pytorch.yaml
@@ -0,0 +1,37 @@
name: Pytorch CUDA Tests

on:
  pull_request:
    types: [opened, reopened, synchronize]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  build-and-test:
    runs-on: self-hosted
    steps:
      - name: Restore files ownership
        run: docker run
          --rm
          --entrypoint /bin/bash
          --env HOST_UID=`id -u`
          --env HOST_GID=`id -g`
          --volume $(pwd):/workspace/optimum-benchmark
          --workdir /workspace/optimum-benchmark
          ubuntu
          -c 'chown -R ${HOST_UID}:${HOST_GID} /workspace/optimum-benchmark'

      - name: Checkout
        uses: actions/checkout@v2

      - name: Run tests
        run: docker run
          --rm
          --entrypoint /bin/bash
          --gpus '"device=0,1"'
          --volume $(pwd):/workspace/optimum-benchmark
          --workdir /workspace/optimum-benchmark
          optimum-benchmark-gpu
          -c "pip install -e .[test,peft,diffusers] && pytest -k 'cuda_pytorch' -x"
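On a self-hosted runner, files written by a root container stay root-owned on the host, which breaks the next `actions/checkout` cleanup; the "Restore files ownership" step chowns them back to the runner's uid/gid before checkout. A sketch of how that command line is assembled (illustrative and POSIX-only; the helper function is not part of the PR):

```python
import os
import shlex

def restore_ownership_cmd(workdir: str = "/workspace/optimum-benchmark") -> str:
    # `id -u` / `id -g` on the host become HOST_UID / HOST_GID inside the
    # throwaway ubuntu container, which then chowns the mounted workspace.
    uid, gid = os.getuid(), os.getgid()
    return (
        "docker run --rm --entrypoint /bin/bash "
        f"--env HOST_UID={uid} --env HOST_GID={gid} "
        f"--volume {shlex.quote(os.getcwd())}:{workdir} "
        f"--workdir {workdir} ubuntu "
        f"-c 'chown -R {uid}:{gid} {workdir}'"
    )

cmd = restore_ownership_cmd()
assert cmd.startswith("docker run --rm")
assert "chown -R" in cmd
```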
36 changes: 36 additions & 0 deletions Makefile
@@ -0,0 +1,36 @@
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

SHELL := /bin/bash
CURRENT_DIR = $(shell pwd)
DEFAULT_CLONE_URL := https://github.com/huggingface/optimum-benchmark.git
# If CLONE_URL is empty, fall back to DEFAULT_CLONE_URL
REAL_CLONE_URL = $(if $(CLONE_URL),$(CLONE_URL),$(DEFAULT_CLONE_URL))

# Declare targets that don't produce files
.PHONY: style_check style test

# Run code quality checks
style_check:
	black --check .
	ruff .

# Format the code
style:
	black .
	ruff . --fix

# Run tests for the library
test:
	python -m pytest tests
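The Makefile's `REAL_CLONE_URL = $(if $(CLONE_URL),$(CLONE_URL),$(DEFAULT_CLONE_URL))` uses Make's `$(if ...)` function: take `CLONE_URL` when it is non-empty, otherwise the default. The equivalent logic, sketched in Python for clarity:

```python
DEFAULT_CLONE_URL = "https://github.com/huggingface/optimum-benchmark.git"

def real_clone_url(env: dict) -> str:
    # $(if $(CLONE_URL),$(CLONE_URL),$(DEFAULT_CLONE_URL)):
    # an empty string counts as unset, just like in Make.
    return env.get("CLONE_URL") or DEFAULT_CLONE_URL

assert real_clone_url({}) == DEFAULT_CLONE_URL
assert real_clone_url({"CLONE_URL": ""}) == DEFAULT_CLONE_URL
assert real_clone_url({"CLONE_URL": "git@github.com:me/fork.git"}) == "git@github.com:me/fork.git"
```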
8 changes: 0 additions & 8 deletions docker/gpu.dockerfile
@@ -27,11 +27,3 @@ RUN apt-get install -y software-properties-common wget apt-utils patchelf git li
     apt-get clean
 RUN unattended-upgrade
 RUN apt-get autoremove -y
-RUN pip install --upgrade pip
-
-# this line forces the docker build to rebuild from this point on
-ARG CACHEBUST=1
-
-# Install optimum-benchmark dependencies
-COPY gpu_requirements.txt /tmp/gpu_requirements.txt
-RUN pip install -r /tmp/gpu_requirements.txt
@@ -63,10 +63,3 @@ RUN $PYTHON_EXE -m pip install torch-ort
 ENV TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX"
 RUN $PYTHON_EXE -m pip install --upgrade protobuf==3.20.2
 RUN $PYTHON_EXE -m torch_ort.configure
-
-# this line forces the docker build to rebuild from this point on
-ARG CACHEBUST=1
-
-# Install optimum-benchmark dependencies
-COPY requirements.txt /tmp/requirements.txt
-RUN pip install -r /tmp/requirements.txt
1 change: 0 additions & 1 deletion docker/scripts/build_gpu.sh

This file was deleted.

1 change: 0 additions & 1 deletion docker/scripts/build_ort_training.sh

This file was deleted.

12 changes: 6 additions & 6 deletions optimum_benchmark/backends/base.py
@@ -64,30 +64,30 @@ def __init__(self, model: str, task: str, device: str, hub_kwargs: Dict[str, Any

         if self.is_diffusion_pipeline():
             # for pipelines
+            self.library = "diffusers"
+            self.model_type = self.task
             self.pretrained_config = None
             self.pretrained_processor = None
-            self.model_type = self.task
         else:
             # for models
+            self.library = "transformers"
             self.pretrained_config = AutoConfig.from_pretrained(
                 pretrained_model_name_or_path=self.model, **self.hub_kwargs
             )
             self.model_type = self.pretrained_config.model_type

             try:
-                # the processor sometimes contains information about the model's
-                # input shapes that's not available in the config
+                # the processor sometimes contains information about the model's input shapes that's not available in the config
                 self.pretrained_processor = AutoProcessor.from_pretrained(
                     pretrained_model_name_or_path=self.model, **self.hub_kwargs
                 )
             except ValueError:
                 # sometimes the processor is not available or can't be determined/detected
                 LOGGER.warning("Could not find the model's preprocessor")
                 self.pretrained_processor = None

         self.automodel_class = TasksManager.get_model_class_for_task(
-            task=self.task,
-            framework="pt",
-            model_type=self.model_type,
+            task=self.task, library=self.library, model_type=self.model_type
         )

     def is_text_generation_model(self) -> bool:
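The `base.py` change threads a new `self.library` attribute through to `TasksManager.get_model_class_for_task`, replacing the fixed `framework="pt"` arguments. A simplified, hypothetical sketch of the branching it introduces (the function name is illustrative, not from the diff):

```python
from typing import Optional, Tuple

def resolve_library_and_model_type(
    is_diffusion_pipeline: bool, task: str, config_model_type: Optional[str]
) -> Tuple[str, str]:
    # Diffusion pipelines have no AutoConfig, so the task doubles as the
    # model type; plain models read model_type from their pretrained config.
    if is_diffusion_pipeline:
        return "diffusers", task
    return "transformers", config_model_type

assert resolve_library_and_model_type(True, "stable-diffusion", None) == ("diffusers", "stable-diffusion")
assert resolve_library_and_model_type(False, "text-classification", "bert") == ("transformers", "bert")
```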