Add Stable Diffusion examples on GPU and CPU (#11166)
* add sdxl and lcm-lora
* readme
* modify
* add cpu
* add license
* modify
* add file
Showing 8 changed files with 371 additions and 2 deletions.
@@ -0,0 +1,45 @@
# Stable Diffusion
In this directory, you will find examples of how to run Stable Diffusion models on CPU.

### 1. Installation
#### 1.1 Install IPEX-LLM
Follow the instructions in the [IPEX-LLM CPU installation guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_cpu.html) to install ipex-llm. We recommend using Miniconda to manage your Python environment.

#### 1.2 Install dependencies for Stable Diffusion
Assuming you have created a conda environment named `diffusion` with ipex-llm installed, run the commands below to install the dependencies for running Stable Diffusion.
```bash
conda activate diffusion
pip install diffusers["torch"] transformers
pip install -U PEFT transformers
pip install setuptools==69.5.1
```
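
As an optional sanity check that the environment is set up correctly, you can confirm that the key packages import cleanly before running the examples:
```bash
python -c "import ipex_llm, diffusers, transformers; print(diffusers.__version__)"
```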

### 2. Examples

#### 2.1 Stable Diffusion XL Example
This example shows how to run Stable Diffusion XL on an Intel CPU.
```bash
python ./sdxl.py
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id of the Stable Diffusion XL model (e.g. `stabilityai/stable-diffusion-xl-base-1.0`) to be downloaded, or the path to a Hugging Face checkpoint folder. It defaults to `'stabilityai/stable-diffusion-xl-base-1.0'`.
- `--prompt PROMPT`: argument defining the prompt used for inference. It defaults to `'An astronaut in the forest, detailed, 8k'`.
- `--save-path`: argument defining the path where the generated image is saved. It defaults to `sdxl-cpu.png`.
- `--num-steps`: argument defining the number of inference steps. It defaults to `20`.
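
For example, to generate an image from your own prompt and save it to a custom path (the prompt and filename below are only illustrations):
```bash
python ./sdxl.py --prompt "A lighthouse on a cliff at sunset, detailed, 8k" --num-steps 20 --save-path my-sdxl.png
```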

The sample output image is shown below.
![image](https://llm-assets.readthedocs.io/en/latest/_images/sdxl-cpu.png)

#### 2.2 LCM-LoRA Example
This example shows how to perform inference with LCM-LoRA on an Intel CPU.
```bash
python ./lora-lcm.py
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id of the Stable Diffusion XL model (e.g. `stabilityai/stable-diffusion-xl-base-1.0`) to be downloaded, or the path to a Hugging Face checkpoint folder. It defaults to `'stabilityai/stable-diffusion-xl-base-1.0'`.
- `--lora-weights-path`: argument defining the Hugging Face repo id of the LCM-LoRA weights (e.g. `latent-consistency/lcm-lora-sdxl`) to be downloaded, or the path to a Hugging Face checkpoint folder. It defaults to `'latent-consistency/lcm-lora-sdxl'`.
- `--prompt PROMPT`: argument defining the prompt used for inference. It defaults to `'A lovely dog on the table, detailed, 8k'`.
- `--save-path`: argument defining the path where the generated image is saved. It defaults to `lcm-lora-sdxl-cpu.png`.
- `--num-steps`: argument defining the number of inference steps. It defaults to `4`.
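
For example, to run with explicitly specified LoRA weights and a custom prompt (the values below are only illustrative):
```bash
python ./lora-lcm.py --lora-weights-path latent-consistency/lcm-lora-sdxl --prompt "A watercolor painting of a lighthouse, detailed" --num-steps 4
```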
@@ -0,0 +1,55 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Code is adapted from https://huggingface.co/docs/diffusers/main/en/using-diffusers/inference_with_lcm_lora

import torch
from diffusers import DiffusionPipeline, LCMScheduler
import ipex_llm
import argparse


def main(args):
    pipe = DiffusionPipeline.from_pretrained(
        args.repo_id_or_model_path,
        torch_dtype=torch.bfloat16,
    ).to("cpu")

    # set scheduler
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

    # load LCM-LoRA
    pipe.load_lora_weights(args.lora_weights_path)

    generator = torch.manual_seed(42)
    image = pipe(
        prompt=args.prompt, num_inference_steps=args.num_steps, generator=generator, guidance_scale=1.0
    ).images[0]
    image.save(args.save_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Stable Diffusion lora-lcm")
    parser.add_argument('--repo-id-or-model-path', type=str, default="stabilityai/stable-diffusion-xl-base-1.0",
                        help='The huggingface repo id for the stable diffusion model checkpoint')
    parser.add_argument('--lora-weights-path', type=str, default="latent-consistency/lcm-lora-sdxl",
                        help='The huggingface repo id for the lcm lora sdxl checkpoint')
    parser.add_argument('--prompt', type=str, default="A lovely dog on the table, detailed, 8k",
                        help='Prompt to infer')
    parser.add_argument('--save-path', type=str, default="lcm-lora-sdxl-cpu.png",
                        help="Path to save the generated figure")
    parser.add_argument('--num-steps', type=int, default=4,
                        help="Number of inference steps")
    args = parser.parse_args()
    main(args)
@@ -0,0 +1,47 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Code is adapted from https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl

from diffusers import AutoPipelineForText2Image
import torch
import ipex_llm
import numpy as np
from PIL import Image
import argparse

def main(args):
    pipeline_text2image = AutoPipelineForText2Image.from_pretrained(
        args.repo_id_or_model_path,
        torch_dtype=torch.bfloat16,  # bfloat16 on CPU: float16 is poorly supported by many CPU kernels
        use_safetensors=True
    ).to("cpu")

    image = pipeline_text2image(prompt=args.prompt, num_inference_steps=args.num_steps).images[0]
    image.save(args.save_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Stable Diffusion")
    parser.add_argument('--repo-id-or-model-path', type=str, default="stabilityai/stable-diffusion-xl-base-1.0",
                        help='The huggingface repo id for the stable diffusion model checkpoint')
    parser.add_argument('--prompt', type=str, default="An astronaut in the forest, detailed, 8k",
                        help='Prompt to infer')
    parser.add_argument('--save-path', type=str, default="sdxl-cpu.png",
                        help="Path to save the generated figure")
    parser.add_argument('--num-steps', type=int, default=20,
                        help="Number of inference steps")
    args = parser.parse_args()
    main(args)
@@ -0,0 +1,119 @@
# Stable Diffusion
In this directory, you will find examples of how to run Stable Diffusion models on [Intel GPUs](../README.md).

### 1. Installation
#### 1.1 Install IPEX-LLM
Follow the instructions in the IPEX-LLM GPU installation guides ([Linux Guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_linux_gpu.html), [Windows Guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html)) for your system to install IPEX-LLM. After installation, you should have created a conda environment, named `diffusion` for instance.

#### 1.2 Install dependencies for Stable Diffusion
Assuming you have created a conda environment named `diffusion` with ipex-llm installed, run the commands below to install the dependencies for running Stable Diffusion.
```bash
conda activate diffusion
pip install diffusers["torch"] transformers
pip install -U PEFT transformers
```
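
As an optional sanity check that the GPU is visible to PyTorch (this assumes the IPEX-LLM GPU installation has set up `intel_extension_for_pytorch`, which provides the `torch.xpu` namespace), you can run:
```bash
python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.xpu.is_available())"
```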

### 2. Configure OneAPI environment variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux when oneAPI is installed via APT or the offline installer. Skip this step if oneAPI was installed via pip.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```

</details>

<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60, it may take several minutes to compile.

### 4. Examples

#### 4.1 Stable Diffusion XL Example
This example shows how to run Stable Diffusion XL on an Intel GPU.
```bash
python ./sdxl.py
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id of the Stable Diffusion XL model (e.g. `stabilityai/stable-diffusion-xl-base-1.0`) to be downloaded, or the path to a Hugging Face checkpoint folder. It defaults to `'stabilityai/stable-diffusion-xl-base-1.0'`.
- `--prompt PROMPT`: argument defining the prompt used for inference. It defaults to `'An astronaut in the forest, detailed, 8k'`.
- `--save-path`: argument defining the path where the generated image is saved. It defaults to `sdxl-gpu.png`.
- `--num-steps`: argument defining the number of inference steps. It defaults to `20`.
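
For example, to override the default prompt, number of steps, and output path (the values below are only illustrative):
```bash
python ./sdxl.py --prompt "A lighthouse on a cliff at sunset, detailed, 8k" --num-steps 30 --save-path my-sdxl.png
```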

The sample output image is shown below.
![image](https://llm-assets.readthedocs.io/en/latest/_images/sdxl-gpu.png)

#### 4.2 LCM-LoRA Example
This example shows how to perform inference with LCM-LoRA on an Intel GPU.
```bash
python ./lora-lcm.py
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id of the Stable Diffusion XL model (e.g. `stabilityai/stable-diffusion-xl-base-1.0`) to be downloaded, or the path to a Hugging Face checkpoint folder. It defaults to `'stabilityai/stable-diffusion-xl-base-1.0'`.
- `--lora-weights-path`: argument defining the Hugging Face repo id of the LCM-LoRA weights (e.g. `latent-consistency/lcm-lora-sdxl`) to be downloaded, or the path to a Hugging Face checkpoint folder. It defaults to `'latent-consistency/lcm-lora-sdxl'`.
- `--prompt PROMPT`: argument defining the prompt used for inference. It defaults to `'A lovely dog on the table, detailed, 8k'`.
- `--save-path`: argument defining the path where the generated image is saved. It defaults to `lcm-lora-sdxl-gpu.png`.
- `--num-steps`: argument defining the number of inference steps. It defaults to `4`.
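
For example, to run with explicitly specified LoRA weights and a custom prompt (the values below are only illustrative):
```bash
python ./lora-lcm.py --lora-weights-path latent-consistency/lcm-lora-sdxl --prompt "A watercolor painting of a lighthouse, detailed" --num-steps 4
```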
@@ -0,0 +1,55 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Code is adapted from https://huggingface.co/docs/diffusers/main/en/using-diffusers/inference_with_lcm_lora

import torch
from diffusers import DiffusionPipeline, LCMScheduler
import ipex_llm
import argparse


def main(args):
    pipe = DiffusionPipeline.from_pretrained(
        args.repo_id_or_model_path,
        torch_dtype=torch.bfloat16,
    ).to("xpu")

    # set scheduler
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

    # load LCM-LoRA
    pipe.load_lora_weights(args.lora_weights_path)

    generator = torch.manual_seed(42)
    image = pipe(
        prompt=args.prompt, num_inference_steps=args.num_steps, generator=generator, guidance_scale=1.0
    ).images[0]
    image.save(args.save_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Stable Diffusion lora-lcm")
    parser.add_argument('--repo-id-or-model-path', type=str, default="stabilityai/stable-diffusion-xl-base-1.0",
                        help='The huggingface repo id for the stable diffusion model checkpoint')
    parser.add_argument('--lora-weights-path', type=str, default="latent-consistency/lcm-lora-sdxl",
                        help='The huggingface repo id for the lcm lora sdxl checkpoint')
    parser.add_argument('--prompt', type=str, default="A lovely dog on the table, detailed, 8k",
                        help='Prompt to infer')
    parser.add_argument('--save-path', type=str, default="lcm-lora-sdxl-gpu.png",
                        help="Path to save the generated figure")
    parser.add_argument('--num-steps', type=int, default=4,
                        help="Number of inference steps")
    args = parser.parse_args()
    main(args)
@@ -0,0 +1,47 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Code is adapted from https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl

from diffusers import AutoPipelineForText2Image
import torch
import ipex_llm
import numpy as np
from PIL import Image
import argparse


def main(args):
    pipeline_text2image = AutoPipelineForText2Image.from_pretrained(
        args.repo_id_or_model_path,
        torch_dtype=torch.bfloat16,
        use_safetensors=True
    ).to("xpu")

    image = pipeline_text2image(prompt=args.prompt, num_inference_steps=args.num_steps).images[0]
    image.save(args.save_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Stable Diffusion")
    parser.add_argument('--repo-id-or-model-path', type=str, default="stabilityai/stable-diffusion-xl-base-1.0",
                        help='The huggingface repo id for the stable diffusion model checkpoint')
    parser.add_argument('--prompt', type=str, default="An astronaut in the forest, detailed, 8k",
                        help='Prompt to infer')
    parser.add_argument('--save-path', type=str, default="sdxl-gpu.png",
                        help="Path to save the generated figure")
    parser.add_argument('--num-steps', type=int, default=20,
                        help="Number of inference steps")
    args = parser.parse_args()
    main(args)