Commit 710f8c8 (0 parents): showing 314 changed files with 31,841 additions and 0 deletions.
This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.
# Sphinx build info version 1
# This file records the configuration used when building these files. When it is not found, a full rebuild will be done.
config: a41fc0cab2f773448e4ba86f392a4181
tags: 645f666f9bcd5a90fca523b33c5a78b7
Diffusers
===========

.. toctree::
   :maxdepth: 2

   install.rst
   quick_start.rst
Installation Guide
==================

This tutorial is intended for developers using Diffusers on Ascend NPUs and walks through installing Diffusers in an Ascend environment.

Setting up the Ascend environment
---------------------------------

Follow the :doc:`Ascend environment quick-install guide <../ascend/quick_install>` for your Ascend product model and CPU architecture, or directly pull an Ascend environment image for your product from `ascendai/cann <https://hub.docker.com/r/ascendai/cann/tags>`_.

.. warning::
   The minimum supported CANN version is 8.0.rc1. When installing CANN, also install the Kernel operator package.

Installing Diffusers
---------------------

Creating a Python environment
------------------------------

.. code-block:: shell
   :linenos:

   # Create a Python 3.10 virtual environment named "diffusers"
   conda create -y -n diffusers python=3.10
   # Activate the virtual environment
   conda activate diffusers

Installing with pip
--------------------

Install Diffusers and torch-npu with the following command:

.. code-block:: shell
   :linenos:

   pip install diffusers torch==2.2.0 torch-npu==2.2.0 torchvision -i https://pypi.tuna.tsinghua.edu.cn/simple

Verifying the installation
---------------------------

Run the following code. If it raises no errors and only prints the model download progress, the installation succeeded:

.. code-block:: python
   :linenos:

   from diffusers import DiffusionPipeline
   import torch

   pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
   pipeline.to("npu")
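If you also want to confirm that the Ascend NPU device itself is visible to PyTorch, the optional check below can help. It is not part of the original guide; it is a minimal sketch that assumes ``torch`` and ``torch-npu`` were installed as above.

.. code-block:: python

   import torch
   import torch_npu  # importing torch_npu registers the "npu" device with PyTorch

   print(torch.npu.is_available())   # expected to print True when an NPU and its driver are present
   print(torch.npu.device_count())   # number of visible NPU devices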
Quick Start
==================

.. note::
   Before reading this tutorial, make sure you have prepared the Ascend environment and Diffusers as described in the :doc:`installation guide <./install>`!

This example uses the text-to-image task from the Diffusers library to show how to fine-tune the text-to-image model stable-diffusion-xl-base-1.0 with LoRA and how to run inference with dynamically merged LoRA weights.

Text-to-Image
-------------

.. _download:

Downloading the model and dataset
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Download the `stabilityai/stable-diffusion-xl-base-1.0 <https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0>`_ model to a path of your choice in advance.

2. Download the `madebyollin/sdxl-vae-fp16-fix <https://huggingface.co/madebyollin/sdxl-vae-fp16-fix>`_ model to a path of your choice in advance.

3. Download the `reach-vb/pokemon-blip-captions <https://huggingface.co/datasets/reach-vb/pokemon-blip-captions>`_ dataset to a path of your choice in advance.
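The guide does not prescribe a particular download method; if you prefer to script the downloads, the following optional sketch uses ``huggingface_hub.snapshot_download``, with target directories chosen only as examples that match the paths used later in this tutorial.

.. code-block:: python

   import os
   from huggingface_hub import snapshot_download

   # Base model and fp16-safe VAE weights
   snapshot_download("stabilityai/stable-diffusion-xl-base-1.0",
                     local_dir="./models_ckpt/stable-diffusion-xl-base-1.0/")
   snapshot_download("madebyollin/sdxl-vae-fp16-fix",
                     local_dir="./ckpt/sdxl-vae-fp16-fix")
   # Training dataset (note repo_type="dataset")
   snapshot_download("reach-vb/pokemon-blip-captions", repo_type="dataset",
                     local_dir=os.path.expanduser("~/diffusers/data/pokemon-blip-captions"))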
.. _finetune:

LoRA fine-tuning
~~~~~~~~~~~~~~~~~~~~

Enter the Diffusers project directory, then create and run the following script:

.. note::

   Set the stable-diffusion-xl-base-1.0 model cache path ``MODEL_NAME``, the sdxl-vae-fp16-fix model cache path ``VAE_NAME`` and the pokemon-blip-captions dataset cache path ``DATASET_NAME`` according to the actual cache paths from :ref:`download`.

.. code-block:: shell
   :linenos:
   :emphasize-lines: 1,2,3

   export MODEL_NAME="./models_ckpt/stable-diffusion-xl-base-1.0/"
   export VAE_NAME="./ckpt/sdxl-vae-fp16-fix"
   export DATASET_NAME="~/diffusers/data/pokemon-blip-captions/pokemon"
   python3 ./examples/text_to_image/train_text_to_image_lora_sdxl.py \
     --pretrained_model_name_or_path=$MODEL_NAME \
     --pretrained_vae_model_name_or_path=$VAE_NAME \
     --dataset_name=$DATASET_NAME --caption_column="text" \
     --resolution=1024 \
     --random_flip \
     --train_batch_size=1 \
     --num_train_epochs=2 \
     --checkpointing_steps=500 \
     --learning_rate=1e-04 \
     --lr_scheduler="constant" \
     --lr_warmup_steps=0 \
     --mixed_precision="no" \
     --seed=42 \
     --output_dir="sd-pokemon-model-lora-sdxl" \
     --validation_prompt="cute dragon creature"

If fine-tuning finishes without errors and the terminal shows a ``Steps: 100%`` progress bar, the fine-tuning succeeded.
Inference with dynamically merged LoRA weights
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::

   Set ``model_path`` according to the actual model cache path from :ref:`download`.

   Set ``lora_model_path`` according to the LoRA output path ``output_dir`` specified in :ref:`finetune`.

   [Optional] Changing ``prompt`` changes the generated image.

.. code-block:: python
   :linenos:
   :emphasize-lines: 9

   from diffusers import DiffusionPipeline
   import torch
   lora_model_path = "path/to/sd-pokemon-model-lora-sdxl/checkpoint-800/"
   model_path = "./models_ckpt/stable-diffusion-xl-base-1.0/"
   pipe = DiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
   # Move the model onto the NPU
   pipe.to("npu")
   # Load the LoRA weights
   pipe.load_lora_weights(lora_model_path)
   # Input prompt
   prompt = "Sylveon Pokemon with elegant features, magical design, \
   light purple aura, extremely detailed and intricate markings, \
   photo realistic, unreal engine, octane render"
   # Run inference
   image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
   image.save("pokemon-finetuned-inference-generation.png")

If the script runs without errors and the terminal shows a ``Loading pipeline components...: 100%`` progress bar, the pipeline loaded successfully.
If the ``pokemon-finetuned-inference-generation.png`` image saved in the current directory matches the content described by ``prompt``, the inference succeeded.
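As an optional follow-up that is not part of the original walkthrough, newer diffusers releases also let you fuse the loaded LoRA weights directly into the base model, avoiding the dynamic merge on every call; availability depends on your diffusers version.

.. code-block:: python

   # Optional: fold the loaded LoRA weights into the base model weights
   pipe.fuse_lora()
   image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
   # Revert to the plain base weights when the fusion is no longer wanted
   pipe.unfuse_lora()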
Accelerate
==============

.. toctree::
   :maxdepth: 2

   install.rst
   quick_start.rst
Installation Guide
==================

This tutorial is intended for developers using Accelerate on Ascend NPUs and walks through installing Accelerate in an Ascend environment.

Downloading and installing Accelerate
--------------------------------------

.. note::

   Before reading this section, make sure you have prepared the Ascend environment as described in the :doc:`installation guide <./install>`!
   Alternatively, directly use an image that ships with the Ascend environment, such as `ascendai/cann:8.0.rc1-910b-ubuntu22.04 <https://hub.docker.com/layers/ascendai/cann/8.0.rc1-910b-ubuntu22.04/images/sha256-29ef8aacf6b2babd292f06f00b9190c212e7c79a947411e213135e4d41a178a9?context=explore>`_;
   more versions are available at `ascendai/cann <https://hub.docker.com/r/ascendai/cann/tags>`_.

Starting the container
::::::::::::::::::::::

.. code-block:: shell

   docker run -itd --network host \
       -v /usr/local/dcmi:/usr/local/dcmi \
       -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
       -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
       -v /etc/ascend_install.info:/etc/ascend_install.info \
       --device /dev/davinci7 \
       --device /dev/davinci_manager \
       --device /dev/devmm_svm \
       --device /dev/hisi_hdc \
       --shm-size 16G \
       --name accelerate \
       ascendai/cann:8.0.rc1-910b-ubuntu22.04 bash

Installing Accelerate and its dependencies
:::::::::::::::::::::::::::::::::::::::::::

.. code-block:: shell

   pip install torch==2.2.0 torch_npu==2.2.0 accelerate -i https://pypi.tuna.tsinghua.edu.cn/simple
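The guide itself stops at installation; as an optional sanity check that is not part of the original text, the minimal sketch below (assuming the packages above installed cleanly) confirms that Accelerate picks up the NPU.

.. code-block:: python

   import torch
   import torch_npu                     # importing torch_npu registers the "npu" device with PyTorch
   from accelerate import Accelerator

   accelerator = Accelerator()          # should select the NPU automatically when torch-npu is usable
   print(accelerator.device)            # expected to print a device such as "npu:0"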
Quick Start
============

.. note::
   Before reading this tutorial, make sure you have prepared the Ascend environment and Accelerate as described in the :doc:`installation guide <./install>`!

This tutorial uses a simple NLP model as an example to show how to train a model on Ascend NPUs with Accelerate.

Prerequisites
--------------

This tutorial also uses other parts of the HuggingFace tool chain as well as the scikit-learn library; install them with the following command:

.. code-block:: shell

   pip install datasets evaluate transformers scikit-learn -i https://pypi.tuna.tsinghua.edu.cn/simple

The sample code used here is the official Accelerate example, which needs to be downloaded in advance:

.. code-block:: shell

   git clone https://github.com/huggingface/accelerate.git

Model training
---------------

.. code-block:: shell
   :linenos:

   # Switch to the HF mirror domain, which makes dataset and model downloads easier for users in mainland China
   export HF_ENDPOINT=https://hf-mirror.com
   # Enter the examples directory
   cd accelerate/examples
   # Train the model
   python nlp_example.py
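Before reading the training log, it may help to see the core Accelerate pattern that ``nlp_example.py`` is built around. The sketch below is a simplified, hypothetical stand-in (the model, optimizer and data are placeholders, not the BERT/GLUE setup of the real example); it only illustrates how ``Accelerator`` moves everything onto the NPU and takes over the backward pass.

.. code-block:: python

   import torch
   from accelerate import Accelerator

   accelerator = Accelerator()                       # detects the NPU when torch-npu is installed

   # Placeholder model, optimizer and synthetic data standing in for BERT + GLUE/MRPC
   model = torch.nn.Linear(128, 2)
   optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
   dataset = torch.utils.data.TensorDataset(torch.randn(64, 128), torch.randint(0, 2, (64,)))
   train_loader = torch.utils.data.DataLoader(dataset, batch_size=8)

   # prepare() moves the model, optimizer and dataloader to accelerator.device and wraps them
   model, optimizer, train_loader = accelerator.prepare(model, optimizer, train_loader)

   loss_fn = torch.nn.CrossEntropyLoss()
   model.train()
   for inputs, labels in train_loader:
       optimizer.zero_grad()
       loss = loss_fn(model(inputs), labels)
       accelerator.backward(loss)                    # replaces the usual loss.backward()
       optimizer.step()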
A log like the following indicates that the training succeeded:

::

   Downloading builder script: 5.75kB [00:01, 3.69kB/s]
   tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████| 49.0/49.0 [00:00<00:00, 237kB/s]
   config.json: 570B [00:00, 2.23MB/s]
   vocab.txt: 79.5kB [00:12, 3.45kB/s]Error while downloading from https://hf-mirror.com/bert-base-cased/resolve/main/vocab.txt: HTTPSConnectionPool(host='hf-mirror.com', port=443): Read timed out.
   Trying to resume download...
   vocab.txt: 213kB [00:07, 15.5kB/s]]
   vocab.txt: 91.4kB [00:32, 2.81kB/s]
   tokenizer.json: 436kB [00:19, 22.8kB/s]
   Downloading readme: 35.3kB [00:01, 26.4kB/s]
   Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 649k/649k [00:02<00:00, 288kB/s]
   Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 75.7k/75.7k [00:00<00:00, 77.8kB/s]
   Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 308k/308k [00:01<00:00, 204kB/s]
   Generating train split: 100%|███████████████████████████████████████████████████████████████████████████| 3668/3668 [00:00<00:00, 27701.23 examples/s]
   Generating validation split: 100%|████████████████████████████████████████████████████████████████████████| 408/408 [00:00<00:00, 73426.42 examples/s]
   Generating test split: 100%|███████████████████████████████████████████████████████████████████████████| 1725/1725 [00:00<00:00, 246370.91 examples/s]
   Map: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 3668/3668 [00:01<00:00, 3378.05 examples/s]
   Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 408/408 [00:00<00:00, 3553.72 examples/s]
   Map: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1725/1725 [00:00<00:00, 5109.03 examples/s]
   model.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 436M/436M [02:42<00:00, 2.68MB/s]
   Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']
   You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
   huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
   To disable this warning, you can either:
   - Avoid using `tokenizers` before the fork if possible
   - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
   You're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
   epoch 0: {'accuracy': 0.8014705882352942, 'f1': 0.8439306358381503}
   epoch 1: {'accuracy': 0.8578431372549019, 'f1': 0.8975265017667845}
   epoch 2: {'accuracy': 0.8700980392156863, 'f1': 0.9087779690189329}