Merge branch 'main' into lf
MengqingCao authored Jul 17, 2024
2 parents eeeac20 + 8946f7d commit 6b501f5
Showing 15 changed files with 1,069 additions and 11 deletions.
21 changes: 21 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,21 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.8"

sphinx:
  configuration: conf.py

# If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
  - pdf

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: requirements.txt
25 changes: 14 additions & 11 deletions index.rst
@@ -20,6 +20,9 @@

sources/pytorch/index.rst
sources/llamafactory/index.rst
sources/accelerate/index.rst
sources/transformers/index.rst
sources/onnxruntime/index.rst
sources/open_clip/index.rst
sources/timm/index.rst

@@ -82,11 +85,11 @@
</div>
<div class="flex-grow"></div>
<div class="flex space-x-4 text-blue-600">
<a href="#">官方链接</a>
<a href="https://github.com/microsoft/onnxruntime">官方链接</a>
<span class="split">|</span>
<a href="#">安装指南</a>
<a href="sources/onnxruntime/install.html">安装指南</a>
<span class="split">|</span>
<a href="#">快速上手</a>
<a href="sources/onnxruntime/quick_start.html">快速上手</a>
</div>
</div>
<!-- Card 4 -->
@@ -137,7 +140,7 @@
</div>
<div class="flex-grow"></div>
<div class="flex space-x-4 text-blue-600">
<a href="#">官方链接</a>
<a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">官方链接</a>
<span class="split">|</span>
<a href="#">安装指南</a>
<span class="split">|</span>
@@ -156,11 +159,11 @@
</div>
<div class="flex-grow"></div>
<div class="flex space-x-4 text-blue-600">
<a href="#">官方链接</a>
<a href="https://huggingface.co/docs/transformers/index">官方链接</a>
<span class="split">|</span>
<a href="#">安装指南</a>
<a href="href="sources/transformers/install.html">安装指南</a>
<span class="split">|</span>
<a href="#">快速上手</a>
<a href="href="sources/transformers/fine-tune.html">快速上手</a>
</div>
</div>
<!-- Card 8 -->
@@ -187,16 +190,16 @@
<div class="img w-16 h-16 rounded-md mr-4" style="background-image: url('_static/images/huggingface.png')"></div>
<div>
<h2 class="text-lg font-semibold">Accelerate</h2>
<p class="text-gray-600 desc">图像和音频生成等扩散模型工具链</p>
<p class="text-gray-600 desc">适用于Pytorch的多GPUs训练工具链</p>
</div>
</div>
<div class="flex-grow"></div>
<div class="flex space-x-4 text-blue-600">
<a href="#">官方链接</a>
<a href="https://github.com/huggingface/accelerate">官方链接</a>
<span class="split">|</span>
<a href="#">安装指南</a>
<a href="sources/accelerate/install.html">安装指南</a>
<span class="split">|</span>
<a href="#">快速上手</a>
<a href="sources/accelerate/quick_start.html">快速上手</a>
</div>
</div>
<!-- Card 10 -->
8 changes: 8 additions & 0 deletions sources/accelerate/index.rst
@@ -0,0 +1,8 @@
Accelerate
==============

.. toctree::
    :maxdepth: 2

    install.rst
    quick_start.rst
28 changes: 28 additions & 0 deletions sources/accelerate/install.rst
@@ -0,0 +1,28 @@
Installation Guide
==================

This tutorial is intended for developers using Accelerate with Ascend hardware, and walks through installing Accelerate in an Ascend environment.

Downloading and Installing Accelerate
-------------------------------------

.. note::

    Before reading this section, make sure the Ascend environment is ready by following the :doc:`installation tutorial <../ascend/quick_install>`!
    Alternatively, start directly from an image with the Ascend environment preinstalled, such as `cosdt/cann:8.0.rc1-910b-ubuntu22.04 <https://hub.docker.com/layers/cosdt/cann/8.0.rc1-910b-ubuntu22.04/images/sha256-29ef8aacf6b2babd292f06f00b9190c212e7c79a947411e213135e4d41a178a9?context=explore>`_;
    more versions are available at `cosdt/cann <https://hub.docker.com/r/cosdt/cann/tags>`_.

Start the Container
:::::::::::::::::::

.. code-block:: shell

    docker run -itd --network host -v /usr/local/dcmi:/usr/local/dcmi -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi -v /usr/local/Ascend/driver:/usr/local/Ascend/driver -v /etc/ascend_install.info:/etc/ascend_install.info --device /dev/davinci7 --device /dev/davinci_manager --device /dev/devmm_svm --device /dev/hisi_hdc --shm-size 16G --name accelerate cosdt/cann:8.0.rc1-910b-ubuntu22.04 bash

Install Accelerate and Dependencies
:::::::::::::::::::::::::::::::::::

.. code-block:: shell

    pip install torch==2.2.0 torch_npu==2.2.0 accelerate -i https://pypi.tuna.tsinghua.edu.cn/simple
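
After installation, a quick sanity check can confirm that the NPU is visible to PyTorch. This is a minimal sketch, assuming that importing ``torch_npu`` registers the ``npu`` device with PyTorch:

.. code-block:: python

    import torch
    import torch_npu  # registers the Ascend NPU backend with PyTorch

    # Prints True when the driver, the CANN toolkit, and torch_npu are set up correctly
    print(torch.npu.is_available())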
69 changes: 69 additions & 0 deletions sources/accelerate/quick_start.rst
@@ -0,0 +1,69 @@
Quick Start
===========

.. note::
    Before reading this section, make sure the Ascend environment and Accelerate are ready by following the :doc:`installation guide <./install>`!

This tutorial uses a simple NLP model to show how to train a model on Ascend NPUs with Accelerate.

Prerequisites
-------------

This section also uses other HuggingFace toolchain libraries and scikit-learn; install them with the following command:

.. code-block:: shell

    pip install datasets evaluate transformers scikit-learn -i https://pypi.tuna.tsinghua.edu.cn/simple

The sample code in this section is the official Accelerate example and needs to be downloaded in advance:

.. code-block:: shell

    git clone https://github.com/huggingface/accelerate.git

Model Training
--------------

.. code-block:: shell
    :linenos:

    # Use the HF mirror endpoint so users in mainland China can download data and models
    export HF_ENDPOINT=https://hf-mirror.com
    # Enter the examples directory
    cd accelerate/examples
    # Train the model
    python nlp_example.py

Logs like the following indicate a successful training run:

::

Downloading builder script: 5.75kB [00:01, 3.69kB/s]
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████| 49.0/49.0 [00:00<00:00, 237kB/s]
config.json: 570B [00:00, 2.23MB/s]
vocab.txt: 79.5kB [00:12, 3.45kB/s]Error while downloading from https://hf-mirror.com/bert-base-cased/resolve/main/vocab.txt: HTTPSConnectionPool(host='hf-mirror.com', port=443): Read timed out.
Trying to resume download...
vocab.txt: 213kB [00:07, 15.5kB/s]]
vocab.txt: 91.4kB [00:32, 2.81kB/s]
tokenizer.json: 436kB [00:19, 22.8kB/s]
Downloading readme: 35.3kB [00:01, 26.4kB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 649k/649k [00:02<00:00, 288kB/s]
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 75.7k/75.7k [00:00<00:00, 77.8kB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 308k/308k [00:01<00:00, 204kB/s]
Generating train split: 100%|███████████████████████████████████████████████████████████████████████████| 3668/3668 [00:00<00:00, 27701.23 examples/s]
Generating validation split: 100%|████████████████████████████████████████████████████████████████████████| 408/408 [00:00<00:00, 73426.42 examples/s]
Generating test split: 100%|███████████████████████████████████████████████████████████████████████████| 1725/1725 [00:00<00:00, 246370.91 examples/s]
Map: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 3668/3668 [00:01<00:00, 3378.05 examples/s]
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 408/408 [00:00<00:00, 3553.72 examples/s]
Map: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1725/1725 [00:00<00:00, 5109.03 examples/s]
model.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 436M/436M [02:42<00:00, 2.68MB/s]
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
You're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
epoch 0: {'accuracy': 0.8014705882352942, 'f1': 0.8439306358381503}
epoch 1: {'accuracy': 0.8578431372549019, 'f1': 0.8975265017667845}
epoch 2: {'accuracy': 0.8700980392156863, 'f1': 0.9087779690189329}
8 changes: 8 additions & 0 deletions sources/onnxruntime/index.rst
@@ -0,0 +1,8 @@
ONNX Runtime
============

.. toctree::
    :maxdepth: 2

    install.rst
    quick_start.rst
33 changes: 33 additions & 0 deletions sources/onnxruntime/install.rst
@@ -0,0 +1,33 @@
Installation Guide
==================

This tutorial is intended for developers using ONNX Runtime with Ascend NPUs, and walks through installing ONNX Runtime in an Ascend environment.

.. note::

    Before reading this section, make sure the Ascend environment is ready by following the :doc:`installation tutorial <../ascend/quick_install>`!

Installing ONNX Runtime
-----------------------

ONNX Runtime currently offers two installation methods: building from source and prebuilt binary packages; the binary packages currently support Python only.

Installing from Source
^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: shell
    :linenos:

    # Default path, change it if needed.
    source /usr/local/Ascend/ascend-toolkit/set_env.sh
    ./build.sh --config <Release|Debug|RelWithDebInfo> --build_shared_lib --parallel --use_cann

Installing from pip
^^^^^^^^^^^^^^^^^^^

.. code-block:: shell
    :linenos:

    pip3 install onnxruntime-cann
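
After installation, a quick check can confirm that the CANN execution provider was built in. This is a minimal sketch; ``get_available_providers`` is part of the public ONNX Runtime Python API:

.. code-block:: python

    import onnxruntime as ort

    # "CANNExecutionProvider" should appear in this list if onnxruntime-cann installed correctly
    print(ort.get_available_providers())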
97 changes: 97 additions & 0 deletions sources/onnxruntime/quick_start.rst
@@ -0,0 +1,97 @@
Quick Start
===========

.. note::
    Before reading this section, make sure the Ascend environment and ONNX Runtime are ready by following the :doc:`installation guide <./install>`!

This tutorial uses a simple resnet50 model to show how to run model inference with ONNX Runtime on Ascend NPUs.

Environment Setup
-----------------

Install the additional libraries this tutorial depends on.

.. code-block:: shell
    :linenos:

    pip install numpy Pillow onnx

Model Preparation
-----------------

ONNX Runtime takes a model in the ONNX format as input; the main ways to obtain an ONNX model are:

1. Download a model from the `ONNX Model Zoo <https://onnx.ai/models/>`_.
2. Export an ONNX model from a framework such as torch or TensorFlow (see the sketch below).
3. Use a conversion tool to convert a model of another format to ONNX.
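
As an illustration of option 2, the following sketch exports a torchvision resnet50 to ONNX. It assumes torch and torchvision are installed; ``resnet50.onnx`` is a hypothetical output path:

.. code-block:: python

    import torch
    import torchvision

    # Pretrained resnet50 in eval mode; weights download on first use
    model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
    dummy_input = torch.randn(1, 3, 224, 224)  # NCHW input expected by resnet50
    torch.onnx.export(model, dummy_input, "resnet50.onnx", opset_version=16)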

The resnet50 model used in this tutorial was downloaded directly from the ONNX Model Zoo; see the `download link <https://github.com/onnx/models/blob/main/Computer_Vision/resnet50_Opset16_torch_hub/resnet50_Opset16.onnx>`_.

Class Labels
------------

The class labels map the output scores to human-readable class names; see the `download link <https://raw.githubusercontent.com/anishathalye/imagenet-simple-labels/master/imagenet-simple-labels.json>`_.

Model Inference
---------------

.. code-block:: python
    :linenos:

    import json
    import os

    import numpy as np
    import onnxruntime as ort
    from PIL import Image


    def preprocess(image_path):
        # Standard ImageNet preprocessing: RGB, 224x224, NCHW layout, normalized
        img = Image.open(image_path).convert('RGB')
        img = img.resize((224, 224))
        img = np.array(img).astype(np.float32)
        img = np.transpose(img, (2, 0, 1))
        img = img / 255.0
        mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
        std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)
        img = (img - mean) / std
        img = np.expand_dims(img, axis=0)
        # Normalization promotes the array to float64; the model expects float32
        return img.astype(np.float32)


    def inference(model_path, img):
        # Prefer the CANN execution provider (Ascend NPU), falling back to CPU
        options = ort.SessionOptions()
        providers = [
            (
                "CANNExecutionProvider",
                {
                    "device_id": 0,
                    "arena_extend_strategy": "kNextPowerOfTwo",
                    "npu_mem_limit": 2 * 1024 * 1024 * 1024,
                    "op_select_impl_mode": "high_performance",
                    "optypelist_for_implmode": "Gelu",
                    "enable_cann_graph": True
                },
            ),
            "CPUExecutionProvider",
        ]
        session = ort.InferenceSession(model_path, sess_options=options, providers=providers)
        input_name = session.get_inputs()[0].name
        output_name = session.get_outputs()[0].name
        result = session.run([output_name], {input_name: img})
        return result


    def display(classes_path, result):
        # The labels file linked above is a JSON array of class names
        with open(classes_path) as f:
            labels = json.load(f)
        pred_idx = np.argmax(result)
        print(f'Predicted class: {labels[pred_idx]} ({result[0][0][pred_idx]:.4f})')


    if __name__ == '__main__':
        # expanduser is needed: neither PIL nor ONNX Runtime expands '~' in paths
        model_path = os.path.expanduser('~/model/resnet/resnet50.onnx')
        image_path = os.path.expanduser('~/model/resnet/cat.jpg')
        classes_path = os.path.expanduser('~/model/resnet/imagenet-simple-labels.json')
        img = preprocess(image_path)
        result = inference(model_path, img)
        display(classes_path, result)
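
To confirm which provider actually serves a session, its provider list can be printed after creation. A small sketch, reusing the example model path from above; ``get_providers`` is part of the ``InferenceSession`` API:

.. code-block:: python

    import os
    import onnxruntime as ort

    model_path = os.path.expanduser('~/model/resnet/resnet50.onnx')
    session = ort.InferenceSession(model_path, providers=["CANNExecutionProvider", "CPUExecutionProvider"])
    # The first entry should be "CANNExecutionProvider" when the NPU is in use
    print(session.get_providers())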