⚡️ optim: optimize scripts
henryzhuhr committed Jun 3, 2024
1 parent d21124d commit 628c4b1
Showing 26 changed files with 491 additions and 492 deletions.
4 changes: 4 additions & 0 deletions .gitignore
@@ -1,9 +1,13 @@
# Ignore custom configuration files
scripts/*.custom.sh

tmp/
tmps/
temp/
temps/
resource/


# Vuepress/Vitepress
node_modules
docs/.vuepress/.temp
9 changes: 8 additions & 1 deletion README.md
@@ -1,3 +1,10 @@
# deep-object-detect-track

See the [documentation](https://henryzhuhr.github.io/deep-object-detect-track/)
See the [documentation](https://henryzhuhr.github.io/deep-object-detect-track/)

You can also serve the documentation locally:

```bash
pnpm install
pnpm docs:dev
```
35 changes: 17 additions & 18 deletions dlinfer/detector/b_onnx.py
@@ -10,22 +10,21 @@

class ONNXDetectorBackend(IDetectoBackends):
NAME = "ONNX"
SUPPORTED_VERISONS = ["1.8.0"]
SUPPORTED_VERISONS = []
SUPPORTED_DEVICES = available_providers

def __init__(
self,
device: str = "CPUExecutionProvider",
device: List[str] = ["CPUExecutionProvider"],
inputs: List[str] = ["input"], # TODO in case of multiple inputs
outputs: List[str] = ["output"], # TODO in case of multiple outputs
) -> None:
if device.lower() == "cpu":
device = "CPUExecutionProvider"
assert device in self.SUPPORTED_DEVICES, (
f"specify device {device} is not supported, "
f"please choose one of supported device: {self.SUPPORTED_DEVICES}"
)
self.providers = [device]
for provider in device:
assert provider in self.SUPPORTED_DEVICES, (
f"specify device {device} is not supported, "
f"please choose one of supported device: {self.SUPPORTED_DEVICES}"
)
self.providers = device
self.ort_session: ort.InferenceSession = None
self.inputs = inputs
self.outputs = outputs
@@ -45,15 +44,15 @@ def load_model(self, model_path: str, verbose: bool = False) -> None:
if verbose:
# fmt: off
print(self.ColorStr.info("Parsing ONNX info:"))
print(self.ColorStr.info(" - providers:"), self.ort_session.get_providers())
print(self.ColorStr.info(" --- inputs:"), binding__input_names)
print(self.ColorStr.info(" -- names:"), binding__input_names)
print(self.ColorStr.info(" - shapes:"), binding__input_shapes)
print(self.ColorStr.info(" -- types:"), binding__input_types)
print(self.ColorStr.info(" --- outputs:"), binding_output_names)
print(self.ColorStr.info(" -- names:"), binding_output_shapes)
print(self.ColorStr.info(" - shapes:"), binding_output_shapes)
print(self.ColorStr.info(" -- types:"), binding_output_types)
print(self.ColorStr.info("- providers:"), self.ort_session.get_providers())
print(self.ColorStr.info("-- inputs:"), binding__input_names)
print(self.ColorStr.info(" - names: "), binding__input_names)
print(self.ColorStr.info(" - shapes: "), binding__input_shapes)
print(self.ColorStr.info(" - types: "), binding__input_types)
print(self.ColorStr.info("-- outputs:"), binding_output_names)
print(self.ColorStr.info(" - names: "), binding_output_shapes)
print(self.ColorStr.info(" - shapes: "), binding_output_shapes)
print(self.ColorStr.info(" - types: "), binding_output_types)
# fmt: on

# fmt: off
5 changes: 3 additions & 2 deletions dlinfer/detector/b_openvino.py
@@ -76,6 +76,7 @@ def infer(self, input: np.ndarray) -> np.ndarray:
preds = next(iter(results.values()))
return preds

def query_device(self):
@staticmethod
def query_device():
"""Query available devices for OpenVINO backend."""
return ["AUTO"] + self.core.available_devices
return Core().available_devices
2 changes: 1 addition & 1 deletion dlinfer/detector/interface.py
@@ -48,7 +48,7 @@ def _check_version(self, version: str):
for sv in self.SUPPORTED_VERISONS:
if version.startswith(sv):
return
raise RuntimeError(
warnings.warn(
f"{self.NAME} version {version} is not supported, "
f"please upgrade to support version: {self.SUPPORTED_VERISONS}"
)
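
The change above downgrades the hard failure to a warning. A minimal self-contained sketch of the new behavior (the function and version list are hypothetical stand-ins, not the repo's actual class):

```python
import warnings

SUPPORTED_VERSIONS = ["1.8"]  # hypothetical supported-version prefixes

def check_version(name: str, version: str) -> None:
    # Return silently when the version matches a supported prefix;
    # otherwise emit a warning instead of raising, so inference can proceed.
    for sv in SUPPORTED_VERSIONS:
        if version.startswith(sv):
            return
    warnings.warn(
        f"{name} version {version} is not supported, "
        f"please upgrade to support version: {SUPPORTED_VERSIONS}"
    )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_version("ONNX", "1.7.0")  # unsupported -> one warning
    check_version("ONNX", "1.8.1")  # supported prefix -> silent
print(len(caught))  # → 1
```

With `warnings.warn`, callers can still opt back into strict behavior via `warnings.simplefilter("error")` if they want the old raise-on-mismatch semantics.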
25 changes: 12 additions & 13 deletions docs/dataset.md
@@ -11,30 +11,31 @@ outline: deep

Place the dataset in a directory such as:

```bash
```shell
DATASET_DIR=/path/to/dataset
```

> Note that the dataset usually needs to live outside the project, e.g. `~/data` (recommended). Placing it inside the project bloats the editor's index and makes the editor laggy.
A sample dataset has been prepared and can be downloaded:

```bash
```shell
wget -P ~/data https://github.com/HenryZhuHR/deep-object-detect-track/releases/download/v1.0.0/bottle.tar.xz
tar -xvf ~/data/bottle.tar.xz -C ~/data
mv ~/data/bottle ~/data/yolodataset
```

Then set the dataset directory to:
```bash
DATASET_DIR=~/data/bottle
```shell
DATASET_DIR=~/data/yolodataset
```

Build your own dataset following this layout, and complete the annotation.

Since targets of different classes can appear in the same image, the dataset does not have to be split by class; organize it however the project requires, as long as every image has a **label file** of the same name in the same directory.
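
The pairing rule can be checked mechanically. A hedged sketch (the helper name and sample files are hypothetical, not part of the repo):

```python
import os
import tempfile
from typing import List

def find_unlabeled(image_paths: List[str]) -> List[str]:
    # An image counts as labeled when a .txt file with the same
    # stem exists in the same directory.
    missing = []
    for path in image_paths:
        stem, _ = os.path.splitext(path)
        if not os.path.exists(stem + ".txt"):
            missing.append(path)
    return missing

with tempfile.TemporaryDirectory() as d:
    for name in ("a.jpg", "a.txt", "b.jpg"):
        open(os.path.join(d, name), "w").close()
    images = [os.path.join(d, n) for n in ("a.jpg", "b.jpg")]
    print([os.path.basename(p) for p in find_unlabeled(images)])  # → ['b.jpg']
```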

Directory structure organized by class:
```bash
```shell
·
└── /path/to/dataset
├── class_A
@@ -48,7 +49,7 @@ DATASET_DIR=~/data/bottle
```

Directory structure without class splits:
```bash
```shell
·
└── /path/to/dataset
├─ file_1.jpg
@@ -60,7 +61,7 @@ DATASET_DIR=~/data/bottle
## Launch the annotation tool

Annotate with labelImg; install and launch it:
```bash
```shell
pip install labelImg
labelImg
```
@@ -101,11 +102,11 @@ labelImg
## Data processing

Run the script to generate a directory of the same name with an `-organized` suffix, for example:
```bash
python dataset-process.py --datadir ~/data/bottle
```shell
python dataset-process.py --datadir ~/data/yolodataset
```

This generates `~/data/bottle-organized` for training, and that directory is the dataset path specified in yolov5
The generated directory `~/data/yolodataset-organized` is used for training, and it is the dataset path specified in yolov5

If you do not need to traverse the entire dataset, or the dataset uses a custom layout, pass a custom `custom_get_all_files` function to `get_all_label_files()` to collect all file paths; see `default_get_all_files()` for reference:

@@ -114,15 +115,13 @@ def default_get_all_files(directory: str):
file_paths: List[str] = []
for root, dirs, files in os.walk(directory):
for file in files:
if file in [".DS_Store"]:
continue
file_paths.append(os.path.join(root, file))
return file_paths
```
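
A self-contained version of the walk above, exercised against a throwaway temp directory (the sample file names are illustrative):

```python
import os
import tempfile
from typing import List

def default_get_all_files(directory: str) -> List[str]:
    # Recursively collect every file path under `directory`.
    file_paths: List[str] = []
    for root, dirs, files in os.walk(directory):
        for file in files:
            file_paths.append(os.path.join(root, file))
    return file_paths

with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, "class_A"))
    for name in ("class_A/file_1.jpg", "class_A/file_1.txt"):
        open(os.path.join(d, name), "w").close()
    print(sorted(os.path.basename(p) for p in default_get_all_files(d)))
    # → ['file_1.jpg', 'file_1.txt']
```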

Then pass it in at the call site:

```bash
```python
# -- get all label files, type: List[ImageLabel]
label_file_list = get_all_label_files(args.datadir) # [!code --]
label_file_list = get_all_label_files( # [!code ++]
126 changes: 110 additions & 16 deletions docs/deploy.md
@@ -9,41 +9,135 @@ outline: deep

## Export the model

A script has been written that can export directly
An export script `scripts/export-yolov5.sh` is provided; copy it into the project directory and customize the copy (recommended)

```shell
# inspect the script and adjust the parameters
bash scripts/export-yolov5.sh
cp scripts/export-yolov5.sh scripts/export-yolov5.custom.sh
```

You can also export manually; enter the yolov5 project directory
Inspect `scripts/export-yolov5.custom.sh`, adjust the parameters for your project, then run:

```shell
cd projects/yolov5
bash scripts/export-yolov5.custom.sh
#zsh scripts/export-yolov5.custom.sh # zsh
```
Set the parameters and run:
```shell
python export.py \
--weights ../weights/yolov5s.pt \
--data data/coco128.yaml \
--include onnx openvino
```
- `--weights` model path: change this to your own trained model
- `--include` export formats: multiple formats can be exported, separated by spaces
- `--data` the dataset yaml used during training, mainly for its class names
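
The flags map onto a plain CLI. A minimal argparse sketch that mirrors the options used above (defaults are illustrative, not yolov5's actual parser):

```python
import argparse

parser = argparse.ArgumentParser(description="sketch of the export CLI")
parser.add_argument("--weights", default="yolov5s.pt", help="path to the trained model")
parser.add_argument("--data", default="data/coco128.yaml", help="dataset yaml with class names")
parser.add_argument("--include", nargs="+", default=["onnx"], help="export formats, space-separated")

# Parse the same arguments as the command shown above.
args = parser.parse_args(
    ["--weights", "../weights/yolov5s.pt", "--include", "onnx", "openvino"]
)
print(args.include)  # → ['onnx', 'openvino']
```

`nargs="+"` is what lets `--include` accept several space-separated formats in one flag.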


## Deploy the model

For deployment, create a dedicated virtual environment and do a **minimal install** of only the dependencies mentioned here, to avoid pulling in unnecessary packages (e.g. pytorch); the deployment environments below have all been tested

### ONNX deployment

Set `ENV_NAME` in `scripts/variables.custom.sh` as follows:

```shell
# export ENV_NAME="" # -- Uncomment to customize the environment name # [!code --]
export ENV_NAME="deploy-onnx" # [!code ++]
export ENV_PATH=$BASE_ENV_PATH/.env/$ENV_NAME
```

Then run the script to create and activate the virtual environment:

::: code-group

```shell [with venv]
bash scripts/create-python-env.sh -e venv -ni
#zsh scripts/create-python-env.sh -e venv -ni # zsh
```

```shell [with conda]
bash scripts/create-python-env.sh -e conda -ni
#zsh scripts/create-python-env.sh -e conda -ni # zsh
```

:::

Activate the environment as instructed by the script's output:

```shell
- [INFO] Run command below to activate the environment:
... # copy the activation command printed here and run it
```

Install the dependencies manually:

```shell
pip install -r requirements/requirements.txt
pip install onnxruntime # CPU version
# pip install onnxruntime-gpu # GPU version
```

Edit `infer.py` to specify the model path:
```python
## ------ ONNX ------
onnx_backend = backends.ONNXBackend
print("-- Available devices:", providers := onnx_backend.SUPPORTED_DEVICES)
detector = onnx_backend(
device=providers, inputs=["images"], outputs=["output0"]
)
```

Then run the inference script:

```shell
python infer.py --model .cache/yolov5/yolov5s.onnx
```


### OpenVINO deployment

Set `ENV_NAME` in `scripts/variables.custom.sh` as follows:

```shell
python infer.py
# export ENV_NAME="" # -- Uncomment to customize the environment name # [!code --]
export ENV_NAME="deploy-ov" # [!code ++]
export ENV_PATH=$BASE_ENV_PATH/.env/$ENV_NAME
```

Then run the script to create and activate the virtual environment:

::: code-group

```shell [with venv]
bash scripts/create-python-env.sh -e venv -ni
#zsh scripts/create-python-env.sh -e venv -ni # zsh
```

```shell [with conda]
bash scripts/create-python-env.sh -e conda -ni
#zsh scripts/create-python-env.sh -e conda -ni # zsh
```

:::

Activate the environment as instructed by the script's output:

```shell
- [INFO] Run command below to activate the environment:
... # copy the activation command printed here and run it
```

Install the dependencies manually:

```shell
pip install -r requirements/requirements.txt
pip install openvino
```

Edit `infer.py` to specify the model path:
```python
## ------ OpenVINO ------
ov_backend = backends.OpenVINOBackend
print("-- Available devices:", ov_backend.query_device())
detector = ov_backend(device="AUTO")
```

Then run the inference script:

```shell
python infer.py --model .cache/yolov5/yolov5s_openvino_model/yolov5s.xml
```

### TensorRT deployment

2 changes: 1 addition & 1 deletion docs/index.md
@@ -9,7 +9,7 @@ hero:
actions:
- theme: brand
text: Project source code
link: https://henryzhuhr.github.io/deep-object-detect-track/
link: https://github.com/HenryZhuHR/deep-object-detect-track
- theme: alt
text: Project documentation
link: /install