Commit: Update docs (#3238)
* Update docs for release

* Fix wrong contents
harimkang authored Mar 29, 2024
1 parent f9f8078 commit 7b44f67
Showing 11 changed files with 128 additions and 80 deletions.
6 changes: 0 additions & 6 deletions docs/source/guide/explanation/additional_features/tiling.rst
@@ -67,12 +67,6 @@ To enable tiling in OTX training, set ``data.config.tile_config.enable_tiler`` p
(otx) ...$ otx train ... --data.config.tile_config.enable_tiler True
.. note::

To learn how to deploy the trained model and run the exported demo, refer to :doc:`../../tutorials/base/deploy`.

To learn how to run the demo in CLI and visualize results, refer to :doc:`../../tutorials/base/demo`.

Tile Size and Tile Overlap Optimization
-----------------------------------------
75 changes: 71 additions & 4 deletions docs/source/guide/explanation/product_design.rst
@@ -174,12 +174,77 @@ Authors: @wonjuleee @vinnamkim
Entrypoint
~~~~~~~~~~

TBD @samet-akcay @harimkang

Intel Device Support
~~~~~~~~~~~~~~~~~~~~
1. **User Workflow for OpenVINO™ Training Extensions**

We defined the user workflow for OpenVINO™ Training Extensions
before deciding which entry points to expose to users.

+----------------------------------------------------------------+
| |Definition of OpenVINO™ Training Extensions Workflow| |
+================================================================+
| Figure 4. Definition of OpenVINO™ Training Extensions Workflow |
+----------------------------------------------------------------+

As shown above, the user defines the dataset and the task they want to solve,
and OpenVINO™ Training Extensions provides a model to solve that problem.
Users can rely on the built-in model, choose from several alternatives,
or, for advanced use cases, import their own custom model.

The user then defines the training configuration and starts training.
OpenVINO™ Training Extensions provides recipes that let users run
the training and adjust the training parameters.

Users can also run hyper-parameter optimization (HPO) to let the machine search for optimal parameters.
After training, the user can evaluate the model and deploy it to an edge device.
Following this natural workflow, the end result is that
the user obtains a trained model, a model that is ready to run on an edge device,
and an optimized model.
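
A minimal sketch of this workflow, using the Engine entry point introduced below, is shown here.
Only ``Engine(data_root=...)`` and ``engine.train()`` appear elsewhere in this document; the
``test()`` and ``export()`` calls are assumptions standing in for the evaluation and deployment
steps of the workflow.

.. code-block:: python

    from otx.engine import Engine

    # 1. Point OpenVINO™ Training Extensions at the dataset;
    #    the task and model can be auto-configured from it.
    engine = Engine(data_root="<path_to_data_root>")

    # 2. Train with the recipe defaults (training parameters can be overridden).
    engine.train()

    # 3. Evaluate the trained model (assumed entry point).
    engine.test()

    # 4. Export the model for deployment on an edge device (assumed entry point).
    engine.export()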

2. **Designing Engine Classes for a Natural Workflow**

+--------------------------------+
| |Engine Class Diagram| |
+================================+
| Figure 5. Engine Class Diagram |
+--------------------------------+

As shown in Figure 5, we designed the Engine class to support this natural workflow.
On top of the models and datamodules provided by OpenVINO™ Training Extensions' core,
the Engine exposes every entry point that OpenVINO™ Training Extensions intends to offer.

By looking at the Engine class, users can see what OpenVINO™ Training Extensions provides and use it in a natural way.
The user configures a model and data, then trains them with the Engine acting as the trainer.
The model's responsibilities are covered by the core Model classes, dataset-related work is handled by the Datamodule,
and the rest, from training to deployment, is handled by the Engine.
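
A hedged sketch of this separation of concerns follows. The ``task`` and ``model`` keyword
arguments and their string values are assumptions used only to illustrate the split between
Model, Datamodule, and Engine; check the API reference for the exact signatures.

.. code-block:: python

    from otx.engine import Engine

    # The Engine accepts the pieces of the workflow explicitly; the ``task``
    # and ``model`` keyword arguments below are illustrative assumptions.
    engine = Engine(
        data_root="<path_to_data_root>",   # consumed by the Datamodule
        task="MULTI_CLASS_CLS",            # selects the problem type (assumed name)
        model="efficientnet_b0",           # selects a core Model (assumed name)
    )

    # Everything from training to deployment is driven through the Engine.
    engine.train()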

3. **Auto-Configurator to help novice users get started**

Users without experience of a typical trainer such as Lightning often struggle to configure models and data pipelines,
because the Model, Datamodule, and Engine each require separate configuration.
To minimize this difficulty, OpenVINO™ Training Extensions provides a feature called Auto-Configuration.
If the user does not provide a required input, whether a model or a data pipeline, the Auto-Configurator fills in the gap.
This allows users to get started with OpenVINO™ Training Extensions by configuring only the dataset.

.. tab-set::

    .. tab-item:: API

        .. code-block:: python

            from otx.engine import Engine

            engine = Engine(data_root="<path_to_data_root>")
            engine.train()

    .. tab-item:: CLI

        .. code-block:: bash

            (otx) ...$ otx train ... --data_root <path_to_data_root>

Authors: @samet-akcay @harimkang

TBD

.. [1]
Meijer, Erik, and Peter Drayton. “Static typing where possible,
@@ -210,3 +275,5 @@ TBD
.. |Task-Data-Model| image:: ../../../utils/images/product_design/task_data_model.png
.. |Reuse Model| image:: ../../../utils/images/product_design/reuse_model.png
.. |Support Various Data Format| image:: ../../../utils/images/product_design/support_various_data_format.png
.. |Definition of OpenVINO™ Training Extensions Workflow| image:: ../../../utils/images/product_design/otx_workflow.png
.. |Engine Class Diagram| image:: ../../../utils/images/product_design/engine_diagram.png
@@ -188,15 +188,14 @@ while training logs can be found in the ``{work_dir}/{timestamp}`` dir.
.. code-block::

    otx-workspace
    ├── outputs/
    |   ├── 20240403_134256/
    |   |   ├── csv/
    |   |   ├── checkpoints/
    |   |   |   └── epoch_*.pth
    |   |   ├── tensorboard/
    |   |   └── configs.yaml
    |   └── .latest
    |       └── train/
    ...

The training time depends heavily on the hardware; for example, on a single NVIDIA GeForce RTX 3090, training took about 3 minutes.
@@ -181,15 +181,14 @@ while training logs can be found in the ``{work_dir}/{timestamp}`` dir.
.. code-block::

    otx-workspace
    ├── outputs/
    |   ├── 20240403_134256/
    |   |   ├── csv/
    |   |   ├── checkpoints/
    |   |   |   └── epoch_*.pth
    |   |   ├── tensorboard/
    |   |   └── configs.yaml
    |   └── .latest
    |       └── train/
    ...

The training time depends heavily on the hardware; for example, on a single NVIDIA GeForce RTX 3090, training took about 3 minutes.
17 changes: 8 additions & 9 deletions docs/source/guide/tutorials/base/how_to_train/classification.rst
@@ -279,15 +279,14 @@ while training logs can be found in the ``{work_dir}/{timestamp}`` dir.
.. code-block::

    otx-workspace
    ├── outputs/
    |   ├── 20240403_134256/
    |   |   ├── csv/
    |   |   ├── checkpoints/
    |   |   |   └── epoch_*.pth
    |   |   ├── tensorboard/
    |   |   └── configs.yaml
    |   └── .latest
    |       └── train/
    ...

The training time depends heavily on the hardware; for example, on a single NVIDIA GeForce RTX 3090, training took about 3 minutes.
17 changes: 8 additions & 9 deletions docs/source/guide/tutorials/base/how_to_train/detection.rst
@@ -291,15 +291,14 @@ while training logs can be found in the ``{work_dir}/{timestamp}`` dir.
.. code-block::

    otx-workspace
    ├── outputs/
    |   ├── 20240403_134256/
    |   |   ├── csv/
    |   |   ├── checkpoints/
    |   |   |   └── epoch_*.pth
    |   |   ├── tensorboard/
    |   |   └── configs.yaml
    |   └── .latest
    |       └── train/
    ...

The training time depends heavily on the hardware; for example, on a single NVIDIA GeForce RTX 3090, training took about 3 minutes.
@@ -305,15 +305,14 @@ while training logs can be found in the ``{work_dir}/{timestamp}`` dir.
.. code-block::

    otx-workspace
    └── outputs/
        ├── 20240403_134256/
        |   ├── csv/
        |   ├── checkpoints/
        |   |   └── epoch_*.pth
        |   ├── tensorboard/
        |   └── configs.yaml
        └── .latest
            └── train/
    ...

After that, we have the PyTorch instance segmentation model trained with OpenVINO™ Training Extensions, which we can use for evaluation, export, optimization and deployment.
@@ -50,13 +50,7 @@ The list of supported recipes for semantic segmentation is available with the co

The characteristics and detailed comparison of the models can be found in :doc:`Explanation section <../../../explanation/algorithms/segmentation/semantic_segmentation>`.

We can also modify the architecture of supported models with various backbones; please refer to the :doc:`advanced tutorial for model customization <../../advanced/backbones>`.

.. tab-set::

.. tab-item:: CLI

@@ -98,7 +92,7 @@ The list of supported recipes for semantic segmentation is available with the co
]
'''
1. At this step, we will set up the configuration
with:

- all necessary configs for litehrnet_18
@@ -230,15 +224,14 @@ while training logs can be found in the ``{work_dir}/{timestamp}`` dir.
.. code-block::

    otx-workspace
    └── outputs/
        ├── 20240403_134256/
        |   ├── csv/
        |   ├── checkpoints/
        |   |   └── epoch_*.pth
        |   ├── tensorboard/
        |   └── configs.yaml
        └── .latest
            └── train/
    ...

After that, we have the PyTorch semantic segmentation model trained with OpenVINO™ Training Extensions, which we can use for evaluation, export, optimization and deployment.
@@ -290,15 +290,14 @@ while training logs can be found in the ``{work_dir}/{timestamp}`` dir.
.. code-block::

    otx-workspace
    ├── outputs/
    |   ├── 20240403_134256/
    |   |   ├── csv/
    |   |   ├── checkpoints/
    |   |   |   └── epoch_*.pth
    |   |   ├── tensorboard/
    |   |   └── configs.yaml
    |   └── .latest
    |       └── train/
    ...

The training time depends heavily on the hardware; for example, on a single NVIDIA GeForce RTX 3090, training took about 4 minutes.
Two additional files in this commit (binary, not displayed).
