Modify OpenVINO and Streaming Python doc (#1414)
* Add System requirements into OpenVINO doc and OpenVINO example.
* Modify Streaming Python doc.
* Fix path problem in autograd example.
qiyuangong authored May 30, 2019
1 parent 6cb4b0c commit 61f5581
Showing 7 changed files with 50 additions and 18 deletions.
10 changes: 9 additions & 1 deletion docs/docs/APIGuide/PipelineAPI/inference.md
@@ -14,11 +14,19 @@ Inference Model is a package in Analytics Zoo aiming to provide high-level APIs

**OpenVINO requirements:**

[System requirements](https://software.intel.com/en-us/openvino-toolkit/documentation/system-requirements):

Ubuntu 16.04.3 LTS (64 bit)
Windows 10 (64 bit)
CentOS 7.4 (64 bit)
macOS 10.13, 10.14 (64 bit)

Python requirements:

tensorflow>=1.2.0
networkx>=1.11
numpy>=1.12.0
protobuf==3.6.1
onnx>=1.1.2
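
For reference, the following is a minimal Python sketch of loading a TensorFlow object detection model with the OpenVINO backend. It mirrors the `load_tf` call in the `pyzoo/zoo/examples/openvino/predict.py` change later in this commit; the import path, file paths and `model_type` value are placeholders rather than a definitive recipe.

```python
import numpy as np

from zoo.common.nncontext import init_nncontext
from zoo.pipeline.inference import InferenceModel  # import path assumed

sc = init_nncontext("OpenVINO InferenceModel sketch")

model = InferenceModel()
# Mirrors the load_tf call in predict.py below; replace the paths and model_type with your own.
model.load_tf("/path/to/model_dir/frozen_inference_graph.pb",
              backend="openvino",
              model_type="faster_rcnn_resnet101_coco",
              ov_pipeline_config_path="/path/to/model_dir/pipeline.config")

# One dummy image in the (batch, 1, height, width, channel) layout used by the example.
input_data = np.zeros((1, 1, 600, 600, 3))
predictions = model.predict(input_data)
print(predictions[0])
```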

**Supported models:**

10 changes: 9 additions & 1 deletion docs/docs/ProgrammingGuide/inference.md
@@ -28,11 +28,19 @@ Inference Model is a package in Analytics Zoo aiming to provide high-level APIs

**OpenVINO requirements:**

[System requirements](https://software.intel.com/en-us/openvino-toolkit/documentation/system-requirements):

Ubuntu 16.04.3 LTS (64 bit)
Windows 10 (64 bit)
CentOS 7.4 (64 bit)
macOS 10.13, 10.14 (64 bit)

Python requirements:

tensorflow>=1.2.0
networkx>=1.11
numpy>=1.12.0
protobuf==3.6.1
onnx>=1.1.2
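
As a quick environment check, a sketch like the following imports each of the packages listed above and prints its version; the only assumption is that the `protobuf` distribution is imported as `google.protobuf`.

```python
# Minimal sanity check for the OpenVINO Python requirements listed above.
import importlib

packages = ["tensorflow", "networkx", "numpy", "protobuf", "onnx"]

for name in packages:
    # The protobuf distribution installs the google.protobuf module.
    module_name = "google.protobuf" if name == "protobuf" else name
    try:
        module = importlib.import_module(module_name)
        print("%s %s" % (name, getattr(module, "__version__", "unknown")))
    except ImportError:
        print("%s is NOT installed" % name)
```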

**Java**

8 changes: 4 additions & 4 deletions pyzoo/zoo/examples/autograd/README.md
@@ -8,8 +8,8 @@ Follow the instructions [here](https://analytics-zoo.github.io/master/#PythonUse
You can easily use the following commands to run this example:
```
export SPARK_DRIVER_MEMORY=2g
-python custom.py
-python customloss.py
+python path/to/custom.py
+python path/to/customloss.py
```

See [here](https://analytics-zoo.github.io/master/#PythonUserGuide/run/#run-after-pip-install) for more running guidance after pip install.
@@ -24,12 +24,12 @@ ${ANALYTICS_ZOO_HOME}/bin/spark-submit-with-zoo.sh \
--master ${MASTER}\
--driver-memory 2g \
--executor-memory 2g \
-custom.py
+path/to/custom.py
${ANALYTICS_ZOO_HOME}/bin/spark-submit-with-zoo.sh \
--master ${MASTER}\
--driver-memory 2g \
--executor-memory 2g \
-customloss.py
+path/to/customloss.py
```
See [here](https://analytics-zoo.github.io/master/#PythonUserGuide/run/#run-without-pip-install) for more running guidance without pip install.

17 changes: 15 additions & 2 deletions pyzoo/zoo/examples/openvino/README.md
@@ -7,6 +7,19 @@ to make inferences with OpenVINO toolkit as backend using Analytics Zoo, which d
## Install or download Analytics Zoo
Follow the instructions [here](https://analytics-zoo.github.io/master/#PythonUserGuide/install/) to install analytics-zoo via __pip__ or __download the prebuilt package__.

[OpenVINO System requirements](https://software.intel.com/en-us/openvino-toolkit/documentation/system-requirements):

Ubuntu 16.04.3 LTS (64 bit)
Windows 10 (64 bit)
CentOS 7.4 (64 bit)
macOS 10.13, 10.14 (64 bit)

OpenVINO Python requirements:

tensorflow>=1.2.0
networkx>=1.11
numpy>=1.12.0
protobuf==3.6.1

## Model and Data Preparation
1. Prepare a pre-trained TensorFlow object detection model. You can download from [tensorflow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md).
@@ -21,7 +34,7 @@ In this example, we use `frozen_inference_graph.pb` of the `faster_rcnn_resnet10
```bash
export SPARK_DRIVER_MEMORY=10g
image_path=directory path that contain images
-model_path=path to frozen_inference_graph.pb
+model_path=dir contains frozen_inference_graph.pb and pipeline.config

python predict.py --image ${image_path} --model ${model_path}
```
@@ -35,7 +48,7 @@ export SPARK_HOME=the root directory of Spark
export ANALYTICS_ZOO_HOME=the directory where you extract the downloaded Analytics Zoo zip package
MASTER=local[*]
image_path=directory path that contain images
-model_path=path to frozen_inference_graph.pb
+model_path=dir contains frozen_inference_graph.pb and pipeline.config

${ANALYTICS_ZOO_HOME}/bin/spark-submit-with-zoo.sh \
--master $MASTER \
5 changes: 4 additions & 1 deletion pyzoo/zoo/examples/openvino/predict.py
@@ -16,6 +16,7 @@

import sys
import numpy as np
from os.path import join
from optparse import OptionParser

from zoo.common.nncontext import init_nncontext
@@ -41,7 +42,9 @@
resize_height=600, resize_width=600).get_image().collect()
input_data = np.concatenate([image.reshape((1, 1) + image.shape) for image in images], axis=0)
model = InferenceModel()
-model.load_tf(options.model_path, backend="openvino", model_type=options.model_type)
+model.load_tf(join(options.model_path, "frozen_inference_graph.pb"),
+              backend="openvino", model_type=options.model_type,
+              ov_pipeline_config_path=join(options.model_path, "pipeline.config"))
predictions = model.predict(input_data)
# Print the detection result of the first image.
print(predictions[0])
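
For context, the `options.model_path` and `options.model_type` values above come from `optparse`. A hypothetical sketch of that wiring is shown below; only the `--image` and `--model` flags are confirmed by the README, so the `--model_type` flag and the default value are assumptions.

```python
from optparse import OptionParser

# Hypothetical option wiring; flag names beyond --image and --model are assumptions.
parser = OptionParser()
parser.add_option("--image", dest="img_path",
                  help="directory that contains the input images")
parser.add_option("--model", dest="model_path",
                  help="directory that contains frozen_inference_graph.pb and pipeline.config")
parser.add_option("--model_type", dest="model_type",
                  default="faster_rcnn_resnet101_coco",  # assumed default
                  help="TensorFlow object detection model type")
(options, args) = parser.parse_args()
```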
14 changes: 7 additions & 7 deletions pyzoo/zoo/examples/streaming/objectdetection/README.md
@@ -1,7 +1,7 @@
# Analytics Zoo Streaming Object Detection
Imagine we have a pre-trained model and image files in a file system, and we want to detect objects in these images. In the streaming case, it is not an easy task to read image files with the help of a third-party framework (such as HDFS or Kafka). To simplify this example, we package the image paths into text files. These image paths are then passed to the executors through the streaming API. The executors read the image content from the file system and make predictions. The predicted results (images with boxes) are stored to the output directory.

-So, there are two applications in this example: image_path_writer and streaming_object_detection. ImagePathWriter will package image paths into text files. Meanwhile, StreamingObjectDetection read image path from those text files, then read image content and make prediction.
+So, there are two applications in this example: image_path_writer and streaming_object_detection. image_path_writer packages image paths into text files. Meanwhile, streaming_object_detection reads image paths from those text files, then reads the image content and makes predictions.

## Environment
* Python (2.7, 3.5 or 3.6)
@@ -19,26 +19,26 @@ Make sure all nodes can access image files, model and text files. Local file sys
```
MASTER=...
model=... // model path. Local file system/HDFS/Amazon S3 are supported
-streamingPath=... // text files location. Only local file system is supported
-output=... // output path of prediction result. Only local file system is supported
+streaming_path=... // text files location. Only local file system is supported
+output_path=... // output path of prediction result. Only local file system is supported
${ANALYTICS_ZOO_HOME}/bin/spark-submit-with-zoo.sh \
--master ${MASTER} \
--driver-memory 5g \
--executor-memory 5g \
streaming_object_detection.py \
---streamingPath ${streamingPath} --model ${model} --output ${output}
+--streaming_path ${streaming_path} --model ${model} --output_path ${output_path}
```

2. Start image_path_writer
```
MASTER=...
-imageSourcePath=... // image path. Only local file system is supported
-streamingPath=... // text files. Only local file system is supported
+img_path=... // image path. Only local file system is supported
+streaming_path=... // text files. Only local file system is supported
${ANALYTICS_ZOO_HOME}/bin/spark-submit-with-zoo.sh \
--master ${MASTER} \
--driver-memory 5g \
image_path_writer.py \
---streamingPath ${streamingPath} --imageSourcePath ${imageSourcePath}
+--streaming_path ${streaming_path} --img_path ${img_path}
```
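
To make the data flow concrete, here is a conceptual sketch of the image_path_writer side described above (packaging image paths into text files under `streaming_path`); it is an illustration rather than the actual image_path_writer.py, and the batching and file naming are assumptions.

```python
import os
import time

def write_image_paths(img_path, streaming_path, batch_size=16):
    """Package image paths from img_path into text files under streaming_path."""
    image_files = [os.path.join(img_path, name) for name in sorted(os.listdir(img_path))]
    for start in range(0, len(image_files), batch_size):
        batch = image_files[start:start + batch_size]
        # Each text file carries one batch of image paths; the streaming job
        # watching streaming_path picks up new files as they appear.
        out_file = os.path.join(streaming_path, "batch_%d.txt" % (start // batch_size))
        with open(out_file, "w") as text_file:
            text_file.write("\n".join(batch) + "\n")
        time.sleep(1)  # pace the batches so they arrive across several micro-batches
```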

## Results
4 changes: 2 additions & 2 deletions pyzoo/zoo/examples/streaming/textclassification/README.md
@@ -21,14 +21,14 @@ nc -lk [port]
```
MASTER=...
model=... // model path. Local file system/HDFS/Amazon S3 are supported
-indexPath=... // word index path. Local file system/HDFS/Amazon S3 are supported
+index_path=... // word index path. Local file system/HDFS/Amazon S3 are supported
port=... // The same port with nc command
${ANALYTICS_ZOO_HOME}/bin/spark-submit-with-zoo.sh \
--master ${MASTER} \
--driver-memory 2g \
--executor-memory 5g \
streaming_text_classification.py \
---model ${model} --indexPath ${indexPath} --port ${port}
+--model ${model} --index_path ${index_path} --port ${port}
```
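
If Netcat is not available, a minimal Python stand-in for `nc -lk [port]` could look like the sketch below. It assumes the streaming job connects to the given port and reads newline-terminated text; the port and the sample lines are placeholders.

```python
import socket
import time

HOST, PORT = "localhost", 9999  # use the same port passed via --port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)

conn, _ = server.accept()  # wait for the streaming job to connect
for line in ["what a wonderful movie", "the plot makes no sense at all"]:
    conn.sendall((line + "\n").encode("utf-8"))
    time.sleep(1)  # feed one line per second
conn.close()
server.close()
```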

3. TERMINAL 1: Input text in Netcat
