Commit

[backport 0.8] Modify py35 to py37 in doc (#2228) (#2229)
* Modify py35 to py37 in doc (#2228)

* update
hkvision authored Apr 17, 2020
1 parent e1ce193 commit 38bc0ad
Showing 18 changed files with 22 additions and 22 deletions.
2 changes: 1 addition & 1 deletion apps/anomaly-detection-hd/README.md
@@ -5,7 +5,7 @@ Since outliers are rare and different, that the auto-encoder will not learn to m
(Retrieved from https://edouardfouche.com/Neural-based-Outlier-Discovery/)

## Environment
- * Python 3.5/3.6
+ * Python 3.6/3.7
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)

## Run with Jupyter
2 changes: 1 addition & 1 deletion apps/anomaly-detection/README.md
@@ -2,7 +2,7 @@
This is a simple example of unsupervised anomaly detection using the Analytics Zoo Keras-Style API. We use an RNN to predict the following data values based on the previous sequence (in order) and measure the distance between the predicted values and the actual values. If the distance is above some threshold, we report those values as anomalies.
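
As a rough sketch of the thresholding step described above (illustrative only, not code from the notebook; the absolute-error metric and the 95th-percentile cutoff are assumptions):

```python
import numpy as np

# Hypothetical helper: flag values whose prediction error exceeds a chosen
# percentile of all errors; the notebook's actual metric and cutoff may differ.
def detect_anomalies(actual, predicted, percentile=95):
    distance = np.abs(np.asarray(actual) - np.asarray(predicted))
    threshold = np.percentile(distance, percentile)
    return distance > threshold  # boolean mask of reported anomalies
```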

## Requirement
- * Python 3.5/3.6 (pandas 1.0+)
+ * Python 3.6/3.7 (pandas 1.0+)
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)

## Install
2 changes: 1 addition & 1 deletion apps/dogs-vs-cats/README.md
@@ -2,7 +2,7 @@
In this notebook, we will use a pre-trained Inception_V1 model. We will operate on the pre-trained model to freeze the first few layers, replace the classifier on top, and then fine-tune the whole model. We then use the fine-tuned model to solve the dogs-vs-cats classification problem.
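
The freeze-and-replace pattern described above, sketched with tf.keras purely for illustration (the notebook itself uses Inception_V1 through the Analytics Zoo Keras-style API; the backbone, layer count, and optimizer below are assumptions):

```python
import tensorflow as tf

# Illustrative only: load a pre-trained backbone, freeze the early layers,
# replace the top classifier with a 2-class head, then fine-tune.
base = tf.keras.applications.InceptionV3(include_top=False, pooling="avg",
                                         weights="imagenet")
for layer in base.layers[:100]:  # freeze the first few layers (count is arbitrary)
    layer.trainable = False

outputs = tf.keras.layers.Dense(2, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=2)  # fine-tune on dogs-vs-cats data
```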

## Environment
- * Python 3.5/3.6
+ * Python 3.6/3.7
* JDK 8
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)
* Jupyter Notebook 4.1
2 changes: 1 addition & 1 deletion apps/image-augmentation-3d/README.md
@@ -2,7 +2,7 @@
This is a simple example of image augmentation for 3D images using the Analytics Zoo API. We use various ways to transform images to augment the dataset.

## Environment
- * Python 3.5/3.6 (numpy 1.11.1)
+ * Python 3.6/3.7 (numpy 1.11.1)
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)

## Install or download Analytics Zoo
2 changes: 1 addition & 1 deletion apps/image-augmentation/README.md
@@ -2,7 +2,7 @@
This is a simple example of image augmentation using the Analytics Zoo API. We use various ways to transform images to augment the dataset.

## Environment
- * Python 3.5/3.6 (numpy 1.11.1)
+ * Python 3.6/3.7 (numpy 1.11.1)
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)

## Install or download Analytics Zoo
2 changes: 1 addition & 1 deletion apps/image-similarity/README.md
@@ -4,7 +4,7 @@ introduced. A real estate example was used to recommend similar houses based on
provided by users.

## Environment
- * Python 3.5/3.6
+ * Python 3.6/3.7
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)

## Install or download Analytics Zoo
2 changes: 1 addition & 1 deletion apps/object-detection/README.md
@@ -4,7 +4,7 @@ In object-detection.ipynb we use SSD-MobileNet to predict instances of target cl
In messi.ipynb we use a pre-trained Messi-detection model to detect Messi in a video. Proposed areas are labeled with boxes and class scores.

## Environment
- * Python 3.5/3.6 (Need moviepy)
+ * Python 3.6/3.7 (Need moviepy)
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)

## Install or download Analytics Zoo
2 changes: 1 addition & 1 deletion apps/recommendation-ncf/README.md
@@ -3,7 +3,7 @@ This notebook demonstrates how to build a neural network recommendation system (


## Environment
- * Python 3.5/3.6
+ * Python 3.6/3.7
* JDK 8
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)
* Jupyter Notebook 4.1
2 changes: 1 addition & 1 deletion apps/recommendation-wide-n-deep/README.md
@@ -3,7 +3,7 @@ This notebook demonstrates how to build a neural network recommendation system (


## Environment
- * Python 3.5/3.6
+ * Python 3.6/3.7
* JDK 8
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)
* Jupyter Notebook 4.1
2 changes: 1 addition & 1 deletion apps/sentiment-analysis/README.md
@@ -10,7 +10,7 @@ In this example, you will learn how to use Analytics Zoo to develop deep learnin
* How to train deep learning models

## Environment
- * Python 3.5/3.6
+ * Python 3.6/3.7
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)
* Numpy >= 1.16.0

6 changes: 3 additions & 3 deletions docs/docs/ProgrammingGuide/AnalyticsZoo-on-Databricks.md
@@ -64,9 +64,9 @@ sc = init_nncontext()
Output on Databricks:

```
- Prepending /databricks/python/lib/python3.5/site-packages/bigdl/share/conf/spark-bigdl.conf to sys.path
- Adding /databricks/python/lib/python3.5/site-packages/zoo/share/lib/analytics-zoo-bigdl_0.9.1-spark_2.4.3-0.6.0-jar-with-dependencies.jar to BIGDL_JARS
- Prepending /databricks/python/lib/python3.5/site-packages/zoo/share/conf/spark-analytics-zoo.conf to sys.path
+ Prepending /databricks/python/lib/python3.6/site-packages/bigdl/share/conf/spark-bigdl.conf to sys.path
+ Adding /databricks/python/lib/python3.6/site-packages/zoo/share/lib/analytics-zoo-bigdl_0.9.1-spark_2.4.3-0.6.0-jar-with-dependencies.jar to BIGDL_JARS
+ Prepending /databricks/python/lib/python3.6/site-packages/zoo/share/conf/spark-analytics-zoo.conf to sys.path
```
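
For reference, a minimal notebook cell that produces output like the above (a sketch; the import path follows the one used in the Analytics Zoo Python examples):

```python
# Initialize the NNContext (a SparkContext configured for Analytics Zoo/BigDL);
# running this in a Databricks cell prints the conf/jar messages shown above.
from zoo.common.nncontext import init_nncontext

sc = init_nncontext()
print(sc.version)  # confirm the Spark version in use
```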

If you would like to run a completed Analytics Zoo notebook, you can import an Analytics Zoo notebook from a URL directly.
2 changes: 1 addition & 1 deletion docs/docs/ProgrammingGuide/visualization.md
@@ -109,7 +109,7 @@ With the summary info generated, we can then use [TensorBoard](https://pypi.pyth

Prerequisites:

- 1. Python version: 3.5 or 3.6
+ 1. Python version: 3.6 or 3.7
2. Pip version >= 9.0.1
3. TensorFlow 1.13.1

4 changes: 2 additions & 2 deletions docs/docs/PythonUserGuide/install.md
@@ -1,6 +1,6 @@
For Python users, Analytics Zoo can be installed either [from pip](#install-from-pip-for-local-usage) or [without pip](#install-without-pip).

- **NOTE**: Only __Python 3.5__ and __Python 3.6__ are supported for now. We have removed our support and test for Python 2.7 due to its end of life.
+ **NOTE**: We have tested on __Python 3.6__ and __Python 3.7__. Support for Python 2.7 has been removed due to its end of life.

---
## **Install from pip for local usage**
@@ -10,7 +10,7 @@ You can use the following command to install the latest release version of __ana
pip install analytics-zoo
```

- * You are strongly recommended to use Python 3.5 or 3.6. You might need to run `pip3 install analytics-zoo` instead.
+ * You are strongly recommended to use Python 3.6 or 3.7. You might need to run `pip3 install analytics-zoo` instead.
* You might need to add `sudo` if you don't have the permission for installation.

**Important:**
4 changes: 2 additions & 2 deletions docs/docs/PythonUserGuide/python-faq.md
@@ -7,8 +7,8 @@ This page lists solutions to some common questions.
``` export PYTHONPATH=${ANALYTICS_ZOO_PY_ZIP}:$PYTHONPATH```

2. Python in worker has a different version than that in driver
-   - ```export PYSPARK_PYTHON=/usr/local/bin/python3.5``` This path should be valid on every worker node.
-   - ```export PYSPARK_DRIVER_PYTHON=/usr/local/bin/python3.5``` This path should be valid on every driver node.
+   - ```export PYSPARK_PYTHON=/usr/local/bin/python3.6``` This path should be valid on every worker node.
+   - ```export PYSPARK_DRIVER_PYTHON=/usr/local/bin/python3.6``` This path should be valid on every driver node.

3. __TypeError__: 'JavaPackage' object is not callable
- Check if every path within the launch script is valid especially the path that ends with jar.
2 changes: 1 addition & 1 deletion docs/docs/PythonUserGuide/run.md
@@ -1,6 +1,6 @@
You need to first [install](install.md) analytics-zoo, either [from pip](install/#install-from-pip-for-local-usage) or [without pip](install/#install-without-pip).

- **NOTE**: Only __Python 3.5__ and __Python 3.6__ are supported for now. We have removed our support and test for Python 2.7 due to its end of life.
+ **NOTE**: We have tested on __Python 3.6__ and __Python 3.7__. Support for Python 2.7 has been removed due to its end of life.

---

2 changes: 1 addition & 1 deletion pyzoo/zoo/examples/streaming/README.md
@@ -8,7 +8,7 @@ Quick example about integrating analytics-zoo inference/predict service into str
2. [Streaming Text Classification](https://github.com/intel-analytics/analytics-zoo/tree/master/pyzoo/zoo/examples/streaming/textclassification) uses a pre-trained CNN/LSTM model to classify text from network input.

## Environment
- * Python 3.5/3.6
+ * Python 3.6/3.7
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)
* Analytics Zoo ([install analytics-zoo](https://analytics-zoo.github.io/master/#PythonUserGuide/install/) via __pip__ or __download the prebuilt package__.)

2 changes: 1 addition & 1 deletion pyzoo/zoo/examples/streaming/objectdetection/README.md
@@ -4,7 +4,7 @@ Imagining we have pre-trained model and image files in file system, and we want
So, there are two applications in this example: image_path_writer and streaming_object_detection. image_path_writer packages image paths into text files. Meanwhile, streaming_object_detection reads image paths from those text files, then reads the image content and makes predictions.
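
A bare-bones sketch of the image_path_writer side described above (illustrative only; the directory layout, batch size, and interval are assumptions, and the real implementation lives in this example's folder):

```python
import os
import time
import uuid

# Hypothetical stand-in for image_path_writer: periodically package a batch of
# image paths into a new text file under a directory that the streaming
# object-detection application monitors.
def write_image_paths(image_dir, output_dir, batch_size=16, interval=5):
    paths = [os.path.join(image_dir, f) for f in sorted(os.listdir(image_dir))]
    os.makedirs(output_dir, exist_ok=True)
    for i in range(0, len(paths), batch_size):
        batch_file = os.path.join(output_dir, "batch-%s.txt" % uuid.uuid4().hex)
        with open(batch_file, "w") as f:
            f.write("\n".join(paths[i:i + batch_size]))
        time.sleep(interval)  # give the streaming job time to pick up each file
```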

## Environment
- * Python 3.5/3.6
+ * Python 3.6/3.7
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)
* Analytics Zoo ([install analytics-zoo](https://analytics-zoo.github.io/master/#PythonUserGuide/install/) via __pip__ or __download the prebuilt package__.)

2 changes: 1 addition & 1 deletion pyzoo/zoo/examples/streaming/textclassification/README.md
@@ -2,7 +2,7 @@
This example is based on the Spark Streaming example NetworkWordCount and the Zoo text classification example. Network inputs (strings) are pre-processed and classified by Zoo. We apply a simple text classification model based on the Zoo example.

## Environment
- * Python 3.5/3.6
+ * Python 3.6/3.7
* Apache Spark 2.x (This version needs to be the same as the version you use to build Analytics Zoo)
* Analytics Zoo ([install analytics-zoo](https://analytics-zoo.github.io/master/#PythonUserGuide/install/) via __pip__ or __download the prebuilt package__.)
