Add linting steps to doc-automation workflow (#1855)
* Add linting steps to doc-automation workflow

* Move changes to lint.yml

* Some improvements

* Fix bug

* Some enhancements

* Fix bug

* Fix some links, a few typos and also linting steps

* Fix intel-extension links and some others

* Fix some typos

* Fix remaining typos

* Fix some other links

* Fix bug

* Add some words to wordlist.txt

* Add some other words to wordlist.txt

* Fix some links and improve markdown-link-check config

* Fix bug

* Temporarily fix a link

* Fix 4 remaining links

* Fix typos

* Fix typos

* Fix bug

* Improve spellcheck and fix misspellings

* Clean wordlist.txt a bit

* Fix GitHub 403 errors

From tcort/markdown-link-check#201 (comment)

* Fix md-link-check version

* Fix a link

Co-authored-by: Mark Saroufim <[email protected]>
sadra-barikbin and msaroufim authored Oct 13, 2022
1 parent b2f80d2 commit 3a7187c
Showing 53 changed files with 650 additions and 213 deletions.
48 changes: 42 additions & 6 deletions .github/workflows/lint.yml
@@ -11,7 +11,7 @@ on:
jobs:
build:
runs-on: ubuntu-latest
name: Test changed-files
name: Lint changed files
steps:
- uses: actions/checkout@v3
with:
@@ -20,18 +20,24 @@ jobs:
- name: Install lint utilities
run: |
pip install pre-commit
pre-commit install
pre-commit install
- name: Get specific changed files
id: changed-files-specific
uses: tj-actions/[email protected]
- name: Check links in all markdown files
uses: gaurav-nelson/[email protected]
with:
use-verbose-mode: 'yes'
config-file: "ts_scripts/markdown_link_check_config.json"

- name: Get changed files
id: changed-files
uses: tj-actions/[email protected]
with:
files: |
**/*.py
- name: Lint all changed files
run: |
for file in ${{ steps.changed-files-specific.outputs.all_changed_files }}; do
for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
pre-commit run --files $file
done
@@ -43,3 +49,33 @@ jobs:
echo "cd serve/"
echo "pre-commit install"
echo "pre-commit will lint your code for you, so git add and commit those new changes and this check should become green"
spellcheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3

- name: Install dependencies
run: |
sudo apt-get install aspell aspell-en
pip install pyspelling
- name: Get changed files
id: changed-files
uses: tj-actions/[email protected]
with:
files: |
**/*.md
- name: Check spellings
run: |
sources=""
for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
sources+=" -S $file"
done
pyspelling -c $GITHUB_WORKSPACE/ts_scripts/spellcheck_conf/spellcheck.yaml --name Markdown $sources
- name: In the case of misspellings
if: ${{ failure() }}
run: |
echo "Please fix the misspellings. If you are sure about some of them, "
echo "so append those to ts_scripts/spellcheck_conf/wordlist.txt"
1 change: 1 addition & 0 deletions .gitignore
@@ -16,6 +16,7 @@ plugins/.gradle
*.pem
*.backup
docs/sphinx/src/
ts_scripts/spellcheck_conf/wordlist.dic

# Postman files
test/artifacts/
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -35,7 +35,7 @@ Your contributions will fall into two categories:
- For running individual test suites, refer to the [code_coverage](docs/code_coverage.md) documentation
- If you are updating an existing model, make sure that performance hasn't degraded by running [benchmarks](https://github.com/pytorch/serve/tree/master/benchmarks) on the master branch and your branch, and verify there is no performance regression
- Run `ts_scripts/spellcheck.sh` to fix any typos in your documentation
- For large changes make sure to run the [automated benchmark suite](https://github.com/pytorch/serve/tree/master/test/benchmark) which will run the apache bench tests on several configurations of CUDA and EC2 instances
- For large changes make sure to run the [automated benchmark suite](https://github.com/pytorch/serve/tree/master/benchmarks) which will run the apache bench tests on several configurations of CUDA and EC2 instances
- If you need more context on a particular issue, please raise a ticket on the [`TorchServe` GH repo](https://github.com/pytorch/serve/issues/new/choose) or connect to [PyTorch's slack channel](https://pytorch.slack.com/)

Once you finish implementing a feature or bug-fix, please send a Pull Request to https://github.com/pytorch/serve.
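The spellcheck item in the checklist above maps to a single script; a one-line sketch, assuming it is invoked from the repository root:

```bash
# Wrapper referenced in CONTRIBUTING.md; invoking via bash avoids relying
# on the executable bit being set locally.
bash ts_scripts/spellcheck.sh
```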
4 changes: 2 additions & 2 deletions README.md
@@ -73,7 +73,7 @@ Refer to [torchserve docker](docker/README.md) for details.

## 🏆 Highlighted Examples
* [🤗 HuggingFace Transformers](examples/Huggingface_Transformers)
* [Model parallel inference](examples/Huggingface_Transformers#model-paralellism)
* [Model parallel inference](examples/Huggingface_Transformers#model-parallelism)
* [MultiModal models with MMF](https://github.com/pytorch/serve/tree/master/examples/MMF-activity-recognition) combining text, audio and video
* [Dual Neural Machine Translation](examples/Workflows/nmt_transformers_pipeline) for a complex workflow DAG

@@ -96,7 +96,7 @@ To learn more about how to contribute, see the contributor guide [here](https://
* [Optimize your inference jobs using dynamic batch inference with TorchServe on Amazon SageMaker](https://aws.amazon.com/blogs/machine-learning/optimize-your-inference-jobs-using-dynamic-batch-inference-with-torchserve-on-amazon-sagemaker/)
* [Using AI to bring children's drawings to life](https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/)
* [🎥 Model Serving in PyTorch](https://www.youtube.com/watch?v=2A17ZtycsPw)
* [Evolution of Crestas machine learning architecture: Migration to AWS and PyTorch](https://aws.amazon.com/blogs/machine-learning/evolution-of-crestas-machine-learning-architecture-migration-to-aws-and-pytorch/)
* [Evolution of Cresta's machine learning architecture: Migration to AWS and PyTorch](https://aws.amazon.com/blogs/machine-learning/evolution-of-crestas-machine-learning-architecture-migration-to-aws-and-pytorch/)
* [🎥 Explain Like I’m 5: TorchServe](https://www.youtube.com/watch?v=NEdZbkfHQCk)
* [🎥 How to Serve PyTorch Models with TorchServe](https://www.youtube.com/watch?v=XlO7iQMV3Ik)
* [How to deploy PyTorch models on Vertex AI](https://cloud.google.com/blog/topics/developers-practitioners/pytorch-google-cloud-how-deploy-pytorch-models-vertex-ai)
8 changes: 4 additions & 4 deletions benchmarks/README.md
@@ -5,7 +5,7 @@ The benchmarks measure the performance of TorchServe on various models and bench
We currently support benchmarking with JMeter & Apache Bench. One can also profile backend code with snakeviz.

* [Benchmarking with Apache Bench](#benchmarking-with-apache-bench)
* [Auto Benchmarking with Apache Bench](#Auto-Benchmarking-with-Apache-Bench)
* [Auto Benchmarking with Apache Bench](#auto-benchmarking-with-apache-bench)
* [Benchmarking and Profiling with JMeter](jmeter.md)

# Benchmarking with Apache Bench
@@ -32,7 +32,7 @@ Apache Bench is available on Mac by default. You can test by running ```ab -h```

* Windows
- Download apache binaries from [Apache Lounge](https://www.apachelounge.com/download/)
- Extract and place the contents at some location eg: `C:\Program Files\`
- Extract and place the contents at some location e.g.: `C:\Program Files\`
- Add this path `C:\Program Files\Apache24\bin` to the environment variable PATH.
NOTE - You may need to install Visual C++ Redistributable for Visual Studio 2015-2019.

@@ -156,7 +156,7 @@ The reports are generated at location "/tmp/benchmark/"
![](predict_latency.png)

# Auto Benchmarking with Apache Bench
`auto_benchmark.py` runs Apache Bench on a set of models and generates an easy to read `report.md` once [Apach bench installation](https://github.com/pytorch/serve/tree/master/benchmarks#installation-1) is done.
`auto_benchmark.py` runs Apache Bench on a set of models and generates an easy to read `report.md` once [Apache bench installation](https://github.com/pytorch/serve/tree/master/benchmarks#installation-1) is done.

## How does the auto benchmark script work?
Auto Benchmarking is a tool that lets users run multiple test cases together and generates a final report. Internally, the workflow is:
@@ -214,6 +214,6 @@ If you need to run your benchmarks on a specific cloud or hardware infrastructur
The high-level approach:
1. Create a cloud instance in your favorite cloud provider
2. Configure it so it can talk to github actions by running some shell commands listed here https://docs.github.com/en/actions/hosting-your-own-runners/adding-self-hosted-runners
3. Tag your instances in https://github.com/pytorch/serve/settings/actions/runners
3. Tag your instances in the runners tab on Github
3. In the `.yml` make sure to use `runs-on [self-hosted, your_tag]`
4. Inspect the results in https://github.com/pytorch/serve/actions and download the artifacts for further analysis
4 changes: 2 additions & 2 deletions benchmarks/add_jmeter_test.md
@@ -4,7 +4,7 @@ A new Jmeter test plan for torchserve benchmark can be added as follows:

* This assumes you know how to create a jmeter test plan. If not, please use this jmeter [guide](https://jmeter.apache.org/usermanual/build-test-plan.html)
* Here, we will show how the 'MMS Benchmarking Image Input Model Test Plan' can be added.
This test plan doesn following:
This test plan does following:

* Register a model - `default is resnet-18`
* Scale up to add workers for inference
@@ -40,7 +40,7 @@ e.g. on macOS, type `jmeter` on commandline
![](img/inference.png)
* Right Click on test plan to add `tearDown Thread Group` and configure the required details as indicated in the following screenshot

![](img/teardn-tg.png)
![](img/teardown-tg.png)
* Right Click on `tearDown Thread Group` to add `HTTP Request` and configure `unregister` request per given screenshot

![](img/unregister.png)
File renamed without changes
32 changes: 16 additions & 16 deletions benchmarks/sample_report.md
@@ -10,29 +10,29 @@ TorchServe Benchmark on gpu

|version|Benchmark|Batch size|Batch delay|Workers|Model|Concurrency|Requests|TS failed requests|TS throughput|TS latency P50|TS latency P90|TS latency P99|TS latency mean|TS error rate|Model_p50|Model_p90|Model_p99|predict_mean|handler_time_mean|waiting_time_mean|worker_thread_mean|cpu_percentage_mean|memory_percentage_mean|gpu_percentage_mean|gpu_memory_percentage_mean|gpu_memory_used_mean|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|master|AB|1|100|4|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|[input](10000)|0|2345.99|4|5|6|4.263|0.0|1.04|1.15|1.53|1.06|1.02|1.93|0.28|0.0|0.0|0.0|0.0|0.0|
|master|AB|2|100|4|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|[input](10000)|0|3261.31|3|4|5|3.066|0.0|1.36|1.91|2.18|1.45|1.41|0.17|0.44|0.0|0.0|0.0|0.0|0.0|
|master|AB|4|100|4|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|[input](10000)|0|2457.64|4|6|7|4.069|0.0|1.89|2.2|2.96|1.97|1.94|0.53|0.59|0.0|0.0|0.0|0.0|0.0|
|master|AB|8|100|4|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|[input](10000)|0|1640.2|5|9|11|6.097|0.0|2.95|3.15|3.43|3.0|2.96|1.06|0.8|0.0|0.0|0.0|0.0|0.0|
|master|AB|1|100|8|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|[input](10000)|0|3444.57|3|3|4|2.903|0.0|1.32|1.68|1.87|1.37|1.34|0.08|0.46|0.0|0.0|0.0|0.0|0.0|
|master|AB|2|100|8|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|[input](10000)|0|3275.88|3|4|5|3.053|0.0|1.61|2.23|2.51|1.72|1.68|0.01|0.55|0.0|0.0|0.0|0.0|0.0|
|master|AB|4|100|8|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|[input](10000)|0|2346.15|4|6|8|4.262|0.0|2.01|2.42|3.19|2.1|2.06|0.57|0.57|0.0|0.0|0.0|0.0|0.0|
|master|AB|8|100|8|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|[input](10000)|0|1572.82|5|9|12|6.358|0.0|3.09|3.39|4.7|3.15|3.11|1.1|0.82|0.0|0.0|0.0|0.0|0.0|
|master|AB|1|100|4|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|10000|0|2345.99|4|5|6|4.263|0.0|1.04|1.15|1.53|1.06|1.02|1.93|0.28|0.0|0.0|0.0|0.0|0.0|
|master|AB|2|100|4|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|10000|0|3261.31|3|4|5|3.066|0.0|1.36|1.91|2.18|1.45|1.41|0.17|0.44|0.0|0.0|0.0|0.0|0.0|
|master|AB|4|100|4|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|10000|0|2457.64|4|6|7|4.069|0.0|1.89|2.2|2.96|1.97|1.94|0.53|0.59|0.0|0.0|0.0|0.0|0.0|
|master|AB|8|100|4|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|10000|0|1640.2|5|9|11|6.097|0.0|2.95|3.15|3.43|3.0|2.96|1.06|0.8|0.0|0.0|0.0|0.0|0.0|
|master|AB|1|100|8|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|10000|0|3444.57|3|3|4|2.903|0.0|1.32|1.68|1.87|1.37|1.34|0.08|0.46|0.0|0.0|0.0|0.0|0.0|
|master|AB|2|100|8|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|10000|0|3275.88|3|4|5|3.053|0.0|1.61|2.23|2.51|1.72|1.68|0.01|0.55|0.0|0.0|0.0|0.0|0.0|
|master|AB|4|100|8|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|10000|0|2346.15|4|6|8|4.262|0.0|2.01|2.42|3.19|2.1|2.06|0.57|0.57|0.0|0.0|0.0|0.0|0.0|
|master|AB|8|100|8|[.mar](https://torchserve.pytorch.org/mar_files/mnist_v2.mar)|10|10000|0|1572.82|5|9|12|6.358|0.0|3.09|3.39|4.7|3.15|3.11|1.1|0.82|0.0|0.0|0.0|0.0|0.0|

## eager_mode_vgg16

|version|Benchmark|Batch size|Batch delay|Workers|Model|Concurrency|Requests|TS failed requests|TS throughput|TS latency P50|TS latency P90|TS latency P99|TS latency mean|TS error rate|Model_p50|Model_p90|Model_p99|predict_mean|handler_time_mean|waiting_time_mean|worker_thread_mean|cpu_percentage_mean|memory_percentage_mean|gpu_percentage_mean|gpu_memory_percentage_mean|gpu_memory_used_mean|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|master|AB|1|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16.mar)|100|[input](10000)|0|277.64|353|384|478|360.178|0.0|13.27|14.49|18.55|13.61|13.57|343.11|0.35|69.2|11.3|22.25|12.4|2004.0|
|master|AB|2|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16.mar)|100|[input](10000)|0|284.7|344|377|462|351.248|0.0|25.69|29.79|49.7|26.86|26.82|320.57|0.84|33.3|11.29|16.25|12.39|2002.0|
|master|AB|4|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16.mar)|100|[input](10000)|0|298.66|331|355|386|334.831|0.0|50.61|54.65|72.63|51.69|51.64|278.95|1.33|66.7|11.63|16.0|12.81|2070.0|
|master|AB|8|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16.mar)|100|[input](10000)|0|302.97|321|367|401|330.066|0.0|100.17|108.43|134.97|102.03|101.97|222.5|2.62|0.0|12.1|15.25|13.4|2166.0|
|master|AB|1|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16.mar)|100|10000|0|277.64|353|384|478|360.178|0.0|13.27|14.49|18.55|13.61|13.57|343.11|0.35|69.2|11.3|22.25|12.4|2004.0|
|master|AB|2|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16.mar)|100|10000|0|284.7|344|377|462|351.248|0.0|25.69|29.79|49.7|26.86|26.82|320.57|0.84|33.3|11.29|16.25|12.39|2002.0|
|master|AB|4|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16.mar)|100|10000|0|298.66|331|355|386|334.831|0.0|50.61|54.65|72.63|51.69|51.64|278.95|1.33|66.7|11.63|16.0|12.81|2070.0|
|master|AB|8|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16.mar)|100|10000|0|302.97|321|367|401|330.066|0.0|100.17|108.43|134.97|102.03|101.97|222.5|2.62|0.0|12.1|15.25|13.4|2166.0|

## scripted_mode_vgg16

|version|Benchmark|Batch size|Batch delay|Workers|Model|Concurrency|Requests|TS failed requests|TS throughput|TS latency P50|TS latency P90|TS latency P99|TS latency mean|TS error rate|Model_p50|Model_p90|Model_p99|predict_mean|handler_time_mean|waiting_time_mean|worker_thread_mean|cpu_percentage_mean|memory_percentage_mean|gpu_percentage_mean|gpu_memory_percentage_mean|gpu_memory_used_mean|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|master|AB|1|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16_scripted.mar)|100|[input](10000)|0|282.06|351|368|430|354.53|0.0|13.18|13.91|18.68|13.41|13.37|337.73|0.33|80.0|11.32|23.25|12.4|2004.0|
|master|AB|2|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16_scripted.mar)|100|[input](10000)|0|288.03|345|363|406|347.18|0.0|25.68|29.08|40.61|26.53|26.49|316.93|0.83|37.5|11.31|16.5|12.39|2002.0|
|master|AB|4|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16_scripted.mar)|100|[input](10000)|0|296.25|332|356|447|337.552|0.0|50.72|55.09|84.0|52.09|52.04|281.21|1.34|0.0|11.63|16.0|12.81|2070.0|
|master|AB|8|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16_scripted.mar)|100|[input](10000)|0|301.07|324|367|407|332.147|0.0|100.49|109.71|136.18|102.69|102.63|223.7|2.59|0.0|0.0|0.0|0.0|0.0|
|master|AB|1|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16_scripted.mar)|100|10000|0|282.06|351|368|430|354.53|0.0|13.18|13.91|18.68|13.41|13.37|337.73|0.33|80.0|11.32|23.25|12.4|2004.0|
|master|AB|2|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16_scripted.mar)|100|10000|0|288.03|345|363|406|347.18|0.0|25.68|29.08|40.61|26.53|26.49|316.93|0.83|37.5|11.31|16.5|12.39|2002.0|
|master|AB|4|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16_scripted.mar)|100|10000|0|296.25|332|356|447|337.552|0.0|50.72|55.09|84.0|52.09|52.04|281.21|1.34|0.0|11.63|16.0|12.81|2070.0|
|master|AB|8|100|4|[.mar](https://torchserve.pytorch.org/mar_files/vgg16_scripted.mar)|100|10000|0|301.07|324|367|407|332.147|0.0|100.49|109.71|136.18|102.69|102.63|223.7|2.59|0.0|0.0|0.0|0.0|0.0|
18 changes: 9 additions & 9 deletions binaries/README.md
@@ -1,6 +1,6 @@
# Building TorchServe and Torch-Model-Archiver release binaries
1. Make sure all the dependencies are installed
##### Linux and MacOs:
##### Linux and macOS:
```bash
python ts_scripts/install_dependencies.py --environment=dev
```
@@ -13,7 +13,7 @@

2. To build a `torchserve` and `torch-model-archiver` wheel execute:
##### Linux and MacOs:
##### Linux and macOS:
```bash
python binaries/build.py
```
@@ -26,7 +26,7 @@
> For additional info on conda builds refer to [this readme](conda/README.md)
3. Build outputs are located at
##### Linux and MacOs:
##### Linux and macOS:
- Wheel files
`dist/torchserve-*.whl`
`model-archiver/dist/torch_model_archiver-*.whl`
@@ -44,7 +44,7 @@

# Install torchserve and torch-model-archiver binaries
1. To install torchserve using the newly created binaries execute:
##### Linux and MacOs:
##### Linux and macOS:
```bash
python binaries/install.py
```
@@ -56,7 +56,7 @@
```
2. Alternatively, you can manually install binaries
- Using wheel files
##### Linux and MacOs:
##### Linux and macOS:
```bash
pip install dist/torchserve-*.whl
pip install model-archiver/dist/torch_model_archiver-*.whl
@@ -70,7 +70,7 @@
pip install .\workflow-archiver\dist\<torch_workflow_archiver_wheel>
```
- Using conda packages
##### Linux and MacOs:
##### Linux and macOS:
```bash
conda install --channel ./binaries/conda/output -y torchserve torch-model-archiver torch-workflow-archiver
```
@@ -80,7 +80,7 @@

# Building TorchServe, Torch-Model-Archiver & Torch-WorkFlow-Archiver nightly binaries
1. Make sure all the dependencies are installed
##### Linux and MacOs:
##### Linux and macOS:
```bash
python ts_scripts/install_dependencies.py --environment=dev
```
@@ -93,7 +93,7 @@


2. To build a `torchserve`, `torch-model-archiver` & `torch-workflow-archiver` nightly wheel execute:
##### Linux and MacOs:
##### Linux and macOS:
```bash
python binaries/build.py --nightly
```
@@ -106,7 +106,7 @@
> For additional info on conda builds refer to [this readme](conda/README.md)

3. Build outputs are located at
##### Linux and MacOs:
##### Linux and macOS:
- Wheel files
`dist/torchserve-*.whl`
`model-archiver/dist/torch_model_archiver-*.whl`
4 changes: 2 additions & 2 deletions docker/README.md
@@ -29,7 +29,7 @@ cd serve/docker
# Create TorchServe docker image

Use `build_image.sh` script to build the docker images. The script builds the `production`, `dev` and `codebuild` docker images.
| Parameter | Desciption |
| Parameter | Description |
|------|------|
|-h, --help|Show script help|
|-b, --branch_name|Specify a branch name to use. Default: master |
@@ -271,7 +271,7 @@ torch-model-archiver --model-name densenet161 --version 1.0 --model-file /home/m

Refer [torch-model-archiver](../model-archiver/README.md) for details.

6. desnet161.mar file should be present at /home/model-server/model-store
6. densenet161.mar file should be present at /home/model-server/model-store
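For readers following along, the truncated `torch-model-archiver` command above might be fleshed out as in this sketch; every path and the serialized-file name below are illustrative assumptions, not values taken from this diff:

```bash
# Hypothetical full invocation; all paths are placeholders. The resulting
# densenet161.mar lands in /home/model-server/model-store.
torch-model-archiver --model-name densenet161 --version 1.0 \
  --model-file /path/to/densenet_161/model.py \
  --serialized-file /path/to/densenet161.pth \
  --handler image_classifier \
  --export-path /home/model-server/model-store
```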

# Running TorchServe in a Production Docker Environment.
