Merge branch 'main' into dependabot/pip/dot-ci/docker/jinja2-3.1.4
svekars authored Sep 5, 2024
2 parents b8c99ce + 0e530ea commit 93003e4
Showing 5 changed files with 20 additions and 10 deletions.
5 changes: 3 additions & 2 deletions beginner_source/onnx/intro_onnx.py
@@ -39,13 +39,14 @@
   - `ONNX <https://onnx.ai>`_ standard library
   - `ONNX Script <https://onnxscript.ai>`_ library that enables developers to author ONNX operators,
-    functions and models using a subset of Python in an expressive, and yet simple fashion.
+    functions and models using a subset of Python in an expressive, and yet simple fashion
+  - `ONNX Runtime <https://onnxruntime.ai>`_ accelerated machine learning library.
  They can be installed through `pip <https://pypi.org/project/pip/>`_:
  .. code-block:: bash
-     pip install --upgrade onnx onnxscript
+     pip install --upgrade onnx onnxscript onnxruntime
  To validate the installation, run the following commands:
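For context, the validation step referenced above amounts to importing each package and printing its version. A minimal sketch of such a check (the three package names come from the pip command in this diff; the tutorial's own validation commands live in the unchanged part of the file):

    import onnx
    import onnxscript
    import onnxruntime

    # A successful import plus a version printout confirms each install.
    print("onnx:", onnx.__version__)
    print("onnxscript:", onnxscript.__version__)
    print("onnxruntime:", onnxruntime.__version__)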
6 changes: 6 additions & 0 deletions conf.py
@@ -67,6 +67,12 @@
 #
 # needs_sphinx = '1.0'

+html_meta = {
+    'description': 'Master PyTorch with our step-by-step tutorials for all skill levels. Start your journey to becoming a PyTorch expert today!',
+    'keywords': 'PyTorch, tutorials, Getting Started, deep learning, AI',
+    'author': 'PyTorch Contributors'
+}
+
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
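For context, Sphinx emits each html_meta entry as a <meta> tag in the <head> of every generated page. A minimal post-build sanity check, assuming the default _build/html output directory (the path and the check itself are illustrative, not part of this commit):

    from pathlib import Path

    # Confirm the three meta tags from conf.py landed in a built page.
    html = Path("_build/html/index.html").read_text(encoding="utf-8")
    for name in ("description", "keywords", "author"):
        assert f'name="{name}"' in html, f"<meta name={name!r}> is missing"
    print("html_meta tags present")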
1 change: 1 addition & 0 deletions index.rst
@@ -3,6 +3,7 @@ Welcome to PyTorch Tutorials

 **What's new in PyTorch tutorials?**

+* `torch.export AOTInductor Tutorial for Python runtime (Beta) <https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html>`__
 * `A guide on good usage of non_blocking and pin_memory() in PyTorch <https://pytorch.org/tutorials/intermediate/pinmem_nonblock.html>`__
 * `Introduction to Distributed Pipeline Parallelism <https://pytorch.org/tutorials/intermediate/pipelining_tutorial.html>`__
 * `Introduction to Libuv TCPStore Backend <https://pytorch.org/tutorials/intermediate/TCPStore_libuv_backend.html>`__
10 changes: 4 additions & 6 deletions prototype_source/gpu_quantization_torchao_tutorial.py
@@ -35,14 +35,12 @@
 #
 # Segment Anything Model checkpoint setup:
 #
-# 1. Go to the `segment-anything repo <checkpoint https://github.com/facebookresearch/segment-anything/tree/main#model-checkpoints>`_ and download the ``vit_h`` checkpoint. Alternatively, you can just use ``wget``: `wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth --directory-prefix=<path>
+# 1. Go to the `segment-anything repo checkpoint <https://github.com/facebookresearch/segment-anything/tree/main#model-checkpoints>`_ and download the ``vit_h`` checkpoint. Alternatively, you can use ``wget`` (for example, ``wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth --directory-prefix=<path>``).
 # 2. Pass in that directory by editing the code below to say:
 #
-# .. code-block::
-#
-#    {sam_checkpoint_base_path}=<path>
+# .. code-block:: bash
 #
-# This was run on an A100-PG509-200 power limited to 330.00 W
+#    {sam_checkpoint_base_path}=<path>
 #

 import torch

@@ -297,7 +295,7 @@ def get_sam_model(only_one_block=False, batchsize=1):
 # -----------------
 # In this tutorial, we have learned about the quantization and optimization techniques
 # on the example of the segment anything model.
-
+#
 # In the end, we achieved a full-model apples to apples quantization speedup
 # of about 7.7% on batch size 16 (677.28ms to 729.65ms). We can push this a
 # bit further by increasing the batch size and optimizing other parts of
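For context, steps 1 and 2 of the checkpoint setup reduce to downloading sam_vit_h_4b8939.pth and pointing the tutorial at its directory. A minimal sketch, assuming wget was run with --directory-prefix=~/checkpoints (the directory name is hypothetical):

    import os

    # Directory passed to wget via --directory-prefix in step 1.
    sam_checkpoint_base_path = os.path.expanduser("~/checkpoints")
    checkpoint = os.path.join(sam_checkpoint_base_path, "sam_vit_h_4b8939.pth")
    assert os.path.isfile(checkpoint), "download the vit_h checkpoint first (step 1)"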
8 changes: 6 additions & 2 deletions recipes_source/torch_export_aoti_python.py
@@ -1,7 +1,11 @@
 # -*- coding: utf-8 -*-

 """
-(Beta) ``torch.export`` AOTInductor Tutorial for Python runtime
+.. meta::
+    :description: An end-to-end example of how to use AOTInductor for Python runtime.
+    :keywords: torch.export, AOTInductor, torch._inductor.aot_compile, torch._export.aot_load
+
+``torch.export`` AOTInductor Tutorial for Python runtime (Beta)
 ===============================================================
 **Author:** Ankith Gunapal, Bin Bao, Angela Yi
 """

@@ -18,7 +22,7 @@
 # a shared library that can be run in a non-Python environment.
 #
 #
-# In this tutorial, you will learn an end-to-end example of how to use AOTInductor for python runtime.
+# In this tutorial, you will learn an end-to-end example of how to use AOTInductor for Python runtime.
 # We will look at how to use :func:`torch._inductor.aot_compile` along with :func:`torch.export.export` to generate a
 # shared library. Additionally, we will examine how to execute the shared library in Python runtime using :func:`torch._export.aot_load`.
 # You will learn about the speed up seen in the first inference time using AOTInductor, especially when using
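For context, the export, compile, and load flow that this recipe walks through looks roughly like the sketch below, using the three functions named in the text. AOTInductor was in beta at the time of this commit, so treat the exact calls as illustrative rather than stable API:

    import torch

    class Plus1(torch.nn.Module):
        def forward(self, x):
            return x + 1

    model = Plus1().eval()
    example_inputs = (torch.randn(4),)

    # Export the model, then AOT-compile it into a shared library on disk.
    exported = torch.export.export(model, example_inputs)
    with torch.no_grad():
        so_path = torch._inductor.aot_compile(exported.module(), example_inputs)

    # Load the shared library back into the Python runtime and execute it.
    loaded = torch._export.aot_load(so_path, device="cpu")
    print(loaded(*example_inputs))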
