Update PT Cheat page to stop referencing torchscript #3139

Merged 2 commits on Oct 31, 2024
beginner_source/ptcheat.rst: 21 changes (4 additions, 17 deletions)
@@ -22,27 +22,12 @@ Neural Network API

     import torch.nn as nn            # neural networks
     import torch.nn.functional as F  # layers, activations and more
     import torch.optim as optim      # optimizers e.g. gradient descent, ADAM, etc.
-    from torch.jit import script, trace # hybrid frontend decorator and tracing jit

 See `autograd <https://pytorch.org/docs/stable/autograd.html>`__,
 `nn <https://pytorch.org/docs/stable/nn.html>`__,
 `functional <https://pytorch.org/docs/stable/nn.html#torch-nn-functional>`__
 and `optim <https://pytorch.org/docs/stable/optim.html>`__

-TorchScript and JIT
--------------------
-
-.. code-block:: python
-
-    torch.jit.trace() # takes your module or function and an example
-                      # data input, and traces the computational steps
-                      # that the data encounters as it progresses through the model
-
-    @script # decorator used to indicate data-dependent
-            # control flow within the code being traced
-
-See `Torchscript <https://pytorch.org/docs/stable/jit.html>`__
-
 ONNX
 ----

@@ -225,8 +210,10 @@ Optimizers

     opt = optim.x(model.parameters(), ...) # create optimizer
     opt.step()                             # update weights
-    optim.X                                # where X is SGD, Adadelta, Adagrad, Adam,
-                                           # AdamW, SparseAdam, Adamax, ASGD,
+    opt.zero_grad()                        # clear the gradients
+    optim.X                                # where X is SGD, AdamW, Adam,
+                                           # Adafactor, NAdam, RAdam, Adadelta,
+                                           # Adagrad, SparseAdam, Adamax, ASGD,
                                            # LBFGS, RMSprop or Rprop

 See `optimizers <https://pytorch.org/docs/stable/optim.html>`__
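
For context on the added opt.zero_grad() line: gradients accumulate across backward passes, so a typical training step clears them before computing new ones. A minimal sketch of that step; the toy model, loss function, and random batch below are illustrative and not taken from the cheat sheet:

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.optim as optim

    model = nn.Linear(10, 2)                      # toy model for illustration
    loss_fn = nn.CrossEntropyLoss()
    opt = optim.SGD(model.parameters(), lr=0.01)  # any optim.X from the list above works here

    x = torch.randn(8, 10)                        # fake input batch
    y = torch.randint(0, 2, (8,))                 # fake labels

    opt.zero_grad()                               # clear the gradients
    loss = loss_fn(model(x), y)                   # forward pass
    loss.backward()                               # compute gradients
    opt.step()                                    # update weights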
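The first hunk keeps torch.nn, torch.nn.functional, and torch.optim as the cheat sheet's Neural Network API imports. A small sketch of how nn and F are typically combined; the Net class and layer sizes are made up for illustration:

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):                 # layers declared with nn
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(10, 16)
            self.fc2 = nn.Linear(16, 2)

        def forward(self, x):
            x = F.relu(self.fc1(x))       # activation applied with F
            return self.fc2(x)

    net = Net()
    out = net(torch.randn(4, 10))         # forward pass on a fake batch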