ExecuTorch is a PyTorch platform that provides infrastructure to run PyTorch programs everywhere from AR/VR wearables to standard on-device iOS and Android mobile deployments. One of ExecuTorch's main goals is to enable broader customization and deployment of PyTorch programs.
The `executorch` pip package is in alpha.
- Supported python versions: 3.10, 3.11
- Compatible systems: Linux x86_64, macOS aarch64
The prebuilt `executorch.extension.pybindings.portable_lib` module included in this package provides a way to run ExecuTorch `.pte` files, with some restrictions:
- Only core ATen operators are linked into the prebuilt module
- Only the XNNPACK backend delegate is linked into the prebuilt module
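
Within those restrictions, a `.pte` file exported with core ATen operators (optionally delegated to XNNPACK) can be loaded and run directly from Python. Below is a minimal sketch; the `_load_for_executorch` loader and `forward` call reflect the pybindings module shipped in this package, but treat the exact signatures and the `model.pte` path as assumptions and check the runtime Python API reference for your installed version.

```python
# Minimal sketch: run a .pte file with the prebuilt pybindings module.
import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch

# Load a previously exported ExecuTorch program ("model.pte" is a placeholder path).
module = _load_for_executorch("model.pte")

# Inputs are passed as a sequence of tensors matching the exported signature.
inputs = [torch.randn(1, 3, 224, 224)]
outputs = module.forward(inputs)
print(outputs)
```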
Please visit the ExecuTorch website for tutorials and documentation. Here are some starting points:
- Getting Started - Set up the ExecuTorch environment and run PyTorch models locally.
- Working with local LLMs - Learn how to use ExecuTorch to export and accelerate a large language model from scratch.
- Exporting to ExecuTorch - Learn the fundamentals of exporting a PyTorch `nn.Module` to ExecuTorch and optimizing its performance using quantization and hardware delegation (see the minimal export sketch after this list).
- Running LLaMA on iOS and Android devices - Build and run LLaMA in a demo mobile app, and learn how to integrate models with your own apps.
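
As a companion to the export tutorial linked above, here is a minimal sketch of turning a small `nn.Module` into a `.pte` file. It follows the `torch.export` -> `to_edge` -> `to_executorch` flow; treat the exact function names, options, and the output filename as assumptions and consult the tutorial for the authoritative, up-to-date steps (including quantization and backend delegation).

```python
# Minimal export sketch (assumes the torch.export -> to_edge -> to_executorch flow).
import torch
from executorch.exir import to_edge

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1

model = AddOne().eval()
example_inputs = (torch.randn(4),)

# 1. Capture the module as an ExportedProgram.
exported = torch.export.export(model, example_inputs)

# 2. Lower to the ExecuTorch edge dialect, then to an ExecuTorch program.
executorch_program = to_edge(exported).to_executorch()

# 3. Serialize to a .pte file that the runtime (e.g. portable_lib above) can load.
with open("add_one.pte", "wb") as f:
    f.write(executorch_program.buffer)
```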