Remove "under active development" in XNNPACK (pytorch#6048)
Summary: Pull Request resolved: pytorch#6048

Reviewed By: digantdesai

Differential Revision: D64115305

fbshipit-source-id: 3eb526f395b8eab88e1730a1dee3ee1ea3f32171
mergennachin authored and facebook-github-bot committed Oct 9, 2024
1 parent 36a5bc6 commit cb3a546
Showing 1 changed file with 0 additions and 4 deletions.
docs/source/native-delegates-executorch-xnnpack-delegate.md (0 additions, 4 deletions)
@@ -2,10 +2,6 @@
 
 This is a high-level overview of the ExecuTorch XNNPACK backend delegate. This high performance delegate is aimed to reduce CPU inference latency for ExecuTorch models. We will provide a brief introduction to the XNNPACK library and explore the delegate’s overall architecture and intended use cases.
 
-::::{note}
-XNNPACK delegate is currently under active development, and may change in the future
-::::
-
 ## What is XNNPACK?
 XNNPACK is a library of highly-optimized neural network operators for ARM, x86, and WebAssembly architectures in Android, iOS, Windows, Linux, and macOS environments. It is an open source project, you can find more information about it on [github](https://github.com/google/XNNPACK).

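For context, the doc page touched by this diff describes lowering a PyTorch model to the XNNPACK delegate. The snippet below is a minimal sketch of that flow, assuming the `XnnpackPartitioner` and `to_edge` Python APIs roughly as ExecuTorch exposed them around this release; the example model and exact import paths are illustrative and not part of this commit.

```python
# Sketch: lowering a small model to the ExecuTorch XNNPACK delegate.
# Assumes ExecuTorch Python APIs roughly as documented around this release;
# import paths and names may differ between versions.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge

# A small illustrative model (not from the doc page).
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 4),
).eval()
example_inputs = (torch.randn(1, 8),)

# Export to an ATen graph, convert to the Edge dialect, then hand
# XNNPACK-supported subgraphs to the delegate via the partitioner.
exported = torch.export.export(model, example_inputs)
edge = to_edge(exported)
edge = edge.to_backend(XnnpackPartitioner())

# Serialize the final program; the resulting .pte can be executed on the
# ExecuTorch runtime with the XNNPACK backend registered.
et_program = edge.to_executorch()
with open("model_xnnpack.pte", "wb") as f:
    f.write(et_program.buffer)
```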
