From cb3a546be0d22fe9d7aaa2c56f4555db974aa669 Mon Sep 17 00:00:00 2001
From: Mergen Nachin
Date: Wed, 9 Oct 2024 10:15:11 -0700
Subject: [PATCH] Remove "under active development" in XNNPACK (#6048)

Summary:
Pull Request resolved: https://github.com/pytorch/executorch/pull/6048

Reviewed By: digantdesai

Differential Revision: D64115305

fbshipit-source-id: 3eb526f395b8eab88e1730a1dee3ee1ea3f32171
---
 docs/source/native-delegates-executorch-xnnpack-delegate.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/docs/source/native-delegates-executorch-xnnpack-delegate.md b/docs/source/native-delegates-executorch-xnnpack-delegate.md
index a26ae0c63e..b21f4c4d44 100644
--- a/docs/source/native-delegates-executorch-xnnpack-delegate.md
+++ b/docs/source/native-delegates-executorch-xnnpack-delegate.md
@@ -2,10 +2,6 @@
 This is a high-level overview of the ExecuTorch XNNPACK backend delegate. This high performance delegate is aimed to reduce CPU inference latency for ExecuTorch models. We will provide a brief introduction to the XNNPACK library and explore the delegate’s overall architecture and intended use cases.
 
-::::{note}
-XNNPACK delegate is currently under active development, and may change in the future
-::::
-
 ## What is XNNPACK?
 XNNPACK is a library of highly-optimized neural network operators for ARM, x86, and WebAssembly architectures in Android, iOS, Windows, Linux, and macOS environments. It is an open source project, you can find more information about it on [github](https://github.com/google/XNNPACK).