From a306a4abe476d846758dfc6d20a151980a02742a Mon Sep 17 00:00:00 2001
From: Shubham Kushwaha <65639964+EricLiclair@users.noreply.github.com>
Date: Thu, 5 Dec 2024 09:00:47 +0530
Subject: [PATCH] Update custom-c++-operators-devguide.rst

---
 .../programming-guide/custom-c++-operators-devguide.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/neuron-customops/programming-guide/custom-c++-operators-devguide.rst b/neuron-customops/programming-guide/custom-c++-operators-devguide.rst
index 32191afc..1724a80e 100644
--- a/neuron-customops/programming-guide/custom-c++-operators-devguide.rst
+++ b/neuron-customops/programming-guide/custom-c++-operators-devguide.rst
@@ -157,7 +157,7 @@ In the example above, name refers to the name of the library file to be created
 .. warning::
     The library file name should not be "builtin" as it is a reserved keyword.
 
-CustomOp also supports multicore execution mode. If you want to the library to run in multicore mode, pass the flag ``multicore=True`` into the ``load`` API. Notice that the execution mode is specified at the library level, so all the functions in the library run in the same mode. For more details of multicore CustomOp, please refer to `Using multiple GPSIMD cores` section in :ref:`custom-ops-api-ref-guide`.
+CustomOp also supports multicore execution mode. If you want the library to run in multicore mode, pass the flag ``multicore=True`` into the ``load`` API. Notice that the execution mode is specified at the library level, so all the functions in the library run in the same mode. For more details of multicore CustomOp, please refer to `Using multiple GPSIMD cores` section in :ref:`custom-ops-api-ref-guide`.
 
 Similar to PyTorch, the Neuron custom op will be available at ``torch.ops.<lib_name>.<op_name>`` where ``lib_name`` is defined in the ``NEURON_LIBRARY`` macro, and ``op_name`` is defined in the call to ``m.def``. ::
 
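
For context, a minimal sketch of loading a CustomOp library in multicore mode and calling the resulting op. It assumes the ``load`` helper from ``torch_neuronx.xla_impl.custom_op`` with ``name``/``compute_srcs``/``shape_srcs``/``build_directory`` keyword arguments as used elsewhere in the devguide; the library name ``relu``, the op name ``relu_forward``, and the source file names are placeholders, not part of this patch. ::

    import os
    import torch
    from torch_neuronx.xla_impl import custom_op

    # Build and load the CustomOp library with multicore execution enabled.
    # The mode is set per library, so every function registered in it runs
    # in multicore mode.
    custom_op.load(
        name='relu',                # placeholder library name (NEURON_LIBRARY)
        compute_srcs=['relu.cpp'],  # placeholder kernel source
        shape_srcs=['shape.cpp'],   # placeholder shape-function source
        build_directory=os.getcwd(),
        multicore=True,             # flag described in the patched paragraph
    )

    # The op is exposed at torch.ops.<lib_name>.<op_name>, where lib_name comes
    # from the NEURON_LIBRARY macro and op_name from the m.def(...) call.
    out = torch.ops.relu.relu_forward(torch.rand(8192))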