
Release 2.10.0 corresponding to NGC container 21.05

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New In 2.10.0

  • Triton on Jetson now supports ONNX via the ONNX Runtime backend.

  • The Triton server and HTTP clients (Python and C++) now support compression. A minimal Python client sketch appears after this list.

  • Ragged batching is now supported for ONNX models.

  • The Triton clients have moved to a separate repo: https://github.com/triton-inference-server/client

  • Trace now correctly reports all timestamps for all backends.

  • NVTX annotations are fixed.

  • The legacy custom backend support is removed. All custom backends must be implemented using the TRITONBACKEND API described here: https://github.com/triton-inference-server/backend

  • Added CLI subcommands in Model Analyzer for profile, analyze, and report. See CLI documentation for usage instructions.

  • Model Analyzer can create a detailed report of any specific model configuration with the report subcommand.

  • CPU-only mode is now supported in Model Analyzer.
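
As a brief illustration of the new client-side compression support, the following minimal Python sketch requests gzip compression on an HTTP inference call. The model name, tensor names, and shape are placeholders, and it is assumed that the Python HTTP client exposes request_compression_algorithm and response_compression_algorithm arguments on infer().

# Hedged sketch: HTTP inference with gzip compression on request and response.
# "my_model", "INPUT0", "OUTPUT0", and the shape are placeholders, not real names.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

data = np.ones([1, 16], dtype=np.float32)
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(data)

result = client.infer(
    "my_model",
    inputs=[inp],
    request_compression_algorithm="gzip",
    response_compression_algorithm="gzip",
)
print(result.as_numpy("OUTPUT0"))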

Known Issues

  • There are backwards-incompatible changes in the example Python client shared-memory support library when that library is used for tensors of type BYTES. The utils.serialize_byte_tensor() and utils.deserialize_byte_tensor() functions now return np.object_ numpy arrays where previously they returned np.bytes_ numpy arrays. Code depending on np.bytes_ must be updated. This change was necessary because the np.bytes_ type removes all trailing zeros from each array element, so binary sequences ending in zero(s) could not be represented with the old behavior. Correct usage of the Python client shared-memory support library is shown in https://github.com/triton-inference-server/server/blob/r21.03/src/clients/python/examples/simple_http_shm_string_client.py. A short illustration of the trailing-zero behavior follows this list.

  • Some versions of Google Kubernetes Engine (GKE) contain a regression in the handling of LD_LIBRARY_PATH that prevents the inference server container from running correctly (see issue 141255952). Use a GKE 1.13 or earlier version or a GKE 1.14.6 or later version to avoid this issue.
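
As a small illustration of why the change described in the first known issue was necessary, the snippet below shows that a np.bytes_ array silently drops trailing zero bytes while a np.object_ array preserves them; the payload value is invented for the example.

# Hedged sketch: np.bytes_ strips trailing zero bytes, np.object_ does not.
import numpy as np

raw = b"binary-payload\x00\x00"            # ends in two zero bytes
as_bytes = np.array([raw], dtype=np.bytes_)
as_object = np.array([raw], dtype=np.object_)

print(len(as_bytes[0]))    # 14 -- the trailing zeros were stripped
print(len(as_object[0]))   # 16 -- the full binary sequence is preserved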

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.10.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.10.0-sdk-win.zip file.

Windows Support

An alpha release of Triton for Windows is provided in the attached file: tritonserver2.10.0-win.zip. This is an alpha release, so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically, in this release:

  • TensorRT models are supported. The TensorRT version is 7.2.2.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.7.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • Only the GRPC endpoint is supported; HTTP/REST is not supported. A minimal GRPC client sketch appears after this list.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.
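
Since only the GRPC endpoint is exposed in this alpha, the following is a minimal Python sketch of a liveness and readiness check over GRPC. It assumes the tritonclient GRPC package is installed on the client machine; the host name and model name are placeholders, and Triton's default GRPC port 8001 is assumed.

# Hedged sketch: query a Windows Triton alpha over GRPC (the only endpoint).
# "windows-host" is a placeholder; 8001 is Triton's default GRPC port.
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="windows-host:8001")

print("live: ", client.is_server_live())
print("ready:", client.is_server_ready())

# Model metadata can also be fetched over GRPC; "my_model" is a placeholder.
print(client.get_model_metadata("my_model"))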

The following components are required for this release and must be installed on the Windows system:

  • NVIDIA Driver release 455 or later.

  • CUDA 11.1.1

  • cuDNN 8.0.5

  • TensorRT 7.2.2

Jetson Jetpack Support

A release of Triton for JetPack 4.5 (https://developer.nvidia.com/embedded/jetpack) is provided in the attached file: tritonserver2.10.0-jetpack4.5.tgz. This release supports TensorFlow 2.4.0, TensorFlow 1.15.5, TensorRT 7.1, and OnnxRuntime 1.7.1, as well as ensembles. For the OnnxRuntime backend the TensorRT execution provider is supported but the OpenVINO execution provider is not supported. System shared memory is supported on Jetson. GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries, as well as the C++ and Python client libraries and examples.

Installation and Usage

The following dependencies must be installed before running Triton.

apt-get update && \
    apt-get install -y --no-install-recommends \
        software-properties-common \
        autoconf \
        automake \
        build-essential \
        cmake \
        git \
        libb64-dev \
        libre2-dev \
        libssl-dev \
        libtool \
        libboost-dev \
        libcurl4-openssl-dev \
        libopenblas-dev \
        rapidjson-dev \
        patchelf \
        zlib1g-dev

To run the clients, the following dependencies must be installed.

apt-get install -y --no-install-recommends \
        curl \
        libopencv-dev=3.2.0+dfsg-4ubuntu0.1 \
        libopencv-core-dev=3.2.0+dfsg-4ubuntu0.1 \
        pkg-config \
        python3 \
        python3-pip \
        python3-dev

pip3 install --upgrade wheel setuptools cython && \
pip3 install --upgrade grpcio-tools numpy future attrdict

The Python wheel for the Python client library is included in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.10.0-py3-none-linux_aarch64.whl[all]

On Jetson, the backend directory must be set explicitly with the --backend-directory flag. Triton defaults to TensorFlow 1.x; a version string is required to select TensorFlow 2.x, as shown below.

  tritonserver --model-repository=/path/to/model_repo --backend-directory=/path/to/tritonserver/backends \
         --backend-config=tensorflow,version=2
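
Once the server is running, a quick sanity check from the Python client installed above can confirm that the models in the repository loaded. This is a hedged sketch: it assumes the server is reachable on the default HTTP port 8000 on the same machine, and "my_model" is a placeholder name.

# Hedged sketch: verify a locally running Triton has loaded its models.
# Assumes the default HTTP port 8000; "my_model" is a placeholder name.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

print("server ready:", client.is_server_ready())
print("model ready: ", client.is_model_ready("my_model"))

# List everything in the model repository and its load state.
for model in client.get_model_repository_index():
    print(model)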