Release 1.4.0, corresponding to NGC container 19.07
NVIDIA TensorRT Inference Server
The NVIDIA TensorRT Inference Server provides a cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server.
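As a quick illustration of the HTTP endpoint, the server's readiness can be checked with a plain HTTP GET. This is a minimal sketch assuming a server running locally on the default HTTP port 8000:

```python
import requests

# The server exposes liveness/readiness checks as plain HTTP GET endpoints
# on its HTTP service (default port 8000).
r = requests.get("http://localhost:8000/api/health/ready")
print(r.status_code)  # 200 indicates the server is ready for inference requests
```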
What's New In 1.4.0
- Added libtorch as a new backend. PyTorch models manually decorated or automatically traced to produce TorchScript can now be run directly by the inference server (see the tracing sketch after this list).
- Build system converted from bazel to CMake. The new CMake-based build system is more transparent, portable, and modular.
- To simplify the creation of custom backends, a Custom Backend SDK and improved documentation are now available.
- Improved AsyncRun API in C++ and Python client libraries (see the sketch after this list).
- perf_client can now use user-supplied input data (previously perf_client could only use random or zero input data).
- perf_client now reports latency at multiple confidence percentiles (p50, p90, p95, p99) as well as a user-supplied percentile that is also used to stabilize latency results (an example invocation covering both new options appears after this list).
- Improvements to automatic model configuration creation (--strict-model-config=false); an example configuration appears after this list.
- C++ and Python client libraries now allow additional HTTP headers to be specified when using the HTTP protocol.
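For the new libtorch backend, a PyTorch model must first be converted to TorchScript. The sketch below traces a torchvision model and saves it into the standard model repository layout; the model name resnet50_libtorch, the repository path, and the input shape are placeholders:

```python
import os
import torch
import torchvision

# Trace an eager-mode PyTorch model to produce a TorchScript module.
model = torchvision.models.resnet50(pretrained=True).eval()
example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# The libtorch backend loads the TorchScript file from the usual
# model repository layout: <model-repository>/<model-name>/<version>/model.pt
os.makedirs("model_repository/resnet50_libtorch/1", exist_ok=True)
traced.save("model_repository/resnet50_libtorch/1/model.pt")
```

Models written with the @torch.jit.script decorator ("manually decorated", per the note above) can be saved to the same location in the same way.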
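The asynchronous API in the Python client follows a submit-then-collect pattern. The following is a minimal sketch in the style of the tensorrtserver.api client; the model name, tensor names, and shapes are placeholders, and since this release improves the AsyncRun API the exact async_run/get_async_run_results signatures may differ, so consult the client library documentation for the current form:

```python
import numpy as np
from tensorrtserver.api import InferContext, ProtocolType

# Inference context for one model; -1 selects the latest model version.
ctx = InferContext("localhost:8000", ProtocolType.from_str("http"),
                   "resnet50_libtorch", -1)

# Submit the request without blocking; async_run returns a request id.
input_data = np.zeros((3, 224, 224), dtype=np.float32)
request_id = ctx.async_run({"INPUT__0": [input_data]},
                           {"OUTPUT__0": InferContext.ResultFormat.RAW},
                           batch_size=1)

# ... do other work here, then block until this request completes.
results = ctx.get_async_run_results(request_id, True)
```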
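As an illustration of the new perf_client capabilities, the invocation below supplies input data from files and stabilizes on the 95th percentile. The flag spellings (--input-data, --percentile) are assumptions based on the perf_client documentation; run perf_client --help in the container to confirm them:

```
perf_client -m resnet50_libtorch \
    --input-data=/path/to/input/data \
    --percentile=95
```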
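With --strict-model-config=false the server can derive a model's configuration automatically for formats that carry the needed metadata, such as TensorRT plans and TensorFlow saved-models. The minimal config.pbtxt sketch below shows the kind of hand-written configuration this replaces; the name, platform, and tensor shapes are illustrative only:

```
name: "my_model"
platform: "tensorrt_plan"
max_batch_size: 8
input [
  {
    name: "input0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```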
Known Issues
- Google Cloud Storage (GCS) support is not available in this release.
Client Libraries and Examples
Ubuntu 16.04 and Ubuntu 18.04 builds of the client libraries and examples are included in this release in the attached v1.4.0_ubuntu1604.clients.tar.gz and v1.4.0_ubuntu1804.clients.tar.gz files. See the documentation section 'Building the Client Libraries and Examples' for more information on using these files.
Custom Backend SDK
Ubuntu 16.04 and Ubuntu 18.04 builds of the custom backend SDK are included in this release in the attached v1.4.0_ubuntu1604.custombackend.tar.gz and v1.4.0_ubuntu1804.custombackend.tar.gz files. See the documentation section 'Building a Custom Backend' for more information on using these files.