# TRT_Triton Hands_On (2022/04/21 AI Developer Meetup)

## Set up environments

```bash
docker run --gpus '"device=0"' -it --rm -p 8887:8887 -v $(pwd):/hands_on nvcr.io/nvidia/pytorch:22.03-py3
cd /hands_on
jupyter notebook --ip 0.0.0.0 --port 8887
```

## Additional Resources

- NGC
- DLI: Deploying a Model for Inference at Production Scale
- TRT Quick Start
- TRT Documentation
- TF-TRT
- Torch-TRT
- Triton Server
- Triton Client
- Triton Model Analyzer
- GTC On-Demand - to watch more deep dive sessions
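## Quick environment check (optional)

As a minimal sketch of what the container started above supports, the snippet below compiles a stock torchvision model with Torch-TensorRT (one of the resources listed). The model choice (ResNet-18), the fixed input shape, and FP16 precision are illustrative assumptions, not part of the hands-on notebooks; they only verify that TensorRT acceleration works inside the NGC PyTorch 22.03 image.

```python
# Run inside the nvcr.io/nvidia/pytorch:22.03-py3 container, which ships torch_tensorrt.
# Model, input shape, and precision below are illustrative assumptions.
import torch
import torch_tensorrt
import torchvision.models as models

# Any eager PyTorch model works; ResNet-18 is just a stand-in.
model = models.resnet18(pretrained=False).eval().cuda()

# Compile with Torch-TensorRT for FP16 inference at a fixed input shape.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
    enabled_precisions={torch.float16},
)

# The compiled module is called like a regular PyTorch module.
x = torch.randn(1, 3, 224, 224, device="cuda")
with torch.no_grad():
    out = trt_model(x)
print(out.shape)  # torch.Size([1, 1000])
```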