
Triton Inference Server Tips & Tricks

Adam Wawrzyński

07 Jun 2023 · 6 minutes read


This is the second post regarding Triton Inference Server. In the first one, I described its features and use cases. If you’ve missed it and you don’t understand some of the concepts in this text, I encourage you to read it first. You can find it here.

This blog post describes tips and tricks, presents a cheat sheet of useful commands, and provides code samples for benchmarking different configurations of computer vision models. It is more technical than the first one: by reading it, you will learn specific commands for optimizing and configuring models served by Triton.

Note: in all the commands below I use containers in version 22.04 or 22.11, but you can choose whichever version works for you. These containers depend on specific CUDA driver versions, so the versions used in the commands below might not work on your setup.

Cheatsheets

  • Run Triton in docker container

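For example, assuming the model repository lives at /home/user/model_repository (the path and the container tag are placeholders, adjust them to your setup):

    docker run --rm --gpus=all \
      -p 8000:8000 -p 8001:8001 -p 8002:8002 \
      -v /home/user/model_repository:/home/user/model_repository \
      nvcr.io/nvidia/tritonserver:22.11-py3 \
      tritonserver --model-repository=/home/user/model_repository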

Important note: you have to pass the absolute path of the current directory and mount it at the same path in the container filesystem. Otherwise, Triton won’t be able to find the models in the model repository.

  • Use model-analyzer tool in docker container

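For example, using the SDK container, which ships with model-analyzer and perf_analyzer (paths are illustrative; mounting the docker socket lets model-analyzer launch Triton containers):

    docker run -it --rm --gpus=all --net=host \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /home/user/model_repository:/home/user/model_repository \
      nvcr.io/nvidia/tritonserver:22.11-py3-sdk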

  • Define global response cache

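For example, a 256 MB response cache shared by all models (flag available in the 22.xx releases):

    tritonserver --model-repository=/home/user/model_repository \
      --response-cache-byte-size=268435456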

  • Define model response cache in config.pbtxt:

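For example:

    # config.pbtxt
    response_cache {
      enable: true
    }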

  • Define the model repository poll interval

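For example, polling the repository for changes every 30 seconds:

    tritonserver --model-repository=/home/user/model_repository \
      --model-control-mode=poll \
      --repository-poll-secs=30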

  • Typical model-repository structure

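For example, for a single ONNX model (the model name and file are illustrative):

    model_repository/
    └── mobilenet/
        ├── config.pbtxt
        └── 1/
            └── model.onnx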

  • Define dynamic batching in config.pbtxt (here I use 100 microseconds as the maximum time to wait while aggregating a dynamic batch):

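For example:

    # config.pbtxt
    dynamic_batching {
      # wait up to 100 us to aggregate requests into a larger batch
      max_queue_delay_microseconds: 100
    }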

  • Define the model’s max batch size in config.pbtxt (here I use 8 as the maximum batch size):

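For example:

    # config.pbtxt
    max_batch_size: 8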

  • Model warmup

Add this section to config.pbtxt. The key field must match the defined input name, and the type and dims must match the model’s input as well.
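For example, for a model with a single FP32 input named "input" of shape (3, 224, 224) (adjust the key, type, and dims to your model):

    model_warmup [
      {
        name: "warmup_sample"
        batch_size: 1
        inputs: {
          key: "input"
          value: {
            data_type: TYPE_FP32
            dims: [ 3, 224, 224 ]
            random_data: true
          }
        }
      }
    ]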

  • TensorRT optimization (use half-precision model, define max workspace size)

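For example, in config.pbtxt (here enabling FP16 precision and a 1 GB workspace):

    optimization {
      execution_accelerators {
        gpu_execution_accelerator : [
          {
            name : "tensorrt"
            parameters { key: "precision_mode" value: "FP16" }
            parameters { key: "max_workspace_size_bytes" value: "1073741824" }
          }
        ]
      }
    }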

  • OpenVINO optimization (for models executed on CPU)

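For example, in the config.pbtxt of an ONNX model running on the CPU:

    optimization {
      execution_accelerators {
        cpu_execution_accelerator : [
          {
            name : "openvino"
          }
        ]
      }
    }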

  • TF JIT optimization

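For example, in the config.pbtxt of a TensorFlow model:

    optimization {
      graph { level: 1 }
    }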

  • TensorFlow XLA optimizations

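XLA is controlled through TensorFlow’s own environment variable, set when launching the server; a sketch (the exact flags depend on your model):

    TF_XLA_FLAGS="--tf_xla_auto_jit=2" \
      tritonserver --model-repository=/home/user/model_repository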

  • TensorFlow with automatic FP16 optimization

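For example, in the config.pbtxt of a TensorFlow model running on GPU:

    optimization {
      execution_accelerators {
        gpu_execution_accelerator : [
          {
            name : "auto_mixed_precision"
          }
        ]
      }
    }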

Tips & tricks - things I wish I knew before playing around with Triton

  • To use model-analyzer in a docker container, remember to mount a volume with the model repository at the same path in the container as in the host filesystem. Otherwise, model-analyzer will have problems finding the deployed models:

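For example (the path is illustrative; note that the volume is mounted at the same path on both sides):

    docker run -it --rm --gpus=all --net=host \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /home/user/model_repository:/home/user/model_repository \
      nvcr.io/nvidia/tritonserver:22.11-py3-sdk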

  • To export the model to TensorRT, use the docker image:

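For example (the mounted path is illustrative):

    docker run -it --rm --gpus=all \
      -v /home/user/models:/workspace/models \
      nvcr.io/nvidia/tensorrt:22.11-py3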

It’s very important to use the same version of the tensorrt container as the tritonserver container due to version validation. In other words, Triton won’t serve a model exported with a different version of TensorRT.

  • Basic command to convert an ONNX model to TensorRT:

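For example (file names are placeholders):

    trtexec --onnx=model.onnx --saveEngine=model.plan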

  • To export the model in half precision, you can add these flags:

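For example:

    trtexec --onnx=model.onnx --saveEngine=model_fp16.plan --fp16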

This optimization gave up to a 2x speedup in terms of latency and throughput on MobileNetV3 compared to the FP32 ONNX model.

  • To export the model in a quantized format (some of the weights will be stored as INT8), use these flags:

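For example (without a calibration dataset, trtexec uses dummy calibration scales, which is fine for performance testing but not for accuracy):

    trtexec --onnx=model.onnx --saveEngine=model_int8.plan --int8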

This optimization gave up to a 2-3x speedup in terms of latency and throughput on MobileNetV3 compared to the FP32 ONNX model.

  • To export the model with a dynamic batch size, you have to pass 3 shape parameters to the trtexec program. For a model trained on ImageNet, those parameters can look like this:

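For example, for an input tensor named "input" (the name must match your ONNX model, and the ONNX model itself must have a dynamic batch dimension):

    trtexec --onnx=model.onnx --saveEngine=model.plan \
      --minShapes=input:1x3x224x224 \
      --optShapes=input:8x3x224x224 \
      --maxShapes=input:16x3x224x224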

This optimization gave up to a 2x speedup in terms of latency and throughput on MobileNetV3 compared to the model in single-batch mode.

  • For higher throughput and lower latency in single-batch mode, you should use an explicit batch size.

Around 20% higher throughput and 16% lower inference time were observed for the model with an explicit batch size compared to the model with a dynamic batch size.

  • To export a model to ONNX (for example from PyTorch code) with a dynamic batch size, you have to specify the dynamic_axes parameter:

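A minimal sketch (the MobileNetV3 model and the tensor names input/output are illustrative):

    import torch
    import torchvision

    model = torchvision.models.mobilenet_v3_large(weights="DEFAULT").eval()
    dummy_input = torch.randn(1, 3, 224, 224)

    torch.onnx.export(
        model,
        dummy_input,
        "model.onnx",
        input_names=["input"],
        output_names=["output"],
        # mark the first dimension of both tensors as dynamic
        dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}},
    )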

You can define input and output names to gain better control over tensor names during inference. Otherwise, names based on the model’s layer names will be used.

You can take a look at the PyTorch example to gain a better understanding of the process.

  • config.pbtxt:

Set the model’s max_batch_size to a reasonable value greater than or equal to 1. If the model does not support dynamic batches, for example a model exported to TensorRT with an explicit batch size equal to 1, you should set this value to 1.

If max_batch_size is larger than 1, then the first dimension in dims is by default always -1. If you define dims: [3, 224, 224], Triton will prepend -1 at the first position of the list for you.

  • model-analyzer:

If you use a docker container to perform model analysis and tune Triton configuration parameters, remember to mount a volume with the model repository inside the container at the same path as it is on the host machine (see the docker run command above). Otherwise, perf_analyzer and model-analyzer will struggle to find the models.

You can check the documentation and all parameters of model-analyzer here.

  • Example command:

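For example (the model name and output path are illustrative):

    model-analyzer profile \
      --model-repository /home/user/model_repository \
      --profile-models mobilenet \
      --output-model-repository-path /home/user/output_models \
      --override-output-model-repository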

  • perf_analyzer:

It’s problematic to analyze models that operate on images if you can’t use random request data. In this case, you can prepare a file with example data to be sent in a request. It has a predefined format, with the data given as a flat list of values together with the shape of the desired tensor. In the example below I want to send an image of shape (640, 480, 3), but in the “content” field I have to specify the image as a flat list of values, in this case of shape (921600,).

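The JSON file has to contain every one of the 921,600 values, so it’s easiest to generate it with a short script; a sketch that writes the expected structure (the input name "image" is an assumption and must match config.pbtxt):

    import json
    import numpy as np

    # example HWC uint8 image
    image = np.random.randint(0, 255, size=(640, 480, 3), dtype=np.uint8)

    payload = {
        "data": [
            {
                "image": {
                    "content": image.flatten().tolist(),  # flat list of 921600 values
                    "shape": [640, 480, 3],
                }
            }
        ]
    }

    with open("data.json", "w") as f:
        json.dump(payload, f)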

  • The command with the example data stored in a file data.json will look like this:

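For example (the model name is illustrative):

    perf_analyzer -m vit --input-data data.json --concurrency-range 1:4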

  • Ensemble model

You have to create an empty version directory (e.g. 1/) for the ensemble model to meet Triton’s requirements.


For example, deployment of a ViT model as a Python model looks like this:
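A sketch of such a model.py, assuming the google/vit-base-patch16-224 checkpoint and the tensor names image and logits (both are assumptions and must match config.pbtxt):

    import numpy as np
    import torch
    import triton_python_backend_utils as pb_utils
    from transformers import ViTFeatureExtractor, ViTForImageClassification


    class TritonPythonModel:
        def initialize(self, args):
            # Called once when Triton loads the model: fetch weights from Hugging Face.
            self.device = "cuda" if torch.cuda.is_available() else "cpu"
            self.feature_extractor = ViTFeatureExtractor.from_pretrained(
                "google/vit-base-patch16-224"
            )
            self.model = (
                ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
                .to(self.device)
                .eval()
            )

        def execute(self, requests):
            # Called for every batch of requests; must return one response per request.
            responses = []
            for request in requests:
                # Raw HWC uint8 image(s), as declared in config.pbtxt.
                images = pb_utils.get_input_tensor_by_name(request, "image").as_numpy()
                inputs = self.feature_extractor(images=list(images), return_tensors="pt")
                with torch.no_grad():
                    logits = self.model(inputs["pixel_values"].to(self.device)).logits
                output = pb_utils.Tensor("logits", logits.cpu().numpy().astype(np.float32))
                responses.append(pb_utils.InferenceResponse(output_tensors=[output]))
            return responses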

The initialize and execute functions are required by Triton. In initialize, the model and the feature extractor are downloaded from Hugging Face. execute does some preprocessing, calls the model, and returns the results.

And its config.pbtxt:
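Something like this (the model name vit_python, the tensor names, and the shapes are assumptions):

    name: "vit_python"
    backend: "python"
    max_batch_size: 8

    input [
      {
        name: "image"
        data_type: TYPE_UINT8
        dims: [ -1, -1, 3 ]
      }
    ]
    output [
      {
        name: "logits"
        data_type: TYPE_FP32
        dims: [ 1000 ]
      }
    ]

    instance_group [
      { kind: KIND_GPU }
    ]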

And the directory looks like this:
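With the names assumed above:

    model_repository/
    └── vit_python/
        ├── config.pbtxt
        └── 1/
            └── model.py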

Below I present the ViT model deployed as an ensemble model with separate pre- and post-processing.

ensemble_model/config.pbtxt:
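A sketch, assuming the pipeline wires a preprocessing model into a core model named vit; all tensor names are illustrative:

    name: "ensemble_model"
    platform: "ensemble"
    max_batch_size: 8

    input [
      {
        name: "image"
        data_type: TYPE_UINT8
        dims: [ -1, -1, 3 ]
      }
    ]
    output [
      {
        name: "logits"
        data_type: TYPE_FP32
        dims: [ 1000 ]
      }
    ]

    ensemble_scheduling {
      step [
        {
          model_name: "preprocessing"
          model_version: -1
          input_map { key: "image" value: "image" }
          output_map { key: "pixel_values" value: "preprocessed_image" }
        },
        {
          model_name: "vit"
          model_version: -1
          input_map { key: "pixel_values" value: "preprocessed_image" }
          output_map { key: "logits" value: "logits" }
        }
      ]
    }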

preprocessing/1/model.py:
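A sketch of the preprocessing step; only the feature extractor lives here:

    import numpy as np
    import triton_python_backend_utils as pb_utils
    from transformers import ViTFeatureExtractor


    class TritonPythonModel:
        def initialize(self, args):
            self.feature_extractor = ViTFeatureExtractor.from_pretrained(
                "google/vit-base-patch16-224"
            )

        def execute(self, requests):
            responses = []
            for request in requests:
                images = pb_utils.get_input_tensor_by_name(request, "image").as_numpy()
                features = self.feature_extractor(images=list(images), return_tensors="np")
                output = pb_utils.Tensor(
                    "pixel_values", features["pixel_values"].astype(np.float32)
                )
                responses.append(pb_utils.InferenceResponse(output_tensors=[output]))
            return responses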

preprocessing/config.pbtxt:
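Assuming the tensor names used above:

    name: "preprocessing"
    backend: "python"
    max_batch_size: 8

    input [
      {
        name: "image"
        data_type: TYPE_UINT8
        dims: [ -1, -1, 3 ]
      }
    ]
    output [
      {
        name: "pixel_values"
        data_type: TYPE_FP32
        dims: [ 3, 224, 224 ]
      }
    ]

    instance_group [
      { kind: KIND_CPU }
    ]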

vit/config.pbtxt:
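Assuming the ViT backbone was exported to ONNX and is served by the ONNX Runtime backend (the export format is an assumption; a TensorRT plan would be configured analogously):

    name: "vit"
    platform: "onnxruntime_onnx"
    max_batch_size: 8

    input [
      {
        name: "pixel_values"
        data_type: TYPE_FP32
        dims: [ 3, 224, 224 ]
      }
    ]
    output [
      {
        name: "logits"
        data_type: TYPE_FP32
        dims: [ 1000 ]
      }
    ]

    instance_group [
      { kind: KIND_GPU }
    ]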

Directory structure looks like this:
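With the names assumed above (note the empty version directory of the ensemble):

    model_repository/
    ├── ensemble_model/
    │   ├── config.pbtxt
    │   └── 1/            # empty version directory
    ├── preprocessing/
    │   ├── config.pbtxt
    │   └── 1/
    │       └── model.py
    └── vit/
        ├── config.pbtxt
        └── 1/
            └── model.onnx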

[Chart: comparison of the two deployment configurations]

Take a look at the comparison of those two deployment configurations. You can clearly see that optimizing the core model speeds up the whole inference pipeline by up to around 30%.

Benchmark code

I’ve selected MobileNetV2 for the benchmark. Below you can find two tables with results for different batch sizes.
[Tables: latency and throughput of MobileNetV2 in different formats (TorchScript, ONNX FP32, TensorRT FP16/INT8) across batch sizes]

As one could expect, TorchScript without any optimizations is the worst in this comparison. From the tables above we can conclude that increasing the batch_size with dynamic batching translates to increased throughput. Converting the model to FP16 or INT8 gives a noticeable speedup, but it may reduce the model’s accuracy. What is interesting is that TensorRT FP16 has higher throughput than TensorRT FP16 optimized: the first model was exported as a half-precision model, whereas the second one was exported as a full-precision model and configured in Triton to use FP16. From these results we can conclude that if you want to use a half-precision model, it’s better to export it in that form rather than rely on Triton’s conversion. As always, results will differ on different hardware, and you should test all configurations on your machine before deployment.

Summary

This post presents a cheat sheet with useful Triton commands and configurations that you can refer back to when working with this tool. It also provides code and benchmarks for different configurations of a computer vision model, and walks through optimizing a complex model by separating the pre- and post-processing code from the model inference itself. I hope you learned something new today, and I encourage you to play around with Triton. I have a feeling you might like each other 🙂
