Move to atomic counter for uid generation #912

Merged (13 commits, Jan 20, 2025)

Changes from all commits
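The changes below are housekeeping around the PR's main change; the uid-generation code itself does not appear in this excerpt. For context, the idea named in the title — a uid source backed by an atomic counter — can be sketched as follows. This is a hypothetical illustration, not Savant's actual implementation:

```python
import itertools

class UidGenerator:
    """Monotonic uid source backed by an atomic-style counter.

    itertools.count.__next__ is implemented in C, so in CPython a single
    shared counter can be advanced from many threads without an explicit
    lock, which is the usual motivation for this pattern.
    """

    def __init__(self, start: int = 1):
        self._counter = itertools.count(start)

    def next_uid(self) -> int:
        # Each call returns the next integer exactly once, even under
        # concurrent callers.
        return next(self._counter)

gen = UidGenerator()
print([gen.next_uid() for _ in range(3)])  # → [1, 2, 3]
```

The class name and method are invented for the sketch; the real change may live in Rust (`savant_rs`) using `AtomicU64` or similar.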
20 changes: 10 additions & 10 deletions Makefile
@@ -29,7 +29,7 @@ publish-local-extra: build-extra
docker tag savant-deepstream$(PLATFORM_SUFFIX)-extra ghcr.io/insight-platform/savant-deepstream$(PLATFORM_SUFFIX)-extra

build:
-docker buildx build \
+docker build \
--platform $(PLATFORM) \
--target base \
--build-arg DEEPSTREAM_VERSION=$(DEEPSTREAM_VERSION) \
@@ -38,7 +38,7 @@ build:
-t savant-deepstream$(PLATFORM_SUFFIX) .

build-adapters-deepstream:
-docker buildx build \
+docker build \
--platform $(PLATFORM) \
--target adapters \
--build-arg DEEPSTREAM_VERSION=$(DEEPSTREAM_VERSION) \
@@ -47,14 +47,14 @@ build-adapters-deepstream:
-t savant-adapters-deepstream$(PLATFORM_SUFFIX) .

build-adapters-gstreamer:
-docker buildx build \
+docker build \
--platform $(PLATFORM) \
--build-arg SAVANT_RS_VERSION=$(SAVANT_RS_VERSION) \
-f docker/Dockerfile.adapters-gstreamer \
-t savant-adapters-gstreamer$(PLATFORM_SUFFIX) .

build-adapters-py:
-docker buildx build \
+docker build \
--platform $(PLATFORM) \
--build-arg SAVANT_RS_VERSION=$(SAVANT_RS_VERSION) \
-f docker/Dockerfile.adapters-py \
@@ -63,7 +63,7 @@ build-adapters-py:
build-adapters-all: build-adapters-py build-adapters-gstreamer build-adapters-deepstream

build-extra-packages:
-docker buildx build \
+docker build \
--platform $(PLATFORM) \
--target extra$(PLATFORM_SUFFIX)-builder \
--build-arg DEEPSTREAM_VERSION=$(DEEPSTREAM_VERSION) \
@@ -78,7 +78,7 @@ build-extra-packages:
savant-extra$(PLATFORM_SUFFIX)-builder

build-extra:
-docker buildx build \
+docker build \
--platform $(PLATFORM) \
--target deepstream$(PLATFORM_SUFFIX)-extra \
--build-arg DEEPSTREAM_VERSION=$(DEEPSTREAM_VERSION) \
@@ -89,7 +89,7 @@ build-extra:
build-opencv: build-opencv-amd64 build-opencv-arm64

build-opencv-amd64:
-docker buildx build \
+docker buildx build --load \
--platform linux/amd64 \
--build-arg DEEPSTREAM_VERSION=$(DEEPSTREAM_VERSION) \
-f docker/Dockerfile.deepstream-opencv \
@@ -100,7 +100,7 @@ build-opencv-amd64:
savant-opencv-builder

build-opencv-arm64:
-docker buildx build \
+docker buildx build --load \
--platform linux/arm64 \
--build-arg DEEPSTREAM_VERSION=$(DEEPSTREAM_VERSION) \
-f docker/Dockerfile.deepstream-opencv \
@@ -112,7 +112,7 @@ build-opencv-arm64:

build-docs:
rm -rf docs/source/reference/api/generated
-docker buildx build \
+docker build \
--target docs \
--build-arg DEEPSTREAM_VERSION=$(DEEPSTREAM_VERSION) \
--build-arg SAVANT_RS_VERSION=$(SAVANT_RS_VERSION) \
@@ -164,7 +164,7 @@ run-dev:
-v `pwd`/var:$(PROJECT_PATH)/var \
-v /tmp/zmq-sockets:/tmp/zmq-sockets \
--entrypoint /bin/bash \
-savant-deepstream$(PLATFORM_SUFFIX)-extra
+savant-deepstream$(PLATFORM_SUFFIX)

clean:
find . -type d -name __pycache__ -exec rm -rf {} \+
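A note on why two targets gain `--load` while the rest drop `buildx` (my reading of the change, not stated in the PR): when `docker buildx build` runs with a docker-container builder, the built image stays inside the builder's cache unless it is explicitly exported, so `--load` copies it into the local image store where `docker run` and `docker tag` can see it; plain `docker build` writes there directly. A minimal Makefile sketch with hypothetical target and image names:

```makefile
# Classic builder: the result is immediately visible to `docker run`.
build-example:
	docker build --platform $(PLATFORM) -t example-image .

# buildx with a docker-container driver keeps the image inside the builder;
# --load exports it into the local image store after the build completes.
build-example-buildx:
	docker buildx build --load --platform linux/arm64 -t example-image .
```

This matches the pattern above: only the cross-platform opencv builds keep `buildx` (with `--load`), while single-platform builds revert to `docker build`.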
2 changes: 1 addition & 1 deletion docker/Dockerfile.deepstream
@@ -1,4 +1,4 @@
-ARG DEEPSTREAM_VERSION
+ARG DEEPSTREAM_VERSION=7.0
ARG DEEPSTREAM_DEVEL_IMAGE=$DEEPSTREAM_VERSION-triton-multiarch
ARG DEEPSTREAM_BASE_IMAGE=$DEEPSTREAM_VERSION-samples-multiarch
FROM nvcr.io/nvidia/deepstream:$DEEPSTREAM_DEVEL_IMAGE AS builder
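Giving `ARG DEEPSTREAM_VERSION` a default means the Dockerfile now builds without any `--build-arg`, while an explicit value still overrides it. A sketch of how a pre-`FROM` ARG behaves (illustrative fragment, not the full Dockerfile):

```dockerfile
# Defaults to 7.0 when no --build-arg is passed. An ARG declared before the
# first FROM is only in scope for FROM lines; a stage must re-declare it
# (a bare `ARG DEEPSTREAM_VERSION`) to use the value in RUN/ENV instructions.
ARG DEEPSTREAM_VERSION=7.0
FROM nvcr.io/nvidia/deepstream:$DEEPSTREAM_VERSION-samples-multiarch
```

Overriding at build time would look like `docker build --build-arg DEEPSTREAM_VERSION=6.4 .`.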
2 changes: 1 addition & 1 deletion docs/source/advanced_topics/0_dead_stream_eviction.rst
@@ -22,6 +22,6 @@ Take a look at the ``default.yml`` for details:

.. literalinclude:: ../../../savant/config/default.yml
:language: YAML
-:lines: 195-228
+:lines: 193-226

You can override only required parameters in your module YAML configuration file. Also, take a look at corresponding environment variables helping to configure the parameters without specifying them in the module config.
8 changes: 3 additions & 5 deletions docs/source/advanced_topics/14_jetson_dla.rst
@@ -10,14 +10,12 @@ To use DLA, you need to specify the target device in the YAML `config`. Use the
.. code-block:: yaml

- element: nvinfer@detector
-name: LPDNet
+name: yolo
model:
-remote:
-url: https://api.ngc.nvidia.com/v2/models/nvidia/tao/lpdnet/versions/pruned_v2.2/zip
format: onnx
-model_file: LPDNet_usa_pruned_tao5.onnx
+model_file: yolo.onnx
precision: int8
-int8_calib_file: usa_cal_8.6.1.bin
+int8_calib_file: calib.bin
batch_size: 16
enable_dla: true # allocate this model on DLA
use_dla_core: 1 # use DLA core 1
4 changes: 2 additions & 2 deletions docs/source/savant_101/12_module_definition.rst
@@ -18,7 +18,7 @@ Modules are executed within specially prepared docker containers. If a module do

docker pull ghcr.io/insight-platform/savant-deepstream:latest

-* Deepstream 6.4 capable Nvidia edge devices (Jetson AGX Orin, Orin NX, Orin Nano)
+* Deepstream 7.0 capable Nvidia edge devices (Jetson AGX Orin, Orin NX, Orin Nano)

.. code-block:: bash

@@ -54,7 +54,7 @@ The following parameters are defined for a Savant module by default:

.. literalinclude:: ../../../savant/config/default.yml
:language: YAML
-:lines: 1-195
+:lines: 1-193

.. note::

2 changes: 1 addition & 1 deletion docs/source/savant_101/12_pipeline.rst
@@ -17,7 +17,7 @@ Default module configuration file already defines the :py:attr:`~savant.config.s

.. literalinclude:: ../../../savant/config/default.yml
:language: YAML
-:lines: 195-
+:lines: 193-

It is possible to redefine them, but the encouraged operation mode assumes the use of ZeroMQ source and sink.

4 changes: 2 additions & 2 deletions docs/source/savantdev/0_configure_doc_env.rst
@@ -15,8 +15,8 @@ We recommend pulling Nvidia docker containers separately because the Nvidia regi

.. code-block:: bash

-docker pull nvcr.io/nvidia/deepstream:6.4-samples-multiarch
-docker pull nvcr.io/nvidia/deepstream:6.4-triton-multiarch
+docker pull nvcr.io/nvidia/deepstream:7.0-samples-multiarch
+docker pull nvcr.io/nvidia/deepstream:7.0-triton-multiarch

Build The Dockerized Sphinx Runtime
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3 changes: 0 additions & 3 deletions samples/conditional_video_processing/demo.yml
@@ -52,9 +52,6 @@ pipeline:
checksum_url: s3://savant-data/models/peoplenet/peoplenet_pruned_v2.0.md5
parameters:
endpoint: https://eu-central-1.linodeobjects.com
-# or get the model directly from NGC API
-# peoplenet v2.0
-# url: "https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.0/zip"

# model file name, without location
model_file: resnet34_peoplenet_pruned.etlt # v2.0 Accuracy: 84.3 Size 20.9 MB
3 changes: 0 additions & 3 deletions samples/kafka_redis_adapter/demo.yml
@@ -34,9 +34,6 @@ pipeline:
checksum_url: s3://savant-data/models/peoplenet/peoplenet_pruned_v2.0.md5
parameters:
endpoint: https://eu-central-1.linodeobjects.com
-# or get the model directly from NGC API
-# peoplenet v2.0
-# url: "https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.0/zip"

# model file name, without location
model_file: resnet34_peoplenet_pruned.etlt # v2.0 Accuracy: 84.3 Size 20.9 MB
2 changes: 1 addition & 1 deletion samples/license_plate_recognition/README.md
@@ -8,7 +8,7 @@ Preview:

Tested on platforms:

-- Nvidia Turing
+- Nvidia Turing, Ampere
- Nvidia Jetson Orin family

Demonstrated adapters:
7 changes: 4 additions & 3 deletions samples/license_plate_recognition/module.yml
@@ -73,11 +73,12 @@ pipeline:
name: LPDNet
model:
remote:
-url: https://api.ngc.nvidia.com/v2/models/nvidia/tao/lpdnet/versions/pruned_v2.2/zip
+url: s3://savant-data/models/lpdnet/lpdnet_v2.2.zip
+checksum_url: s3://savant-data/models/lpdnet/lpdnet_v2.2.md5
+parameters:
+  endpoint: https://eu-central-1.linodeobjects.com
format: onnx
model_file: LPDNet_usa_pruned_tao5.onnx
precision: int8
int8_calib_file: usa_cal_8.6.1.bin
batch_size: 16
input:
object: yolov8n.Car
3 changes: 0 additions & 3 deletions samples/multiple_rtsp/demo.yml
@@ -34,9 +34,6 @@ pipeline:
checksum_url: s3://savant-data/models/peoplenet/peoplenet_pruned_v2.0.md5
parameters:
endpoint: https://eu-central-1.linodeobjects.com
-# or get the model directly from NGC API
-# peoplenet v2.0
-# url: "https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.0/zip"

# model file name, without location
model_file: resnet34_peoplenet_pruned.etlt # v2.0 Accuracy: 84.3 Size 20.9 MB
13 changes: 0 additions & 13 deletions samples/nvidia_car_classification/flavors/README.md
@@ -100,16 +100,3 @@ Run this sample config with
```bash
python scripts/run_module.py samples/nvidia_car_classification/module.yml
```

-## Module configuration using etlt models
-
-It is also simple enough to swap the models in the pipeline for other ones. For example,
-replace the sample models from the DeepStream SDK with TAO models available from [Nvidia NGC](https://catalog.ngc.nvidia.com/).
-
-Config file that uses etlt models for this sample is provided in the [module-etlt-config.yml](module-etlt-config.yml).
-
-Run this sample config with
-
-```bash
-python scripts/run_module.py samples/nvidia_car_classification/flavors/module-etlt-config.yml
-```
125 changes: 0 additions & 125 deletions samples/nvidia_car_classification/flavors/module-etlt-config.yml

This file was deleted.

3 changes: 0 additions & 3 deletions samples/peoplenet_detector/module.yml
@@ -95,9 +95,6 @@ pipeline:
checksum_url: s3://savant-data/models/peoplenet/peoplenet_pruned_v2.0.md5
parameters:
endpoint: https://eu-central-1.linodeobjects.com
-# or get the model directly from NGC API
-# peoplenet v2.0
-# url: "https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.0/zip"

# model file name, without location
model_file: resnet34_peoplenet_pruned.etlt # v2.0 Accuracy: 84.3 Size 20.9 MB
3 changes: 0 additions & 3 deletions samples/template/src/module/module.yml
@@ -62,9 +62,6 @@ pipeline:
checksum_url: s3://savant-data/models/peoplenet/peoplenet_pruned_v2.0.md5
parameters:
endpoint: https://eu-central-1.linodeobjects.com
-# or get the model directly from NGC API
-# peoplenet v2.0
-# url: "https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.0/zip"

# model file name, without location
model_file: resnet34_peoplenet_pruned.etlt # v2.0 Accuracy: 84.3 Size 20.9 MB
4 changes: 2 additions & 2 deletions savant/config/default.yml
@@ -5,7 +5,7 @@ name: ${oc.env:MODULE_NAME}
parameters:
# Logging specification string in the rust env_logger's format
# https://docs.rs/env_logger/latest/env_logger/
-# The string is parsed and Python logging is setup accordingly
+# The string is parsed and Python logging is set up accordingly
# e.g. "info", or "info,insight::savant::target=debug"
log_level: ${oc.env:LOGLEVEL, 'INFO'}

@@ -186,7 +186,7 @@ parameters:
# and reloaded in case changes are detected
dev_mode: ${oc.decode:${oc.env:DEV_MODE, False}}

-# Shutdown authorization key. If set, module will shutdown when it receives
+# Shutdown authorization key. If set, module will shut down when it receives
# a Shutdown message with this key.
# shutdown_auth: "shutdown-auth"
