diff --git a/launch/tier4_perception_launch/launch/object_recognition/detection/lidar_based_detection.launch.xml b/launch/tier4_perception_launch/launch/object_recognition/detection/lidar_based_detection.launch.xml
index 23e0297dc5e44..3b6b9ba652a24 100644
--- a/launch/tier4_perception_launch/launch/object_recognition/detection/lidar_based_detection.launch.xml
+++ b/launch/tier4_perception_launch/launch/object_recognition/detection/lidar_based_detection.launch.xml
@@ -10,7 +10,7 @@
-
+
diff --git a/launch/tier4_perception_launch/launch/perception.launch.xml b/launch/tier4_perception_launch/launch/perception.launch.xml
index 36d43bab74894..b0219376e9625 100644
--- a/launch/tier4_perception_launch/launch/perception.launch.xml
+++ b/launch/tier4_perception_launch/launch/perception.launch.xml
@@ -25,11 +25,12 @@
-
+
+
@@ -78,11 +79,11 @@
-
+
+
-
+
diff --git a/perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/centerpoint_vs_centerpoint-tiny.launch.xml b/perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/centerpoint_vs_centerpoint-tiny.launch.xml
index 13fd386238eda..b9c056cfb5686 100644
--- a/perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/centerpoint_vs_centerpoint-tiny.launch.xml
+++ b/perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/centerpoint_vs_centerpoint-tiny.launch.xml
@@ -5,6 +5,7 @@
+
@@ -21,7 +22,7 @@
-
+
@@ -35,7 +36,7 @@
-
+
diff --git a/perception/lidar_centerpoint/launch/lidar_centerpoint.launch.xml b/perception/lidar_centerpoint/launch/lidar_centerpoint.launch.xml
index d552cb702b980..a7f181ab78ac6 100644
--- a/perception/lidar_centerpoint/launch/lidar_centerpoint.launch.xml
+++ b/perception/lidar_centerpoint/launch/lidar_centerpoint.launch.xml
@@ -2,8 +2,9 @@
+
-
+
diff --git a/perception/lidar_centerpoint/launch/single_inference_lidar_centerpoint.launch.xml b/perception/lidar_centerpoint/launch/single_inference_lidar_centerpoint.launch.xml
index 0f6923d5e6414..491abfbad7764 100644
--- a/perception/lidar_centerpoint/launch/single_inference_lidar_centerpoint.launch.xml
+++ b/perception/lidar_centerpoint/launch/single_inference_lidar_centerpoint.launch.xml
@@ -1,7 +1,8 @@
+
-
+
diff --git a/perception/tensorrt_yolo/README.md b/perception/tensorrt_yolo/README.md
index afa9209c43bb2..58d4af0dfa83d 100644
--- a/perception/tensorrt_yolo/README.md
+++ b/perception/tensorrt_yolo/README.md
@@ -55,6 +55,7 @@ Jocher, G., et al. (2021). ultralytics/yolov5: v6.0 - YOLOv5n 'Nano' models, Rob
| Name | Type | Default Value | Description |
| ----------------------- | ------ | ------------- | ------------------------------------------------------------------ |
+| `data_path` | string | "" | Packages data and artifacts directory path |
| `onnx_file` | string | "" | The onnx file name for yolo model |
| `engine_file` | string | "" | The tensorrt engine file name for yolo model |
| `label_file` | string | "" | The label file with label names for detected objects written on it |
@@ -71,7 +72,7 @@ This package includes multiple licenses.
All YOLO ONNX models are converted from the officially trained model. If you need information about training datasets and conditions, please refer to the official repositories.
-All models are downloaded automatically when building. When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.
+All models are downloaded during environment preparation by Ansible (as mentioned in [installation](https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/source-installation/)). It is also possible to download them manually, see [Manual downloading of artifacts](https://github.com/autowarefoundation/autoware/tree/main/ansible/roles/artifacts). When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.
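As a minimal sketch of overriding the artifact directory at launch time (the `data_path` argument is documented in the table above; the include path comes from this package, while the override value `/opt/autoware_data` is purely illustrative):

```xml
<!-- Sketch: include the yolo launch file with a non-default artifact directory. -->
<!-- /opt/autoware_data is illustrative; the default used elsewhere in this change is $(env HOME)/autoware_data. -->
<include file="$(find-pkg-share tensorrt_yolo)/launch/tensorrt_yolo.launch.xml">
  <arg name="data_path" value="/opt/autoware_data"/>
</include>
```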
### YOLOv3
diff --git a/perception/tensorrt_yolo/launch/tensorrt_yolo.launch.xml b/perception/tensorrt_yolo/launch/tensorrt_yolo.launch.xml
index a548939f2cebe..b2656de0ab72e 100755
--- a/perception/tensorrt_yolo/launch/tensorrt_yolo.launch.xml
+++ b/perception/tensorrt_yolo/launch/tensorrt_yolo.launch.xml
@@ -3,7 +3,8 @@
-
+
+
@@ -11,11 +12,11 @@
-
+
-
+
-
+
diff --git a/perception/tensorrt_yolox/README.md b/perception/tensorrt_yolox/README.md
index 3c253a0b68489..ca407b1ff6811 100644
--- a/perception/tensorrt_yolox/README.md
+++ b/perception/tensorrt_yolox/README.md
@@ -71,7 +71,7 @@ those are labeled as `UNKNOWN`, while detected rectangles are drawn in the visua
## Onnx model
-A sample model (named `yolox-tiny.onnx`) is downloaded automatically during the build process.
+A sample model (named `yolox-tiny.onnx`) is downloaded by the Ansible script during environment preparation. If it is not, please follow [Manual downloading of artifacts](https://github.com/autowarefoundation/autoware/tree/main/ansible/roles/artifacts).
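As a hedged sketch of the launch-side wiring this implies (only `data_path`, its default, and the `yolox-tiny.onnx` file name are taken from this change set; the `model_path` argument name and the `tensorrt_yolox` subdirectory are assumptions):

```xml
<!-- Sketch: declare the shared artifact directory and derive the sample model location from it. -->
<arg name="data_path" default="$(env HOME)/autoware_data" description="packages data and artifacts directory path"/>
<!-- model_path and the tensorrt_yolox subdirectory are illustrative assumptions. -->
<arg name="model_path" default="$(var data_path)/tensorrt_yolox/yolox-tiny.onnx"/>
```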
To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference,
`EfficientNMS_TRT` module is attached after the ordinal YOLOX (tiny) network.
The `EfficientNMS_TRT` module contains fixed values for `score_threshold` and `nms_threshold` in it,
@@ -146,7 +146,7 @@ Please refer [the official document](https://github.com/Megvii-BaseDetection/YOL
## Label file
-A sample label file (named `label.txt`)is also downloaded automatically during the build process
+A sample label file (named `label.txt`) is also downloaded automatically during the environment preparation process
(**NOTE:** This file is incompatible with models that output labels for the COCO dataset (e.g., models from the official YOLOX repository)).
This file represents the correspondence between class index (integer outputted from YOLOX network) and
diff --git a/perception/tensorrt_yolox/launch/yolox_s_plus_opt.launch.xml b/perception/tensorrt_yolox/launch/yolox_s_plus_opt.launch.xml
index cb89f5829c65d..3f8d7897ab5d3 100644
--- a/perception/tensorrt_yolox/launch/yolox_s_plus_opt.launch.xml
+++ b/perception/tensorrt_yolox/launch/yolox_s_plus_opt.launch.xml
@@ -4,7 +4,8 @@
-
+
+
diff --git a/perception/tensorrt_yolox/launch/yolox_tiny.launch.xml b/perception/tensorrt_yolox/launch/yolox_tiny.launch.xml
index d8c67e39e0b8a..2f08031ea159f 100644
--- a/perception/tensorrt_yolox/launch/yolox_tiny.launch.xml
+++ b/perception/tensorrt_yolox/launch/yolox_tiny.launch.xml
@@ -3,7 +3,8 @@
-
+
+
diff --git a/perception/traffic_light_classifier/README.md b/perception/traffic_light_classifier/README.md
index 3d15af0cf7805..7df0c5466695b 100644
--- a/perception/traffic_light_classifier/README.md
+++ b/perception/traffic_light_classifier/README.md
@@ -55,6 +55,7 @@ These colors and shapes are assigned to the message as follows:
| Name | Type | Description |
| ----------------- | ---- | ------------------------------------------- |
| `classifier_type` | int | if the value is `1`, cnn_classifier is used |
+| `data_path` | str | packages data and artifacts directory path |
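A minimal usage sketch based only on the parameters documented above (`classifier_type` `1` selects the cnn_classifier; the `data_path` value shown is the default used elsewhere in this change set):

```xml
<!-- Sketch: include the classifier launch file with the cnn classifier and an explicit artifact directory. -->
<include file="$(find-pkg-share traffic_light_classifier)/launch/traffic_light_classifier.launch.xml">
  <arg name="classifier_type" value="1"/>
  <arg name="data_path" value="$(env HOME)/autoware_data"/>
</include>
```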
### Core Parameters
diff --git a/perception/traffic_light_classifier/launch/traffic_light_classifier.launch.xml b/perception/traffic_light_classifier/launch/traffic_light_classifier.launch.xml
index d4794443d95d9..10aa04cc585af 100644
--- a/perception/traffic_light_classifier/launch/traffic_light_classifier.launch.xml
+++ b/perception/traffic_light_classifier/launch/traffic_light_classifier.launch.xml
@@ -2,8 +2,9 @@
-
-
+
+
+
diff --git a/perception/traffic_light_fine_detector/README.md b/perception/traffic_light_fine_detector/README.md
index 1ed6debfeae91..dcc89c76387c6 100644
--- a/perception/traffic_light_fine_detector/README.md
+++ b/perception/traffic_light_fine_detector/README.md
@@ -50,12 +50,13 @@ Based on the camera image and the global ROI array detected by `map_based_detect
### Node Parameters
-| Name | Type | Default Value | Description |
-| -------------------------- | ------ | ------------- | ------------------------------------------------------------------ |
-| `fine_detector_model_path` | string | "" | The onnx file name for yolo model |
-| `fine_detector_label_path` | string | "" | The label file with label names for detected objects written on it |
-| `fine_detector_precision` | string | "fp32" | The inference mode: "fp32", "fp16" |
-| `approximate_sync` | bool | false | Flag for whether to ues approximate sync policy |
+| Name | Type | Default Value | Description |
+| -------------------------- | ------ | --------------------------- | ------------------------------------------------------------------ |
+| `data_path` | string | "$(env HOME)/autoware_data" | packages data and artifacts directory path |
+| `fine_detector_model_path` | string | "" | The onnx file name for yolo model |
+| `fine_detector_label_path` | string | "" | The label file with label names for detected objects written on it |
+| `fine_detector_precision` | string | "fp32" | The inference mode: "fp32", "fp16" |
+| `approximate_sync` | bool | false | Flag for whether to use approximate sync policy |
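A sketch of how `fine_detector_model_path` could be composed from `data_path` (the `data_path` default comes from the table above; the subdirectory and onnx file name are illustrative assumptions, not taken from this diff):

```xml
<!-- Sketch: resolve the fine detector model under the shared artifact directory. -->
<arg name="data_path" default="$(env HOME)/autoware_data" description="packages data and artifacts directory path"/>
<!-- The subdirectory and file name below are illustrative. -->
<arg name="fine_detector_model_path" default="$(var data_path)/traffic_light_fine_detector/tlr_yolox_s.onnx"/>
```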
## Assumptions / Known limits
diff --git a/perception/traffic_light_fine_detector/launch/traffic_light_fine_detector.launch.xml b/perception/traffic_light_fine_detector/launch/traffic_light_fine_detector.launch.xml
index 5ce7840c28fd6..6e32d3410c260 100644
--- a/perception/traffic_light_fine_detector/launch/traffic_light_fine_detector.launch.xml
+++ b/perception/traffic_light_fine_detector/launch/traffic_light_fine_detector.launch.xml
@@ -1,6 +1,7 @@
-
-
+
+
+
diff --git a/perception/traffic_light_ssd_fine_detector/README.md b/perception/traffic_light_ssd_fine_detector/README.md
index 1dd05665709f5..4dbda8421d85d 100644
--- a/perception/traffic_light_ssd_fine_detector/README.md
+++ b/perception/traffic_light_ssd_fine_detector/README.md
@@ -122,15 +122,16 @@ Based on the camera image and the global ROI array detected by `map_based_detect
### Node Parameters
-| Name | Type | Default Value | Description |
-| ------------------ | ------ | ------------------------------ | -------------------------------------------------------------------- |
-| `onnx_file` | string | "./data/mb2-ssd-lite-tlr.onnx" | The onnx file name for yolo model |
-| `label_file` | string | "./data/voc_labels_tl.txt" | The label file with label names for detected objects written on it |
-| `dnn_header_type` | string | "pytorch" | Name of DNN trained toolbox: "pytorch" or "mmdetection" |
-| `mode` | string | "FP32" | The inference mode: "FP32", "FP16", "INT8" |
-| `max_batch_size` | int | 8 | The size of the batch processed at one time by inference by TensorRT |
-| `approximate_sync` | bool | false | Flag for whether to ues approximate sync policy |
-| `build_only` | bool | false | shutdown node after TensorRT engine file is built |
+| Name | Type | Default Value | Description |
+| ------------------ | ------ | ------------------------------------------------------------------------ | -------------------------------------------------------------------- |
+| `data_path` | string | "$(env HOME)/autoware_data" | packages data and artifacts directory path |
+| `onnx_file` | string | "$(var data_path)/traffic_light_ssd_fine_detector/mb2-ssd-lite-tlr.onnx" | The onnx file name for yolo model |
+| `label_file` | string | "$(var data_path)/traffic_light_ssd_fine_detector/voc_labels_tl.txt" | The label file with label names for detected objects written on it |
+| `dnn_header_type` | string | "pytorch" | Name of DNN trained toolbox: "pytorch" or "mmdetection" |
+| `mode` | string | "FP32" | The inference mode: "FP32", "FP16", "INT8" |
+| `max_batch_size` | int | 8 | The size of the batch processed at one time by inference by TensorRT |
+| `approximate_sync` | bool | false | Flag for whether to use approximate sync policy |
+| `build_only` | bool | false | shutdown node after TensorRT engine file is built |
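Given the defaults documented above, the corresponding launch argument declarations would plausibly look like the following sketch (all values are taken from the table; only the `<arg>` wrapping is assumed):

```xml
<!-- Sketch built from the documented defaults: model and label files resolved under data_path. -->
<arg name="data_path" default="$(env HOME)/autoware_data"/>
<arg name="onnx_file" default="$(var data_path)/traffic_light_ssd_fine_detector/mb2-ssd-lite-tlr.onnx"/>
<arg name="label_file" default="$(var data_path)/traffic_light_ssd_fine_detector/voc_labels_tl.txt"/>
```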
## Assumptions / Known limits
diff --git a/perception/traffic_light_ssd_fine_detector/launch/traffic_light_ssd_fine_detector.launch.xml b/perception/traffic_light_ssd_fine_detector/launch/traffic_light_ssd_fine_detector.launch.xml
index a4d61b774652a..714c4d288b603 100644
--- a/perception/traffic_light_ssd_fine_detector/launch/traffic_light_ssd_fine_detector.launch.xml
+++ b/perception/traffic_light_ssd_fine_detector/launch/traffic_light_ssd_fine_detector.launch.xml
@@ -1,6 +1,7 @@
-
-
+
+
+